Expanded Blueprint - Best Practice Reference Architecture for AI-Enabled Workloads
OpenInfra Foundation via YouTube
Overview
Learn how to build a comprehensive reference architecture for AI-enabled workloads using open-source technologies in this 35-minute conference talk. Discover how to combine Linux, OpenStack, and Kubernetes to create an agile, scalable, and resilient infrastructure foundation for modern distributed systems and application development. Explore the integration of locally running Large Language Models (LLMs) through tools like Ollama, deployed in VMs or Kubernetes pods, to maintain data sovereignty while reducing costs and enabling domain-specific AI customization. Understand how this open-source stack eliminates vendor lock-in by building on open interfaces rather than proprietary platforms, giving organizations the flexibility to choose distributions, cloud providers, orchestrators, and LLMs. Examine practical implementation strategies for on-premise generative AI, intelligent automation, and data-driven decision-making, with the Open Telekom Cloud serving as a real-world example of this architectural approach.
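To make the "Ollama in a Kubernetes pod" idea concrete, the sketch below shows one way such a deployment could look. This is a minimal, hypothetical example and not taken from the talk: the image tag, resource layout, and PersistentVolumeClaim name (`ollama-models`) are assumptions; only the container port (11434) reflects Ollama's documented default API port.

```yaml
# Hypothetical sketch: running Ollama as a Kubernetes Deployment
# so applications in the cluster can reach a locally hosted LLM.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest   # assumed tag; pin a version in practice
          ports:
            - containerPort: 11434      # Ollama's default API port
          volumeMounts:
            - name: models
              mountPath: /root/.ollama  # model cache survives pod restarts
      volumes:
        - name: models
          persistentVolumeClaim:
            claimName: ollama-models    # assumed pre-existing PVC
---
# A ClusterIP Service gives in-cluster workloads a stable endpoint
# (http://ollama:11434) without exposing the model outside the cluster,
# in line with the talk's data-sovereignty argument.
apiVersion: v1
kind: Service
metadata:
  name: ollama
spec:
  selector:
    app: ollama
  ports:
    - port: 11434
      targetPort: 11434
```

Because the model runs entirely inside infrastructure the organization controls (whether an OpenStack VM or a Kubernetes pod), swapping the LLM or the orchestrator only means changing this manifest, not rewriting applications against a proprietary API.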
Syllabus
Expanded Blueprint: Best Practice Reference Architecture for AI-enabled Workloads
Taught by
OpenInfra Foundation