Inferencing LLMs in Production with Kubernetes and KubeFlow
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Learn how to deploy Large Language Models (LLMs) reliably, cost-effectively, and at scale in production environments using Kubernetes and KubeFlow in this 20-minute conference talk. Discover the challenges of operationalizing LLM inference and explore practical solutions for building resilient, scalable, and observable GenAI infrastructure. Walk through the use of open-source, cloud-native tools to create production-ready LLM deployment pipelines, and gain insights into best practices for managing computational resources, ensuring reliability, and maintaining observability when running LLMs in Kubernetes clusters. Explore how KubeFlow can streamline the machine learning workflow for LLM inference, from model serving to monitoring and scaling, and understand the architectural considerations and operational strategies needed to successfully deploy and maintain LLM services in cloud-native environments.
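To make the serving step concrete, here is a minimal sketch of what deploying a model through KServe (the model-serving component used in the KubeFlow ecosystem) can look like. This manifest is illustrative only and is not taken from the talk: the model name, storage URI, and runtime choice are hypothetical, and real deployments typically add autoscaling, observability, and resource-tuning settings on top.

```yaml
# Hypothetical KServe InferenceService for an LLM.
# Assumes KServe is installed in the cluster and a GPU node pool exists.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llm-inference            # illustrative name
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface        # assumed serving runtime
      storageUri: "gs://example-bucket/models/my-llm"  # hypothetical model location
      resources:
        limits:
          nvidia.com/gpu: "1"    # pin inference to one GPU
```

Applied with `kubectl apply -f`, a manifest like this asks KServe to pull the model from storage and expose it behind a scalable inference endpoint, which is the kind of production-ready pipeline the talk describes.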
Syllabus
Inferencing LLMs in production with Kubernetes and KubeFlow - Chamod Perera & Suresh Peiris
Taught by
CNCF [Cloud Native Computing Foundation]