Inferencing LLMs in Production with Kubernetes and KubeFlow
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Learn how to deploy Large Language Models (LLMs) reliably, cost-effectively, and at scale in production environments using Kubernetes and KubeFlow in this 20-minute conference talk. Discover the challenges of operationalizing LLM inference and practical solutions for building resilient, scalable, and observable GenAI infrastructure. Walk through using open-source, cloud-native tools to create production-ready LLM deployment pipelines, and learn best practices for managing computational resources when running LLMs in Kubernetes clusters. Explore how KubeFlow streamlines the machine learning workflow for LLM inference, from model serving to monitoring and scaling, and understand the architectural considerations and operational strategies needed to deploy and maintain LLM services in cloud-native environments.
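The talk's own demo code is not reproduced in this listing. As a minimal sketch of the kind of deployment pipeline it describes, the example below creates a KServe InferenceService (KServe is the model-serving component commonly used with KubeFlow) through the official Kubernetes Python client. The model name, storage URI, namespace, and resource figures are illustrative assumptions, not details from the talk, and the sketch assumes a cluster that already has KServe installed and kubectl credentials available locally.

```python
# Minimal sketch: deploy an LLM behind a KServe InferenceService using the
# official Kubernetes Python client. Assumes KServe is installed in the
# cluster and credentials are in ~/.kube/config. The model name and
# storageUri are placeholders, not from the talk.
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "llm-demo", "namespace": "default"},
    "spec": {
        "predictor": {
            # Replica bounds let the service scale with request load.
            "minReplicas": 1,
            "maxReplicas": 3,
            "model": {
                "modelFormat": {"name": "huggingface"},
                "storageUri": "hf://example-org/example-llm",  # placeholder
                "resources": {
                    "requests": {"cpu": "2", "memory": "8Gi", "nvidia.com/gpu": "1"},
                    "limits": {"cpu": "4", "memory": "16Gi", "nvidia.com/gpu": "1"},
                },
            },
        }
    },
}

# InferenceService is a custom resource, so it is created via the
# CustomObjectsApi rather than a typed built-in client.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="default",
    plural="inferenceservices",
    body=inference_service,
)
print("Submitted; check status with: kubectl get inferenceservice llm-demo")
```

Once the service reports Ready, KServe exposes an HTTP inference endpoint for the model, and standard Kubernetes tooling (metrics, autoscaling, logs) covers the monitoring and scaling concerns the talk discusses.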
Syllabus
Inferencing LLMs in production with Kubernetes and KubeFlow - Chamod Perera & Suresh Peiris
Taught by
CNCF [Cloud Native Computing Foundation]