Inferencing LLMs in Production with Kubernetes and KubeFlow

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Learn how to deploy Large Language Models (LLMs) reliably, cost-effectively, and at scale in production using Kubernetes and KubeFlow in this 20-minute conference talk. Discover the challenges of operationalizing LLM inference and practical solutions for building resilient, scalable, and observable GenAI infrastructure. Walk through using open-source, cloud-native tools to build production-ready LLM deployment pipelines. Gain insights into best practices for managing computational resources, ensuring reliability, and maintaining observability when running LLMs in Kubernetes clusters. See how KubeFlow can streamline the machine learning workflow for LLM inference, from model serving to monitoring and scaling. Understand the architectural considerations and operational strategies needed to deploy and maintain LLM services in cloud-native environments.
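For a concrete flavor of what such a deployment can look like, here is a minimal sketch of serving an LLM with KServe (the model-serving layer commonly paired with Kubeflow) through its Python SDK. This is an illustration, not code from the talk: the service name, storage URI, model format, and resource figures below are placeholder assumptions.

```python
# Minimal sketch (not from the talk): deploying an LLM behind a KServe
# InferenceService via the KServe Python SDK. Name, storage URI, and
# resource numbers are placeholder assumptions.
from kubernetes import client as k8s
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1ModelFormat,
    V1beta1ModelSpec,
    V1beta1PredictorSpec,
)

# Predictor: a Hugging Face-format model pulled from object storage,
# pinned to one GPU, with KServe autoscaling between 1 and 3 replicas.
predictor = V1beta1PredictorSpec(
    min_replicas=1,  # keep one warm replica to avoid cold-start latency
    max_replicas=3,  # allow scale-out under load
    model=V1beta1ModelSpec(
        model_format=V1beta1ModelFormat(name="huggingface"),
        storage_uri="s3://models/my-llm",  # hypothetical bucket/path
        resources=k8s.V1ResourceRequirements(
            requests={"cpu": "4", "memory": "16Gi", "nvidia.com/gpu": "1"},
            limits={"nvidia.com/gpu": "1"},
        ),
    ),
)

isvc = V1beta1InferenceService(
    api_version="serving.kserve.io/v1beta1",
    kind="InferenceService",
    metadata=k8s.V1ObjectMeta(name="llm-demo", namespace="default"),
    spec=V1beta1InferenceServiceSpec(predictor=predictor),
)

# Submit to whatever cluster the current kubeconfig points at; KServe
# then provisions the pods, an HTTP endpoint, and the autoscaler.
KServeClient().create(isvc)
```

Once applied, KServe handles pod provisioning, request routing, and replica autoscaling between the configured bounds, which covers much of the scaling and reliability surface the talk addresses.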

Syllabus

Inferencing LLMs in production with Kubernetes and KubeFlow - Chamod Perera & Suresh Peiris

Taught by

CNCF [Cloud Native Computing Foundation]

