
Cloud-Native Model Serving - vLLM's Lifecycle in Kubernetes

DevConf via YouTube

Overview

Explore the deployment and management of Large Language Models in Kubernetes environments in this 36-minute conference talk from DevConf.US 2025. Learn how vLLM, a leading open-source project for LLM inference serving, maximizes throughput while minimizing resource usage through features such as continuous batching and distributed serving. Discover the complete lifecycle of deploying AI/LLM workloads on Kubernetes, from containerization through scaling with Kubernetes-native tools to the monitoring practices that keep operations reliable. Understand how vLLM simplifies complex AI workloads and optimizes performance, making advanced inference accessible for demanding use cases, and gain insight into integrating vLLM with Kubernetes to build reliable, cost-effective, high-performance AI systems for scalable LLM deployment.
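As a rough sketch of the kind of Kubernetes-native deployment the talk covers, a minimal Deployment manifest for vLLM's OpenAI-compatible server might look like the following. The model name, replica count, and resource values here are illustrative assumptions, not details from the talk itself:

```yaml
# Illustrative sketch only: a minimal Deployment running vLLM's
# OpenAI-compatible server. Model, replicas, and resources are
# placeholder assumptions for demonstration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vllm-server
  template:
    metadata:
      labels:
        app: vllm-server
    spec:
      containers:
      - name: vllm
        image: vllm/vllm-openai:latest        # official vLLM serving image
        args: ["--model", "facebook/opt-125m"] # small placeholder model
        ports:
        - containerPort: 8000                 # OpenAI-compatible API port
        resources:
          limits:
            nvidia.com/gpu: "1"               # assumes a GPU-enabled node pool
```

In practice such a Deployment would be paired with a Service for traffic routing and an autoscaler for the scaling strategies the talk discusses.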

Syllabus

Cloud-Native Model Serving: vLLM's Lifecycle in Kubernetes - DevConf.US 2025

Taught by

DevConf

