

Accelerating vLLM with LMCache

Anyscale via YouTube

Overview

Learn how to accelerate large language model inference and reduce costs by expanding beyond GPU-only execution in this 35-minute conference talk from Ray Summit 2025. Discover how LMCache addresses a critical limitation of LLM serving: KV-cache memory demands often exceed GPU capacity. LMCache tackles this by offloading the KV cache to diverse datacenter resources, including CPU memory, local disk, and remote storage, and dynamically loading entries back onto GPUs on demand. Explore machine-learning techniques for KV-cache management that enable reusing caches for non-prefix text, sharing caches across different LLMs, and improving inference efficiency for complex, non-sequential workloads. Learn strategies for achieving faster inference, lower costs, and better hardware utilization without modifying model architectures, and see how these innovations reduce GPU memory pressure and unlock scalable performance even with the largest models.
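The offload-and-reload idea the talk describes can be sketched in a few lines of Python. This is a conceptual illustration only, assuming a simple two-tier store with FIFO eviction; it is not LMCache's actual API, and the class and method names here are hypothetical:

```python
from collections import OrderedDict
import hashlib

class TieredKVCache:
    """Toy two-tier KV cache: a small "GPU" tier plus a larger offload tier.

    Conceptual sketch only (not LMCache's API): entries evicted from the
    GPU tier are offloaded instead of discarded, and are loaded back on a
    later hit, so their KV values never need to be recomputed.
    """

    def __init__(self, gpu_capacity):
        self.gpu_capacity = gpu_capacity
        self.gpu = OrderedDict()  # hot tier, insertion-ordered for FIFO eviction
        self.cpu = {}             # offload tier (stands in for CPU RAM / disk / remote)

    @staticmethod
    def key_for(tokens):
        # Hash the token chunk so identical text maps to the same cache entry.
        return hashlib.sha256(repr(tuple(tokens)).encode()).hexdigest()

    def _admit(self, key, kv):
        # Make room in the GPU tier, offloading the oldest entry if needed.
        if len(self.gpu) >= self.gpu_capacity:
            old_key, old_kv = self.gpu.popitem(last=False)
            self.cpu[old_key] = old_kv
        self.gpu[key] = kv

    def put(self, tokens, kv):
        self._admit(self.key_for(tokens), kv)

    def get(self, tokens):
        key = self.key_for(tokens)
        if key in self.gpu:
            return self.gpu[key]     # hit in GPU memory
        if key in self.cpu:
            kv = self.cpu.pop(key)
            self._admit(key, kv)     # load the offloaded entry back "onto the GPU"
            return kv
        return None                  # miss: caller must recompute the KV values

cache = TieredKVCache(gpu_capacity=2)
cache.put([1, 2, 3], "kv-a")
cache.put([4, 5], "kv-b")
cache.put([6], "kv-c")               # GPU tier full: "kv-a" is offloaded, not dropped
print(cache.get([1, 2, 3]))          # prints kv-a, reloaded from the offload tier
```

The key design point mirrored here is that eviction moves data down a tier rather than deleting it, trading cheaper memory and a small reload cost for avoiding expensive prefill recomputation.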

Syllabus

Accelerating vLLM with LMCache | Ray Summit 2025

Taught by

Anyscale

