Overview
Learn how to accelerate large language model inference and reduce costs by expanding beyond GPU-only execution in this 35-minute conference talk from Ray Summit 2025. Discover how LMCache addresses the critical limitation of KV-cache memory demands that often exceed GPU capacity by offloading KV caches to diverse datacenter resources, including CPU memory, local disk, and remote storage, and loading them back to GPUs on demand. Explore advanced KV-cache techniques that enable reusing caches for non-prefix text, sharing caches across different LLMs, and improving inference efficiency for complex, non-sequential workloads. Master strategies for achieving faster inference, lower costs, and better hardware utilization without modifying model architectures, and understand how these innovations reduce GPU memory pressure and unlock scalable performance even with the largest models.
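To make the offloading idea concrete, here is a minimal sketch of launching vLLM with the LMCache connector and CPU offloading enabled. The connector name, environment variables, size limits, model name, and file path shown here are assumptions drawn from typical LMCache integration examples, not from the talk itself, and may differ across versions.

```python
# Illustrative sketch: serving a model with vLLM while letting LMCache
# spill KV-cache blocks to CPU memory (exact option names may vary by version).
import os

# LMCache configuration via environment variables (assumed names):
# keep a CPU-memory cache and cap its size.
os.environ["LMCACHE_LOCAL_CPU"] = "True"
os.environ["LMCACHE_MAX_LOCAL_CPU_SIZE"] = "20"   # assumed unit: GB
os.environ["LMCACHE_CHUNK_SIZE"] = "256"          # tokens per cache chunk

from vllm import LLM, SamplingParams
from vllm.config import KVTransferConfig

# Attach the LMCache connector so vLLM can offload and reload KV caches
# instead of keeping everything resident in GPU memory.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    kv_transfer_config=KVTransferConfig(
        kv_connector="LMCacheConnectorV1",  # assumed connector name
        kv_role="kv_both",                  # this instance both saves and loads caches
    ),
)

# Repeated queries over the same long context can now hit the offloaded
# cache rather than recomputing the prefill on the GPU.
long_context = open("shared_document.txt").read()  # hypothetical input file
outputs = llm.generate(
    [long_context + "\n\nSummarize the document."],
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```

The same connector can presumably be pointed at local disk or a remote cache backend instead of CPU memory, which is how multiple serving instances could share and reuse each other's cached prefills; the specific backend settings depend on the LMCache version in use.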
Syllabus
Accelerating vLLM with LMCache | Ray Summit 2025
Taught by
Anyscale