Overview
Learn how to accelerate large language model inference and reduce costs by expanding beyond GPU-only execution in this 35-minute conference talk from Ray Summit 2025. Discover how LMCache addresses a critical limitation — KV-cache memory demands that often exceed GPU capacity — by offloading KV-cache data to diverse datacenter resources, including CPU memory, local disk, and remote storage, and loading it back to GPUs on demand. Explore advanced KV-cache techniques that enable reusing caches for non-prefix text, sharing caches across different LLMs, and improving inference efficiency for complex, non-sequential workloads. Learn strategies for achieving faster inference, lower costs, and better hardware utilization without modifying model architectures, and see how these innovations reduce GPU memory pressure and unlock scalable performance gains even for the largest models.
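The core idea described above — evicting KV-cache blocks from scarce GPU memory into a larger, slower tier and promoting them back on demand — can be sketched as a toy two-tier cache. This is an illustrative sketch only, not LMCache's actual implementation; the class and method names are hypothetical, and Python dicts stand in for GPU and CPU memory.

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier KV cache: a small 'GPU' tier with LRU eviction
    into a larger 'CPU' tier, and on-demand promotion back.
    (Hypothetical sketch; not the LMCache API.)"""

    def __init__(self, gpu_capacity: int):
        self.gpu_capacity = gpu_capacity
        self.gpu = OrderedDict()  # hot tier, stands in for GPU memory
        self.cpu = {}             # overflow tier, stands in for CPU/disk/remote

    def put(self, prefix_hash: str, kv_block) -> None:
        """Store a KV block in the hot tier, spilling LRU blocks downward."""
        self.gpu[prefix_hash] = kv_block
        self.gpu.move_to_end(prefix_hash)
        while len(self.gpu) > self.gpu_capacity:
            # Offload the least-recently-used block instead of discarding it,
            # so a later request can reload it rather than recompute prefill.
            evicted_key, evicted_block = self.gpu.popitem(last=False)
            self.cpu[evicted_key] = evicted_block

    def get(self, prefix_hash: str):
        """Fetch a KV block, promoting it back to the hot tier on demand."""
        if prefix_hash in self.gpu:
            self.gpu.move_to_end(prefix_hash)
            return self.gpu[prefix_hash]
        if prefix_hash in self.cpu:
            # Dynamic load back to the 'GPU' tier, as the talk describes.
            block = self.cpu.pop(prefix_hash)
            self.put(prefix_hash, block)
            return block
        return None  # true cache miss: prefill must recompute this block
```

The payoff is that a miss in the hot tier costs a (cheap) copy from the overflow tier rather than a full prefill recomputation — the same trade LMCache makes at datacenter scale across CPU memory, local disk, and remote storage.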
Syllabus
Accelerating vLLM with LMCache | Ray Summit 2025
Taught by
Anyscale