

Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve

USENIX via YouTube

Overview

Explore an innovative approach to optimizing Large Language Model (LLM) inference in this conference talk from OSDI '24. Dive into the challenges of balancing throughput and latency in LLM serving, focusing on the prefill and decode phases of request processing. Learn about Sarathi-Serve, an efficient LLM inference scheduler that introduces chunked-prefills and stall-free scheduling to address the throughput-latency tradeoff. Discover how these techniques significantly improve inference performance across various models and hardware configurations, with detailed examples using Mistral-7B, Yi-34B, and Falcon-180B models. Gain insights into the potential for increased serving capacity and reduced pipeline bubbles in LLM inference systems.
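
The core idea behind chunked-prefills and stall-free scheduling can be illustrated with a small scheduler sketch. The sketch below is only an illustration under assumed names (Request, build_batch, a 512-token budget); it is not the Sarathi-Serve implementation. Each iteration admits all ongoing decodes first (one token apiece) and spends the leftover token budget on a chunk of a pending prefill, so long prompts never stall decode latency.

```python
from dataclasses import dataclass
from collections import deque

# Illustrative sketch only: the class names, function name, and token budget
# below are assumptions for exposition, not the actual Sarathi-Serve API.

@dataclass
class Request:
    rid: int
    prompt_tokens: int      # total prompt tokens to prefill
    prefill_done: int = 0   # prompt tokens already prefilled

    @property
    def prefill_remaining(self) -> int:
        return self.prompt_tokens - self.prefill_done


def build_batch(decode_reqs, prefill_queue: deque, token_budget: int = 512):
    """Form one stall-free hybrid batch.

    Ongoing decodes (one token each) are admitted first so they are never
    stalled; the leftover token budget is filled with a chunk of at most
    that many prefill tokens from the head of the prefill queue.
    """
    batch = []
    budget = token_budget

    # Decodes always get a slot: each consumes one token of the budget.
    for req in decode_reqs:
        if budget == 0:
            break
        batch.append((req.rid, "decode", 1))
        budget -= 1

    # Spend the remaining budget on a chunk of the next pending prefill.
    if prefill_queue and budget > 0:
        req = prefill_queue[0]
        chunk = min(budget, req.prefill_remaining)
        batch.append((req.rid, "prefill", chunk))
        req.prefill_done += chunk
        if req.prefill_remaining == 0:
            prefill_queue.popleft()  # prefill finished; it will decode next

    return batch


if __name__ == "__main__":
    decodes = [Request(rid=1, prompt_tokens=0), Request(rid=2, prompt_tokens=0)]
    prefills = deque([Request(rid=3, prompt_tokens=1200)])
    # The 1200-token prompt is prefilled over three iterations
    # (510 + 510 + 180 tokens), so requests 1 and 2 keep generating
    # a token every iteration instead of waiting behind a full prefill.
    for step in range(3):
        print(step, build_batch(decodes, prefills))
```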

Syllabus

OSDI '24 - Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve

Taught by

USENIX


