

DEEPSERVE - Serverless Large Language Model Serving at Scale

USENIX via YouTube

Overview

Learn about DEEPSERVE, a scalable serverless AI platform for efficient large language model serving in cloud environments, presented at USENIX ATC '25. Discover how this production system addresses resource allocation, serving efficiency, and cold-start latency through four key design components: a request-job-task serverless abstraction for managing diverse AI workloads; an integrated FLOWSERVE serving engine with a microkernel-inspired design and NPU-centric execution; novel scheduling policies for both PD-disaggregated and PD-colocated instances; and optimization techniques such as pre-warmed pods, DRAM pre-loading, and NPU-fork that enable scaling to 64 instances within seconds. Explore the implementation details of a system that has run in production for over a year on a large Ascend NPU cluster, providing industry-standard APIs for fine-tuning, agent serving, and model serving to enterprise customers, as presented by researchers from Peking University and Huawei Cloud.
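The talk description names a request-job-task abstraction and PD (prefill/decode) disaggregation but does not give concrete interfaces. The following is a minimal, hypothetical Python sketch of how such a hierarchy might look: one user request fans out into jobs, and each job into schedulable tasks, with the prefill and decode phases split into separate tasks when instances are PD-disaggregated. All class and function names here are illustrative assumptions, not the actual DEEPSERVE API.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from uuid import uuid4

class TaskKind(Enum):
    PREFILL = auto()   # process the full prompt, build the KV cache
    DECODE = auto()    # generate output tokens step by step

@dataclass
class Task:
    kind: TaskKind
    job_id: str

@dataclass
class Job:
    job_id: str = field(default_factory=lambda: uuid4().hex)
    tasks: list = field(default_factory=list)

@dataclass
class Request:
    prompt: str
    jobs: list = field(default_factory=list)

def plan_request(prompt: str, disaggregated: bool = True) -> Request:
    """Hypothetical planner: expand a request into jobs and tasks.

    Under PD disaggregation, prefill and decode become separate tasks
    that a scheduler could place on different instances; in the
    PD-colocated case a single task runs both phases on one instance.
    """
    req = Request(prompt=prompt)
    job = Job()
    if disaggregated:
        job.tasks = [Task(TaskKind.PREFILL, job.job_id),
                     Task(TaskKind.DECODE, job.job_id)]
    else:
        job.tasks = [Task(TaskKind.PREFILL, job.job_id)]
    req.jobs.append(job)
    return req

req = plan_request("Hello, world")
print(len(req.jobs), len(req.jobs[0].tasks))  # 1 2
```

The split mirrors the scheduling choice the abstract mentions: a PD-disaggregated deployment yields two independently placeable tasks per job, while a colocated one yields a single fused task.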

Syllabus

USENIX ATC '25 - DEEPSERVE: Serverless Large Language Model Serving at Scale

Taught by

USENIX

