YouTube

Serving Voice AI at $1/hr - Open-source, LoRAs, Latency, Load Balancing

AI Engineer via YouTube

Overview

Learn how to deploy real-time text-to-speech AI systems at scale for just $1 per hour through this conference talk from the AI Engineer World's Fair. Discover the production deployment experience of Orpheus, an emotive real-time TTS system, covering critical aspects of latency optimization, high-fidelity voice cloning with practical examples, and load balancing across multiple GPUs using multiple LoRAs. Explore key challenges in voice cloning technology and understand how to manage latency issues including the "Head of Line Silence" problem. Gain insights into infrastructure design for batch inference, learn about leveraging vLLM and dynamic quantization techniques, and understand load balancing implementation using consistent hash ring architecture. The presentation covers system architecture overview and concludes with open source recommendations, providing practical knowledge for building cost-effective, real-time voice AI applications at production scale.
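The talk's load-balancing approach routes requests across GPU workers with a consistent hash ring, so each LoRA voice consistently lands on the same worker (keeping its adapter warm) and removing a worker only remaps the keys it owned. A minimal sketch of that idea, with hypothetical node and voice names (the talk's actual implementation details are not shown here):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent hash ring mapping keys (e.g. LoRA voice IDs) to nodes (GPU workers)."""

    def __init__(self, nodes=(), vnodes=64):
        self.vnodes = vnodes        # virtual nodes per worker smooth out the key distribution
        self.ring = {}              # hash position -> worker name
        self.sorted_hashes = []     # sorted hash positions for binary search
        for node in nodes:
            self.add_node(node)

    def _hash(self, key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str):
        for i in range(self.vnodes):
            h = self._hash(f"{node}#{i}")
            self.ring[h] = node
            bisect.insort(self.sorted_hashes, h)

    def remove_node(self, node: str):
        for i in range(self.vnodes):
            h = self._hash(f"{node}#{i}")
            del self.ring[h]
            self.sorted_hashes.remove(h)

    def get_node(self, key: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's hash.
        h = self._hash(key)
        idx = bisect.bisect(self.sorted_hashes, h) % len(self.sorted_hashes)
        return self.ring[self.sorted_hashes[idx]]

# Hypothetical usage: route a cloned voice's requests to a stable GPU worker.
ring = ConsistentHashRing(["gpu-0", "gpu-1", "gpu-2"])
worker = ring.get_node("voice-lora-42")  # same worker every time for this voice
```

Because only the departed worker's keys move on removal, most LoRA adapters stay resident on their assigned GPU as the fleet scales up or down.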

Syllabus

00:00 Introduction to Gabber and Real-Time AI
02:15 Gabber's Mission for Consumer AI
04:17 The Orpheus Voice Model
05:43 Challenges in Voice Cloning
07:44 Latency Management and "Head of Line Silence"
11:07 Infrastructure for Batch Inference
11:36 Leveraging vLLM and Dynamic Quantization
13:21 Load Balancing with a Consistent Hash Ring
14:17 System Architecture Overview
15:07 Conclusion and Open Source Shout-outs

Taught by

AI Engineer

