

Optimizing Inference for Voice Models in Production

AI Engineer via YouTube

Overview

Learn how to optimize voice model inference for production environments to achieve time to first byte (TTFB) below 150 milliseconds while maintaining scalability. Discover how open-source text-to-speech models like Orpheus utilize LLM backbones that enable the application of familiar optimization tools including TensorRT-LLM and FP8 quantization for low-latency serving. Explore the fundamental mechanics of TTS inference and identify common pitfalls to avoid when integrating voice models into production systems. Understand how client code, network infrastructure, and other factors outside the GPU can introduce latency into the production stack. Examine strategies for extending high-performance systems to serve customized models with voice cloning and fine-tuning capabilities, providing practical insights for deploying voice AI solutions at scale.
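
The 150 millisecond TTFB target described above is measured end to end, so client code and network hops count against the budget alongside GPU inference. As a rough illustration of how that figure is typically measured from the client side, here is a minimal Python sketch that times the arrival of the first audio chunk from a streaming TTS endpoint; the URL, request payload, and chunk size are hypothetical placeholders rather than anything specified in the talk.

    import time
    import requests

    # Hypothetical streaming TTS endpoint; the URL and payload fields are placeholders.
    TTS_URL = "https://example.com/v1/tts/stream"
    payload = {"text": "Hello, world.", "voice": "default"}

    start = time.perf_counter()
    with requests.post(TTS_URL, json=payload, stream=True) as resp:
        resp.raise_for_status()
        # The first chunk received marks time to first byte (TTFB). Measured here,
        # it includes network round trips and any server-side queueing, not just GPU time.
        for chunk in resp.iter_content(chunk_size=4096):
            if chunk:
                print(f"TTFB: {(time.perf_counter() - start) * 1000:.1f} ms")
                break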

Syllabus

Optimizing inference for voice models in production - Philip Kiely, Baseten

Taught by

AI Engineer
