Serve a Text to Speech Model with vLLM

Trelis Research via YouTube

Contents

  1. 0:00 Serving Orpheus Text-to-Speech model with continuous batching
  2. 0:44 Setup demo with a one-click template from Runpod
  3. 4:12 Running inference on a fine-tuned model (poor quality; maybe don’t use fp8, and tune more)
  4. 5:25 Inference on the default Orpheus model, “tara”
  5. 7:37 How vLLM works with Orpheus and how to decode audio tokens
  6. 12:38 Conclusion and Resources
