Overview
Learn how to engineer trillion-scale synthetic data systems for modern language model training through a conference talk from Ray Summit 2025. Discover how DatologyAI built a production-grade platform using KubeRay and vLLM to process trillions of tokens, addressing the critical bottleneck of high-quality training data in advancing AI systems.

Explore the essential role of synthetic data in creating diverse, targeted datasets that complement organic sources for everything from fast 4.5B-parameter models to frontier-scale systems like GPT-5. Master key engineering techniques including driving vLLM inference to near-peak GPU utilization, designing fault-tolerant Ray actors for tensor-parallel sharding, and auto-scaling KubeRay clusters to match shifting workload patterns.

Understand storage and scheduling strategies that deliver high performance and cost efficiency while dynamically orchestrating thousands of GPU workers across multimodal tasks including recaptioning, rephrasing, and domain-specific content generation. Gain insights into practical patterns for building resilient, scalable ML infrastructure, and learn how clean abstractions between Ray's distributed computing layer and vLLM's inference engine enable rapid iteration on prompt engineering while maintaining production stability. See how research prototypes evolve into trillion-token synthetic datasets that define the next generation of AI capabilities.
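To make the fault-tolerance pattern above concrete, here is a minimal plain-Python sketch of the supervision loop the talk describes: a driver that resubmits a batch to a freshly created worker when the current one dies. All names are hypothetical, and the real system would use Ray actors hosting vLLM engine shards rather than local objects; this only illustrates the restart-and-resubmit control flow.

```python
import random


class GenerationWorker:
    """Stand-in for a Ray actor hosting a vLLM engine shard (hypothetical).

    In the real system this would wrap a tensor-parallel vLLM instance;
    here it just rewrites prompts and occasionally simulates a crash.
    """

    def __init__(self, fail_rate=0.0):
        self.fail_rate = fail_rate

    def generate(self, batch):
        # Simulate a GPU worker that may die mid-batch (e.g. OOM, preemption).
        if random.random() < self.fail_rate:
            raise RuntimeError("worker died")
        return [f"rewritten:{prompt}" for prompt in batch]


def run_with_supervision(batches, make_worker, max_restarts=3):
    """Drive batches through a worker, replacing it on failure.

    Mirrors how a driver re-creates a crashed actor and resubmits the
    in-flight batch, so a single worker death never loses work.
    """
    worker = make_worker()
    results = []
    for batch in batches:
        for _attempt in range(max_restarts + 1):
            try:
                results.extend(worker.generate(batch))
                break
            except RuntimeError:
                worker = make_worker()  # replace the failed worker
        else:
            raise RuntimeError("batch failed after max restarts")
    return results
```

The same shape scales out naturally: with Ray, `make_worker` would create a remote actor and the inner loop would retry the failed `generate` task on the replacement, keeping batch resubmission logic independent of the inference engine.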
Syllabus
KubeRay + vLLM at DatologyAI: Engineering Trillion-Scale Synthetic Data Systems | Ray Summit 2025
Taught by
Anyscale