Overview
Explore Uber's comprehensive AI and machine learning strategy in this 45-minute podcast episode featuring Kai Wang, product lead of Uber's AI platform team. Discover how Uber built and scaled Michelangelo, the internal end-to-end ML platform that powers 100% of the company's business-critical machine learning use cases. Learn about Uber's transition from predictive to generative AI, including smarter algorithms for Uber Eats, ML-driven feedback summarization, and generative AI features that users actually notice. Understand the technical architecture behind inference at scale, the development of Uber's AI Studio, and strategies for building faster AI agents with reduced complexity. Gain insights into Uber's model evaluation processes, its decision to open-source parts of its AI infrastructure, including Michelangelo, and the organizational factors that drive the AI team's success. The discussion covers practical topics such as developer tool selection, measuring development speed effectively, strategic shifts in ML implementation, and the evolution from traditional predictive models to modern generative AI applications across Uber's platform.
Syllabus
[00:00] Rethinking AI Beyond ChatGPT
[04:01] How Devs Pick Their Tools
[08:25] Measuring Dev Speed Smartly
[10:14] Predictive Models at Uber
[13:11] When ML Strategy Shifts
[15:56] Smarter Uber Eats with AI
[19:29] Summarizing Feedback with ML
[23:27] GenAI That Users Notice
[27:19] Inference at Scale: Michelangelo
[32:26] Building Uber’s AI Studio
[33:50] Faster AI Agents, Less Pain
[39:21] Evaluating Models at Uber
[42:22] Why Uber Open-Sourced Michelangelo
[44:32] What Fuels Uber’s AI Team
Taught by
MLOps.community