Is It Too Much to Ask for a Stable Baseline? - Evaluation and Monitoring in Machine Learning Systems
MLOps World: Machine Learning in Production via YouTube
Overview
Explore the challenges of establishing stable baselines in machine learning systems in this 41-minute conference talk from MLOps World: Machine Learning in Production, delivered by D. Sculley, CEO of Kaggle. Delve into the critical role of evaluation and monitoring in building reliable ML systems. Examine the difficulties of finding stable reference points, reliable comparison baselines, and effective performance metrics in an environment characterized by changing conditions, feedback loops, and shifting distributions. Investigate how these challenges manifest in traditional settings like click-through prediction, and consider their potential impact on emerging areas such as productionized LLMs and generative models.
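To make the monitoring problem concrete, here is a minimal, illustrative Python sketch (not taken from the talk) of one way to flag distribution shift: comparing a live window of model predictions against a frozen reference window with a two-sample Kolmogorov-Smirnov test. The function name, data, and threshold are all assumptions for illustration.

```python
# Illustrative sketch: detect drift in model scores against a frozen baseline.
# All names, data, and thresholds are hypothetical, not from the talk.
import numpy as np
from scipy.stats import ks_2samp

def drift_alarm(reference_scores, live_scores, alpha=0.01):
    """Flag when live predictions drift from a frozen reference window.

    Uses a two-sample Kolmogorov-Smirnov test; `alpha` is an illustrative
    significance threshold, not a recommendation from the talk.
    """
    statistic, p_value = ks_2samp(reference_scores, live_scores)
    return p_value < alpha, statistic

# Example: a stable reference window vs. a live window whose distribution shifted,
# as might happen with click-through predictions under feedback loops.
rng = np.random.default_rng(0)
reference = rng.beta(2, 8, size=5000)  # historical prediction scores
live = rng.beta(2, 6, size=5000)       # shifted live scores
drifted, stat = drift_alarm(reference, live)
print(f"drift detected: {drifted} (KS statistic {stat:.3f})")
```

One design point the talk's framing highlights: even this simple check presumes a trustworthy reference window, which is exactly what changing conditions and feedback loops make hard to obtain.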
Syllabus
Is it too much to ask for a stable baseline?
Taught by
MLOps World: Machine Learning in Production