Overview
Dive into a comprehensive 51-minute talk on building and maintaining high-performance AI models. Explore the challenges of sustaining model performance in production environments, including model decay and real-world changes.

Learn about essential performance metrics, identifying model degradation, and tackling data and concept drift. Gain insights into systematic testing, debugging, and monitoring techniques for AI models. The lecture covers conceptual foundations and includes practical demonstrations using real models.

Key topics include optimal testing points in ML model development, types of performance and drift testing, and strategies for systematic model improvement. Follow along with a detailed breakdown of content, including examples of addressing concept drift and data pipeline issues, performance debugging in action, and considerations for testing and monitoring Large Language Models (LLMs).
Syllabus
Introduction
Managing ML Model Performance is a huge problem
What we often hear
Why is frequent retraining not sufficient?
Why is alerting alone not sufficient?
Observe and iterate
Fundamental #1: Observe & Iterate
Example: Addressing concept drift
Example: Addressing data pipeline issue
Debug rapidly
Performance Debugging In Action
Data pipeline issue for Latitude Feature!
Test and Monitor LLMs
Key Takeaways
Taught by
Data Science Dojo