Overview
Learn the Evaluation-Driven Development methodology in this 28-minute conference talk, which identifies quality as the primary barrier preventing agentic applications from reaching production. Discover how to use evaluation as the foundation for building high-quality, reliable agentic systems through practical demonstrations with MLflow 3.0, the redesigned MLOps platform optimized for the LLM era. Explore key features including one-line observability, automatic evaluation, human-in-the-loop feedback mechanisms, and comprehensive monitoring. Gain insights from Yuki Watanabe, a software engineer at Databricks and core maintainer of MLflow, who brings extensive experience in machine learning systems, scalable web services, and ML infrastructure design. Understand how to bridge the gap between data science and engineering while enabling seamless deployment of AI-driven features at scale in production environments.
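The core idea of evaluation-driven development — gate every change to an agent on a scored evaluation set — can be sketched without any particular framework. The sketch below is purely illustrative and uses no MLflow APIs; `EVAL_SET`, `fake_agent`, `score_case`, and `evaluate` are hypothetical names standing in for a real evaluation dataset, agent, and scorer.

```python
# A minimal, library-agnostic sketch of an evaluation-driven loop:
# each change to the agent must pass a scored evaluation set before
# deployment. All names here are illustrative, not MLflow APIs.

EVAL_SET = [
    {"question": "What is 2 + 2?", "must_contain": "4"},
    {"question": "Name a primary color.",
     "must_contain_any": ["red", "blue", "yellow"]},
]

def fake_agent(question: str) -> str:
    # Stand-in for an LLM-backed agent under test.
    answers = {
        "What is 2 + 2?": "2 + 2 is 4.",
        "Name a primary color.": "Blue is a primary color.",
    }
    return answers.get(question, "I don't know.")

def score_case(case: dict, answer: str) -> bool:
    # Simple deterministic scorer; real systems may also use
    # LLM judges or human feedback.
    if "must_contain" in case:
        return case["must_contain"] in answer
    return any(w in answer.lower() for w in case["must_contain_any"])

def evaluate(agent) -> float:
    passed = sum(score_case(c, agent(c["question"])) for c in EVAL_SET)
    return passed / len(EVAL_SET)

score = evaluate(fake_agent)
print(f"pass rate: {score:.0%}")
# Gate deployment on the evaluation score.
assert score >= 0.9, "quality gate failed; do not deploy"
```

In a production setup, the talk's MLflow 3.0 features would replace the hand-rolled pieces here: tracing supplies observability into each agent call, and automatic evaluation and human feedback replace the deterministic scorer.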
Syllabus
Evaluation-Driven Development with MLflow 3.0
Taught by
MLOps.community