Overview
Learn the Evaluation-Driven Development methodology in this 28-minute conference talk, which addresses quality as the primary barrier preventing agentic applications from reaching production. Discover how to use evaluation as the foundation for building high-quality, reliable agentic systems through practical demonstrations with MLflow 3.0, the redesigned MLOps platform optimized for the LLM era. Explore key features including one-line observability, automatic evaluation capabilities, human-in-the-loop feedback mechanisms, and comprehensive monitoring. Gain insights from Yuki Watanabe, a software engineer at Databricks and core maintainer of MLflow, who brings extensive experience in machine learning systems, scalable web services, and ML infrastructure design. Understand how to bridge the gap between data science and engineering while enabling seamless deployment of AI-driven features at scale.
Syllabus
Evaluation-Driven Development with MLflow 3.0
Taught by
MLOps.community