Overview
Learn how to use both evaluations and experiments in your AI development lifecycle to avoid costly mistakes and ship more successful projects in this 25-minute conference talk. Discover strategies for building the capability to learn quickly and fail cheaply when developing AI initiatives, which often face high failure rates, expensive development costs, and measurement challenges.

Explore how to go beyond upstream qualitative evaluations by incorporating downstream quantitative experiments that shorten feedback loops and enable rapid course correction. Master the art of running experiments at scale using both basic and advanced A/B testing approaches specifically designed for AI development. Understand the distinct purposes of evaluations versus experiments and why your development process needs both approaches to succeed.

Gain insights into handling the inherently low success rate of AI initiatives through strategic failure management and cost-effective iteration. Learn to connect model performance metrics with business outcomes to create more successful AI projects. Discover how to use error analysis to define and measure evaluators effectively, ensuring your AI initiatives deliver measurable value while minimizing financial risk and development time.
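To make the "basic A/B testing" idea concrete, here is a minimal sketch of how a downstream experiment on an AI feature might be scored. The talk does not prescribe an implementation; this assumes a simple two-variant test comparing conversion (or task-success) counts with a two-proportion z-test, using only the Python standard library. The function name and example numbers are illustrative, not from the talk.

```python
from math import sqrt

def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int) -> float:
    """Z statistic comparing success rates of control (A) vs treatment (B).

    A positive value means variant B outperformed variant A;
    |z| > 1.96 corresponds roughly to p < 0.05 (two-sided).
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled rate under the null hypothesis that both variants are equal
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: old model vs new model on 1,000 users each
z = two_proportion_z(successes_a=200, n_a=1000,
                     successes_b=250, n_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at ~95% confidence
```

The point of wiring such a check into the development loop is that a cheap, automated significance read on downstream business metrics lets a failing initiative be killed early, before it accumulates further cost.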
Syllabus
From evals to experiments: How to ship successful AI initiatives by failing cheaply | Ryan Lucht
Taught by
LeadDev