AI Engineer - Learn how to integrate AI into software applications
Overview
Explore practical strategies for building effective large language model evaluations in real-world enterprise environments through this conference talk by Arize AI founder Aparna Dhinakaran. Learn to distinguish between general LLM model evaluation and task-specific system evaluation, addressing the widespread confusion around what "LLM evals" actually means in practice. Discover rigorous techniques for objectively evaluating both different foundation models and custom LLM systems, particularly when building solutions that integrate with multiple models or tools. Examine live research findings and walk through concrete examples of constructing LLM evaluations from scratch, drawing from research that has garnered millions of views across social platforms. Gain insights into building robust task-specific evaluations using open source tools and understand which foundation models work best for specific enterprise use cases. Master techniques to better understand the limits and capabilities of LLM systems as nearly two-thirds of enterprise developers prepare for production deployments this year.
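The talk's central distinction is between evaluating a foundation model in general and evaluating the specific task your system performs. As a rough illustration of the latter, the sketch below runs a small task-specific evaluation using an LLM-as-judge prompt against a golden dataset. The model name, judge prompt wording, and example cases are illustrative assumptions, not the speaker's implementation or any particular open source tool's API.

```python
# Minimal sketch of a task-specific LLM eval (LLM-as-judge).
# Assumptions: the `openai` client library, the "gpt-4o-mini" model name,
# and the judge prompt are illustrative, not taken from the talk.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = """You are grading a customer-support answer.
Question: {question}
Reference answer: {reference}
Model answer: {answer}
Reply with exactly one word: "correct" or "incorrect"."""


def judge(question: str, reference: str, answer: str) -> bool:
    """Ask a judge model whether the system's answer matches the reference."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                question=question, reference=reference, answer=answer),
        }],
        temperature=0,
    )
    verdict = response.choices[0].message.content.strip().lower()
    return verdict.startswith("correct")


# A tiny golden dataset for the task under evaluation (illustrative).
eval_cases = [
    {"question": "What is the refund window?",
     "reference": "30 days from delivery.",
     "answer": "You can request a refund within 30 days of delivery."},
    {"question": "Do you ship internationally?",
     "reference": "Yes, to most countries.",
     "answer": "No, we only ship domestically."},
]

if __name__ == "__main__":
    scores = [judge(**case) for case in eval_cases]
    print(f"task accuracy: {sum(scores)}/{len(scores)}")
```

A model-level benchmark score would not catch the second failing case above; only an eval built around your own task data does, which is the gap the talk addresses.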
Syllabus
Lessons from the Trenches: Building LLM Evals That Work IRL - Aparna Dhinakaran
Taught by
AI Engineer