Product Metrics are LLM Evals - Making AI Products More Accurate and Reliable

MLOps.community via YouTube

[00:00] Cracking Open System Failures and How We Fix Them

Classroom Contents

  1. [00:00] Cracking Open System Failures and How We Fix Them
  2. [05:44] LLMs in the Wild — First Steps and Growing Pains
  3. [08:28] Building the Backbone of Tracing and Observability
  4. [13:02] Tuning the Dials for Peak Model Performance
  5. [13:51] From Growing Pains to Glowing Gains in AI Systems
  6. [17:26] Where Prompts Meet Psychology and Code
  7. [22:40] Why Data Experts Deserve a Seat at the Table
  8. [24:59] Humanloop and the Art of Configuration Taming
  9. [28:23] What Actually Matters in Customer-Facing AI
  10. [33:43] Starting Fresh with Private Models That Deliver
  11. [34:58] How LLM Agents Are Changing the Way We Talk
  12. [39:23] The Secret Lives of Prompts Inside Frameworks
  13. [42:58] Streaming Showdowns — Creativity vs. Convenience
  14. [46:26] Meet Our Auto-Tuning AI Prototype
  15. [49:25] Building the Blueprint for Smarter AI
  16. [51:24] Feedback Isn’t Optional — It’s Everything
