YouTube videos curated by Class Central.
Classroom Contents
Product Metrics are LLM Evals - Making AI Products More Accurate and Reliable
- 1 [00:00] Cracking Open System Failures and How We Fix Them
- 2 [05:44] LLMs in the Wild — First Steps and Growing Pains
- 3 [08:28] Building the Backbone of Tracing and Observability
- 4 [13:02] Tuning the Dials for Peak Model Performance
- 5 [13:51] From Growing Pains to Glowing Gains in AI Systems
- 6 [17:26] Where Prompts Meet Psychology and Code
- 7 [22:40] Why Data Experts Deserve a Seat at the Table
- 8 [24:59] Humanloop and the Art of Configuration Taming
- 9 [28:23] What Actually Matters in Customer-Facing AI
- 10 [33:43] Starting Fresh with Private Models That Deliver
- 11 [34:58] How LLM Agents Are Changing the Way We Talk
- 12 [39:23] The Secret Lives of Prompts Inside Frameworks
- 13 [42:58] Streaming Showdowns — Creativity vs. Convenience
- 14 [46:26] Meet Our Auto-Tuning AI Prototype
- 15 [49:25] Building the Blueprint for Smarter AI
- 16 [51:24] Feedback Isn’t Optional — It’s Everything