Explore the manipulative tactics behind AI benchmarks that steer billions in investment, and learn how to build meaningful evaluations instead, in this 11-minute conference talk. It covers three major "cheat codes" companies use to game benchmarks: cherry-picking comparisons (xAI's selective Grok-3 graphs), buying privileged access (OpenAI's funding of FrontierMath), and optimizing for style over substance (Meta's 27 Llama-4 variants on LM Arena).

Understand why Goodhart's Law (when a measure becomes a target, it ceases to be a good measure) guarantees benchmark failure once massive financial stakes are involved, and why current AI evaluation methods face a crisis of reliability. Learn to identify benchmark manipulation through real-world examples, see why 39% of score variance stems from writing style rather than actual capability, and pick up a five-step framework for creating evaluations that matter for your specific use case.

Master pre-deployment evaluation loops that distinguish reliable AI systems from those requiring constant troubleshooting, drawing on practical experience building evaluation systems at Waymo, Uber ATG, and SpaceX, where a poor evaluation can mean a literal crash. Stop participating in rigged benchmark games and start measuring what truly matters for your AI applications.
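As a rough illustration of what a pre-deployment evaluation loop can look like, here is a minimal Python sketch. The talk's actual five-step framework is not spelled out in this summary, so the case format, scoring rule, pass threshold, and every name below (EvalCase, run_model, PASS_THRESHOLD) are illustrative assumptions, not the speaker's method:

```python
# Minimal pre-deployment evaluation gate (illustrative sketch, not the
# talk's framework). Cases should come from your real workload; the
# scoring rule and threshold are placeholders to adapt to your use case.
from dataclasses import dataclass


@dataclass
class EvalCase:
    prompt: str    # input drawn from your actual application traffic
    expected: str  # substring the output must contain to count as a pass


def run_model(prompt: str) -> str:
    """Stub standing in for a call to the model under evaluation."""
    return "Our refund window is 30 days from purchase."


def evaluate(cases: list[EvalCase]) -> float:
    """Run every case through the model and return the pass rate."""
    passed = sum(case.expected in run_model(case.prompt) for case in cases)
    return passed / len(cases)


PASS_THRESHOLD = 0.95  # gate chosen for your own risk tolerance

cases = [
    EvalCase("What is the refund window?", "30 days"),
    EvalCase("Summarize the refund policy.", "refund"),
]

score = evaluate(cases)
print(f"pass rate: {score:.0%}")
if score < PASS_THRESHOLD:
    # Block the release: the loop runs before every deployment,
    # not on a public leaderboard after the fact.
    raise SystemExit("eval gate failed: do not deploy")
```

The point of a loop like this is that it measures the task you actually ship, so a higher score means something for your users, regardless of how the public benchmarks are gamed.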