Power BI Fundamentals - Create visualizations and dashboards from scratch
Overview
Explore LiveCodeBench PRO, a benchmark that evaluates large language models on competitive programming problems judged by Olympiad medalists. This 17-minute video asks whether coding LLMs should be optimized for reasoning capabilities or for tool usage, moving beyond earlier benchmarks such as HumanEval. Learn about the collaborative research from New York University, Princeton University, and UC San Diego that introduces the "Grandmaster's Gauntlet" - a testing framework in which elite competitive programmers assess AI performance on complex algorithmic challenges - and understand the methodology behind this approach to measuring AI capability in programming and problem-solving.
Syllabus
Optimize Coding LLM for Reasoning or Tools?
Taught by
Discover AI