How METR Measures Long Tasks and Experienced Open Source Dev Productivity

AI Engineer via YouTube

Overview

Explore the disconnect between impressive AI benchmark performance and real-world developer productivity through METR's research findings. Examine why AI models that excel on benchmarks like SWE-bench failed to accelerate experienced developers' work in field studies, despite rising time-horizon measurements. Analyze the gap between laboratory AI capabilities and practical implementation challenges, including reliability requirements, differences in task distribution, and capability elicitation methods. Discover insights from METR's time-horizon measurements and its randomized controlled trial with experienced open source developers to understand the complexities of translating AI performance metrics into tangible productivity gains. Learn about the implications for automated AI research and development, and gain perspective on how benchmark scores may not accurately reflect real-world AI utility in software development contexts.

Syllabus

How METR measures Long Tasks and Experienced Open Source Dev Productivity - Joel Becker, METR

Taught by

AI Engineer

