

AI Agents Reasoning Collapse - Limits of Emergent Reasoning in Large Language Models

Discover AI via YouTube

Overview

Explore the limits of AI agent reasoning through research from Carnegie Mellon University and UC Berkeley that challenges current assumptions about large language model performance on deterministic problem-solving tasks. Weigh the question of whether investing in expensive AI hardware, such as the NVIDIA DGX Spark with the GB10 Grace Blackwell Superchip, makes sense given the fundamental reasoning failures that may undermine such investments. Analyze research showing that large language models fail to sustain reasoning performance on complex problems like the Tower of Hanoi even when given an environmental interface, revealing that access to external tools neither prevents nor delays performance collapse. See how analysis of LLM-parameterized policies shows increasing divergence from both the optimal policy and a random policy, indicating mode-like collapse at each complexity level. Review the research methodology, including the GitHub code used to test LLM reasoning capabilities, and consider what these findings imply for AI agent development and commercial AI applications.
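The Tower of Hanoi mentioned above is a fully deterministic puzzle with a known optimal solution, which is what makes it a useful benchmark for reasoning collapse: correctness can be checked move by move. A minimal recursive solver sketch in Python (illustrative only, not the course's GitHub implementation):

```python
def hanoi(n, source, target, auxiliary, moves=None):
    """Return the optimal move sequence for an n-disk Tower of Hanoi."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    # Move n-1 disks out of the way, move the largest, then restack.
    hanoi(n - 1, source, auxiliary, target, moves)
    moves.append((source, target))
    hanoi(n - 1, auxiliary, source, target, moves)
    return moves

# The optimal solution for n disks takes 2**n - 1 moves, so the required
# plan length grows exponentially with problem complexity.
print(len(hanoi(3, "A", "C", "B")))  # 7
```

Because the optimal move sequence is computable, an LLM agent's proposed moves can be compared against it at every complexity level, which is how divergence from the optimal policy can be measured.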

Syllabus

AI Agents Reasoning Collapse Imminent (CMU, Berkeley)

Taught by

Discover AI

