Overview
This video explores a critical flaw in reasoning Large Language Models (LLMs) called "MiP-Overthinking," in which models produce excessively long, redundant responses to ill-posed questions that are missing a necessary premise. Learn how this phenomenon affects both reinforcement-learning-trained and supervised-learning-trained models, undermining the Chain-of-Thought reasoning efficiency of advanced systems like o1, o3, and R1. Discover the research findings showing that current training methods fail to encourage efficient thinking, and examine detailed analyses of reasoning length, overthinking patterns, and critical-thinking locations across different LLM types. Based on research by teams from the University of Maryland, Lehigh University, KIIT Bhubaneswar, KIMS Bhubaneswar, and Monash University.
Syllabus
When Smart AI Models Overthink Stupid Data (AI TRAP)
Taught by
Discover AI