Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

MIT's Recursive Language Models - Understanding the Phase Shift from LLMs to RLMs

Discover AI via YouTube

Overview

Explore MIT's research on Recursive Language Models (RLMs), which challenges the conventional approach to large language models and their supposedly "infinite" context windows. Discover how researchers Alex L. Zhang, Tim Kraska, and Omar Khattab from MIT CSAIL identify "context rot" as a critical flaw that degrades reasoning capability as input length grows, and learn about their proposed solution, which treats the RLM as a neurosymbolic operating system.

Understand how this system splits massive inputs by writing Python code and recursively spawning fresh model instances to process the pieces, yielding dramatic performance improvements: RLM(GPT-5) achieves 58% accuracy on quadratic-complexity tasks where base GPT-5 scores below 0.1%. Examine the mechanics of inference-time scaling and why this result signals a fundamental shift away from static LLMs toward dynamic, recursive processing systems that could reshape AI reasoning.
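The split-and-spawn idea described above can be sketched in a few lines. This is a hypothetical toy, not the MIT implementation: `llm_call` is a stub standing in for a real model API (a real RLM would invoke a fresh LLM instance there), and the even-split strategy and `max_chars` budget are illustrative assumptions.

```python
def llm_call(prompt: str) -> str:
    """Stub for a fresh model instance (hypothetical).
    A real RLM would send `prompt` to an LLM API here; the stub
    just reports the prompt length so the control flow is runnable."""
    return f"summary({len(prompt)} chars)"

def rlm_answer(context: str, query: str, max_chars: int = 1000) -> str:
    """Recursively split `context` until each piece fits within a
    model's context budget, spawn a fresh instance per piece, then
    merge the sub-answers with one more instance."""
    if len(context) <= max_chars:
        # Base case: the context fits, so a single instance handles it.
        return llm_call(f"{query}\n---\n{context}")
    mid = len(context) // 2
    left = rlm_answer(context[:mid], query, max_chars)
    right = rlm_answer(context[mid:], query, max_chars)
    # Merge step: a fresh instance combines the two sub-answers,
    # so no single call ever sees the full original input.
    return llm_call(f"{query}\n---\n{left}\n{right}")
```

The key property the sketch captures is that no single model call receives the whole input: long contexts are handled by recursion depth rather than by stretching one context window.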

Syllabus

Forget LLM: MIT's New RLM (Phase Shift in AI)

Taught by

Discover AI

Reviews

