256K Context Window? Forget It! - Understanding the Limitations of Long Context Reasoning in Large Language Models

Discover AI via YouTube

Overview

Explore the critical limitations of large language models' long-context reasoning capabilities through an analysis of recent research from Korea University. Examine how the traditional "needle-in-a-haystack" benchmark has misled the AI community about LLMs' true performance with extended context windows, and discover why this flawed evaluation method has created a "disaster for RAG" systems. Learn about the new NEEDLECHAIN methodology, which reveals how sharply the reasoning performance of current LLMs and large reasoning models (LRMs) degrades over long contexts, even in models advertising 128K+ token capacities. Understand the implications for in-context learning (ICL), multi-step reasoning, and retrieval-augmented generation (RAG) systems, and gain insight into why the promise of massive context windows may not deliver the expected benefits for complex reasoning tasks. The sketch below illustrates the difference between the two evaluation styles.
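To make the distinction concrete, here is a minimal Python sketch (not taken from the video or the paper) contrasting a classic single-needle retrieval probe with a chained, multi-hop probe in the spirit of NEEDLECHAIN. The `build_haystack` helper, the filler sentence, and the commented-out `query_llm` call are illustrative stand-ins; the actual benchmark's prompt construction and scoring are defined in the Korea University paper.

```python
# Minimal sketch: single-needle retrieval vs. chained-needle reasoning.
# `query_llm` is a hypothetical stand-in for any chat-completion call.
import random

FILLER = "The sky was clear and the market was quiet that day. "

def build_haystack(needles: list[str], total_sentences: int = 2000) -> str:
    """Scatter the needle sentences at random positions inside filler text."""
    sentences = [FILLER] * total_sentences
    positions = random.sample(range(total_sentences), len(needles))
    for needle, pos in zip(needles, positions):
        sentences[pos] = needle + " "
    return "".join(sentences)

# Classic single-needle test: pure retrieval, no reasoning required.
single_needle = ["The secret code is 4711."]
single_prompt = build_haystack(single_needle) + "\nQuestion: What is the secret code?"

# Chained-needle test: each fact only matters in combination with the others,
# so the model must retrieve AND link all of them, wherever they appear.
chained_needles = [
    "Alice gave the key to Bob.",
    "Bob gave the key to Carol.",
    "Carol gave the key to Dave.",
]
chained_prompt = build_haystack(chained_needles) + "\nQuestion: Who holds the key now?"

# A long-context model that aces the first probe can still fail the second;
# that gap is what the single-needle benchmark hides.
# answer = query_llm(chained_prompt)  # hypothetical API call
```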

Syllabus

256K Context Window? Forget It!

Taught by

Discover AI
