
Context Rot - How Increasing Input Tokens Impacts LLM Performance

Yannic Kilcher via YouTube

Overview

Explore a comprehensive paper analysis examining how Large Language Models' performance degrades as input context length increases, challenging the common assumption that LLMs process all tokens uniformly. Delve into research findings from Chroma evaluating 18 state-of-the-art models, including GPT-4.1, Claude 4, Gemini 2.5, and Qwen3, which reveal significant performance variations based on input length even for simple tasks. Learn about the concept of "context rot" - the phenomenon where model reliability decreases as more tokens are added to the input context. Understand the methodology used to test these models across different context lengths and discover the implications for real-world applications where long-context processing is crucial. Examine specific examples and data points that demonstrate how models struggle with longer inputs, and consider the practical consequences for applications requiring extensive context understanding. Gain insights into the current limitations of transformer architectures when dealing with extended sequences and explore potential solutions or workarounds for mitigating context degradation effects.
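
To give a flavor of the kind of evaluation described above, here is a minimal, illustrative sketch of a "needle in a haystack" style test that varies input length and measures retrieval accuracy. It is not the paper's actual harness: the filler text, needle, and `query_model` stub are assumptions, and `query_model` would need to be replaced with a real API call to one of the evaluated models.

```python
# Illustrative sketch of a long-context retrieval test: embed a "needle"
# sentence in filler text of varying length and check whether the model
# can still answer a question about it. The model call is a stub.

import random

NEEDLE = "The secret code word is: marzipan."
QUESTION = "What is the secret code word?"
FILLER = "The weather report mentioned mild temperatures and light wind. "


def build_prompt(num_filler_sentences: int, needle_position: float) -> str:
    """Place the needle at a relative position inside the filler text."""
    sentences = [FILLER] * num_filler_sentences
    insert_at = int(needle_position * len(sentences))
    sentences.insert(insert_at, NEEDLE + " ")
    return "".join(sentences) + "\n\n" + QUESTION


def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call (e.g. GPT-4.1, Claude 4).
    # It always answers correctly here, just to keep the sketch runnable.
    return "marzipan"


def run_eval(lengths=(100, 1_000, 10_000), trials=5) -> None:
    for n in lengths:
        correct = 0
        for _ in range(trials):
            prompt = build_prompt(n, needle_position=random.random())
            answer = query_model(prompt)
            correct += "marzipan" in answer.lower()
        print(f"{n:>7} filler sentences: {correct}/{trials} correct")


if __name__ == "__main__":
    run_eval()
```

With a real model behind `query_model`, accuracy at each length (and needle position) is what reveals the context-rot effect the video discusses.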

Syllabus

Context Rot: How Increasing Input Tokens Impacts LLM Performance (Paper Analysis)

Taught by

Yannic Kilcher
