Overview
Learn about the surprising limitations of Large Language Models (LLMs) in handling context length in this 15-minute video, which reveals how two-thirds of current LLMs struggle with 2,000-token inputs as of January 2024. Explore detailed performance testing using a 741-word prompt (1,254 tokens), and discover how open-source LLMs unexpectedly outperform major commercial models. Examine the technical implications for Retrieval-Augmented Generation (RAG) systems, and gain insight into the "Lost in the Middle" phenomenon, complete with benchmark data and performance comparisons across different LLM implementations.
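The "Lost in the Middle" testing described above can be sketched in code. The snippet below is a minimal, illustrative probe builder assuming the generic needle-in-context methodology (place a known fact at varying depths in filler text, then ask the model to retrieve it); the filler text, needle, and helper names are hypothetical and not taken from the video, and the actual model call is left out.

```python
# Hypothetical "Lost in the Middle" probe builder: constructs prompts of a
# fixed word budget with a known fact ("needle") inserted at a chosen depth.
# Each prompt would be sent to the LLM under test and scored on whether the
# needle's answer comes back. All names and texts here are illustrative.

FILLER = "The sky was clear and the grass was green. "  # neutral padding
NEEDLE = "The secret code is 4721."
QUESTION = "What is the secret code?"

def build_probe(depth: float, target_words: int = 741) -> str:
    """Build a prompt of roughly `target_words` filler words with the
    needle inserted at fractional `depth` (0.0 = start, 1.0 = end)."""
    filler_words = FILLER.split()
    words: list[str] = []
    while len(words) < target_words:
        words.extend(filler_words)
    words = words[:target_words]
    insert_at = int(depth * len(words))
    words.insert(insert_at, NEEDLE)
    return " ".join(words) + "\n\n" + QUESTION

# Probes at the start, middle, and end of the context window; retrieval
# accuracy typically drops for the middle position in affected models.
for depth in (0.0, 0.5, 1.0):
    prompt = build_probe(depth)
    print(f"depth={depth}: {len(prompt.split())} words")
```

A real evaluation would send each prompt to the model, check the response for "4721", and plot accuracy against insertion depth to expose any mid-context degradation.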
Syllabus
LLMs FAIL at 2K context length - Yours Too?
Taught by
Discover AI