

Why Language Models Hallucinate

Institute for Advanced Study via YouTube

Overview

Explore the phenomenon of hallucination in large language models in this computer science seminar presented by Adam Kalai of OpenAI at the Institute for Advanced Study. Examine why LLMs generate plausible but factually incorrect statements, moving beyond the view of hallucinations as mysterious architectural failures toward understanding them as predictable consequences of standard training practices. Learn how hallucinations can be framed as classification errors: models produce false statements when they cannot reliably distinguish truth from falsehood, rather than expressing uncertainty. Discover how current benchmark optimization encourages guessing over abstention, since typical evaluation metrics penalize expressions of uncertainty. Investigate a potential mitigation strategy: revising benchmarks to reward calibrated abstention, thereby realigning the incentives that shape model development. The underlying research, a collaboration with Santosh Vempala of Georgia Tech and Ofir Nachum and Edwin Zhang of OpenAI, offers insights into improving the reliability and trustworthiness of language model outputs.
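The incentive argument above can be made concrete with a little arithmetic. The sketch below is an illustration, not material from the talk: the 0/1 grading mirrors standard benchmark accuracy, while the penalized scheme, with an assumed confidence threshold t, is only in the spirit of the calibrated-abstention fix the seminar describes. Under 0/1 grading, a model that is p-confident earns expected score p by guessing and 0 by abstaining, so guessing always wins; under the penalized scheme, guessing pays only when p exceeds t.

# Illustrative sketch (assumptions flagged above): expected scores for a model
# that assigns probability p to its best guess being correct.

def expected_scores_binary(p):
    # Standard 0/1 accuracy: correct guess = 1, wrong guess = 0, abstain = 0.
    # Returns (guess, abstain): guessing weakly dominates for any p > 0.
    return p, 0.0

def expected_scores_penalized(p, t=0.75):
    # Hypothetical confidence-targeted grading: a wrong answer costs t/(1-t),
    # so the expected score of guessing, p - (1-p)*t/(1-t), is positive
    # only when p > t; otherwise abstaining (score 0) is the better move.
    return p - (1 - p) * (t / (1 - t)), 0.0

for p in (0.3, 0.6, 0.9):
    print(f"p={p}: binary={expected_scores_binary(p)}, "
          f"penalized={expected_scores_penalized(p)}")

Running this shows that a benchmark-optimized model under binary grading never benefits from saying "I don't know", whereas the penalized scheme makes abstention the rational choice whenever confidence falls below the threshold.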

Syllabus

11:00am | Simonyi Hall 101 and Remote Access

Taught by

Institute for Advanced Study
