Overview
Explore the phenomenon of hallucination in large language models through this computer science seminar presented by Adam Kalai of OpenAI at the Institute for Advanced Study. Examine why LLMs generate plausible but factually incorrect statements, moving beyond viewing them as mysterious architectural failures to understanding them as predictable consequences of standard training practices. Learn how hallucinations can be framed as classification errors: a model that cannot reliably distinguish truth from falsehood produces confident false statements instead of expressing uncertainty. Discover how current benchmark optimization encourages guessing over abstention, since prevailing evaluation metrics effectively penalize expressions of uncertainty. Investigate potential mitigation strategies through benchmark revision that rewards calibrated abstention, realigning incentives in model development. This research collaboration with Santosh Vempala of Georgia Tech and Ofir Nachum and Edwin Zhang of OpenAI offers insights into improving the reliability and trustworthiness of language model outputs.
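To make the scoring-incentive argument concrete, here is a minimal Python sketch of an abstention-aware benchmark scorer. The function name, signature, and the specific threshold value are illustrative assumptions, not details taken from the seminar; the sketch only demonstrates the general idea that penalizing confident errors, rather than treating them like abstentions, makes abstaining the rational choice when the model is unsure.

```python
# Hypothetical sketch of an abstention-aware scoring rule (names and the
# threshold t are illustrative assumptions, not the talk's exact scheme).

def score_response(answer: str | None, gold: str, t: float = 0.75) -> float:
    """Score one benchmark item.

    answer: the model's answer, or None if it abstained ("I don't know").
    t: confidence threshold; a wrong answer costs t/(1-t) points, so the
       expected score of answering is positive only when P(correct) > t.
    """
    if answer is None:           # abstention is neutral, not penalized
        return 0.0
    if answer == gold:           # correct answer earns full credit
        return 1.0
    return -t / (1.0 - t)        # confident errors are penalized

# Under plain accuracy (wrong answer = 0 points), guessing weakly dominates
# abstaining, so models learn to guess. Here a guess that is correct with
# probability p has expected score p - (1 - p) * t / (1 - t), which is
# positive only when p > t, so calibrated abstention is rewarded.
```

The design choice is the penalty ratio t/(1-t): it pins the break-even point of guessing at exactly the stated confidence threshold, so a model maximizes its expected score by answering precisely when its confidence exceeds t and abstaining otherwise.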
Syllabus
11:00am | Simonyi Hall 101 and Remote Access
Taught by
Institute for Advanced Study