Overview
Explore the phenomenon of hallucinations in language models in this lecture by Adam Kalai of OpenAI. Delve into how calibration, a statistical property naturally encouraged during pre-training, can itself give rise to hallucinations. Examine the relationship between hallucination rates and domains using the Good-Turing estimator, with a particular focus on notoriously hallucination-prone outputs such as paper titles, and gain insight into potential methods for mitigating hallucinations. This hour-long talk, part of the Emerging Generalization Settings series at the Simons Institute, presents joint research with Santosh Vempala conducted while Kalai was at Microsoft Research New England.
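To make the Good-Turing connection concrete: roughly speaking, the talk relates a calibrated model's hallucination rate on a domain to the fraction of facts that appear exactly once in the training data, which is the Good-Turing estimate of the probability mass of unseen facts. The following minimal Python sketch computes that estimate; the corpus and names are hypothetical, for illustration only, and are not taken from the lecture.

```python
from collections import Counter

def good_turing_missing_mass(observations):
    """Good-Turing estimate of the probability mass of unseen items:
    the fraction of observations whose item occurs exactly once.
    Roughly, this 'seen-once' rate is what the talk connects to the
    hallucination rate of a calibrated model on a domain."""
    counts = Counter(observations)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(observations)

# Hypothetical toy corpus of facts (e.g., paper titles in training data).
corpus = ["title_a", "title_b", "title_a", "title_c", "title_d", "title_b"]
print(good_turing_missing_mass(corpus))  # 2/6 ≈ 0.33: two titles seen exactly once
```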
Syllabus
When calibration goes awry: hallucination in language models
Taught by
Simons Institute