Learn about cutting-edge techniques for reducing hallucinations in large language models in this 19-minute talk by MIT CSAIL PhD student Yung-Sung Chuang. Explore three complementary approaches to improving factual reliability: DoLa, a decoding method that contrasts output distributions between transformer layers to enhance truthfulness; Lookback Lens, which detects contextual hallucinations using only attention maps and transfers well across tasks and model sizes; and SelfCite, a self-supervised framework that trains LLMs to generate fine-grained citations through context ablation. Discover how these lightweight, scalable methods work together to significantly improve the factual reliability and verifiability of LLM outputs, with SelfCite achieving citation quality comparable to Claude Citations using only an 8B model.
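To make the DoLa idea more concrete, here is a minimal sketch of layer-contrastive decoding for a single next-token step. It assumes a fixed earlier ("premature") layer and toy logits; the actual method also selects the premature layer dynamically and integrates with full transformer decoding, so the `alpha` plausibility threshold and the made-up vocabulary below are illustrative assumptions only.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def dola_next_token(final_logits, premature_logits, alpha=0.1):
    """Pick the next token by contrasting the final layer's distribution
    with an earlier layer's distribution, restricted to tokens the final
    layer already finds plausible (a plausibility constraint).

    NOTE: illustrative sketch; the real DoLa method chooses the premature
    layer adaptively and operates inside the model's decoding loop.
    """
    p_final = softmax(final_logits)
    p_premature = softmax(premature_logits)

    # Candidate set: tokens whose final-layer probability is within a
    # factor `alpha` of the most likely token.
    keep = p_final >= alpha * p_final.max()

    # Contrastive score: how much probability a token *gains* between the
    # premature layer and the final layer (log-ratio of the two).
    scores = np.full_like(p_final, -np.inf)
    scores[keep] = np.log(p_final[keep]) - np.log(p_premature[keep])
    return int(scores.argmax())

# Toy example with a 5-token vocabulary and made-up per-layer logits:
final = np.array([2.0, 1.8, 0.5, -1.0, -2.0])
premature = np.array([2.5, 0.5, 0.5, -1.0, -2.0])
print(dola_next_token(final, premature))  # -> 1: the token that gains most in later layers
```

In this toy case, token 0 is likely at both layers while token 1 becomes likely only in the final layer, so the contrastive score favors token 1; that preference for knowledge "acquired" in later layers is the intuition behind DoLa's improvement in truthfulness.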