Overview
Explore strategies for monitoring and mitigating hallucinations in Large Language Models (LLMs) deployed to production. Delve into state-of-the-art solutions for detecting hallucinations, focusing on Uncertainty Quantification and LLM self-evaluation. Learn how token probabilities can be used to estimate response quality, from simple accuracy estimates to more advanced Semantic Uncertainty methods. Discover how LLMs themselves can be used to quantify answer quality, and explore algorithms such as SelfCheckGPT and LLM-Eval. Gain an intuitive understanding of LLM monitoring methods and their strengths and weaknesses, and learn to set up an effective LLM monitoring system. Topics covered include an introduction to LLM monitoring, consistency-based and answer-evaluation-based hallucination detection, output uncertainty quantification, semantic uncertainty quantification, and experimental results.
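To make the two detection families named above concrete, here is a minimal, illustrative Python sketch (not course material). It assumes an LLM API that exposes per-token log-probabilities (as many completion APIs do) and that several answers have been re-sampled at nonzero temperature; the function names and toy data are hypothetical, and the word-level Jaccard overlap merely stands in for SelfCheckGPT's stronger similarity measures (NLI, QA, or n-gram scoring).

```python
import math
from itertools import combinations

def mean_logprob_confidence(token_logprobs):
    """Average token log-probability; values closer to 0 indicate the
    model assigned higher probability to its own wording."""
    return sum(token_logprobs) / len(token_logprobs)

def perplexity(token_logprobs):
    """Per-token perplexity derived from the same log-probabilities;
    lower is more confident."""
    return math.exp(-mean_logprob_confidence(token_logprobs))

def pairwise_overlap(a, b):
    """Crude lexical agreement between two sampled answers (word-level
    Jaccard). A stand-in for SelfCheckGPT's NLI/QA/n-gram variants."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def consistency_score(samples):
    """Mean agreement across all pairs of re-sampled answers; low
    agreement suggests the model answers differently each time,
    a common signature of hallucination."""
    pairs = list(combinations(samples, 2))
    return sum(pairwise_overlap(a, b) for a, b in pairs) / len(pairs)

if __name__ == "__main__":
    # Hypothetical per-token log-probabilities from one LLM response.
    logprobs = [-0.05, -0.2, -1.3, -0.4, -0.1]
    print(f"mean log-prob: {mean_logprob_confidence(logprobs):.3f}")
    print(f"perplexity:    {perplexity(logprobs):.3f}")

    # Hypothetical answers re-sampled for the same question.
    samples = [
        "The Eiffel Tower was completed in 1889.",
        "It was completed in 1889 for the World's Fair.",
        "Construction finished in 1887.",  # an inconsistent sample
    ]
    print(f"consistency:   {consistency_score(samples):.3f}")
```

In a real monitoring pipeline, both signals would be computed per response and alerted on via thresholds; the course covers when each signal is reliable and when it fails.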
Syllabus
Introduction
What is LLM Monitoring?
LLM-Based Hallucination Detection: Consistency
LLM-Based Hallucination Detection: Answer Evaluation
Output Uncertainty Quantification
Semantic Uncertainty Quantification
Experiment Results
Taught by
Data Science Dojo