
Hallucination-Free LLMs: Strategies for Monitoring and Mitigation

Data Science Dojo via YouTube

Overview

Explore strategies for monitoring and mitigating hallucinations in Large Language Models (LLMs) deployed to production. Delve into state-of-the-art solutions for detecting hallucinations, focusing on Uncertainty Quantification and LLM self-evaluation. Learn about leveraging token probabilities to estimate response quality, including simple accuracy estimation and advanced methods for Semantic Uncertainty. Discover how to use LLMs to quantify answer quality and explore cutting-edge algorithms like SelfCheckGPT and LLM-eval. Gain an intuitive understanding of LLM monitoring methods, their strengths and weaknesses, and learn to set up an effective LLM monitoring system. Topics covered include an introduction to LLM monitoring, consistency-based and answer evaluation-based hallucination detection, output uncertainty quantification, semantic uncertainty quantification, and experimental results.
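The token-probability approach described above can be sketched in a few lines. This is a minimal illustration, not code from the talk: it assumes you already have per-token log probabilities for a generated answer (as returned by most LLM APIs) and uses their geometric mean as a rough confidence score, so low scores can flag responses for review.

```python
import math

def sequence_confidence(token_logprobs):
    """Estimate response quality from per-token log probabilities.

    Returns the geometric mean of the token probabilities, i.e.
    exp(mean log probability). Values near 1.0 suggest the model was
    confident in each token; low values can flag possible hallucinations.
    """
    if not token_logprobs:
        raise ValueError("need at least one token log probability")
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(mean_logprob)

# Hypothetical log probabilities for two generated answers:
confident = sequence_confidence([-0.05, -0.02, -0.1, -0.01])   # ~0.96
uncertain = sequence_confidence([-2.3, -1.8, -2.9, -1.5])      # ~0.12
```

Simple scores like this are cheap to compute in production but, as the talk notes, they capture only output-level uncertainty; semantic methods go further by comparing the meaning of alternative generations.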

Syllabus

Introduction
What is LLM Monitoring
LLM-Based Hallucination Detection: Consistency
LLM-Based Hallucination Detection: Answer Evaluation
Output Uncertainty Quantification
Semantic Uncertainty Quantification
Experiment Results
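The consistency-based detection covered in the syllabus can be sketched as follows. This is an illustrative SelfCheckGPT-style toy, not the algorithm from the talk: it resamples several answers to the same prompt and measures how well they agree with the main answer, using a crude token-overlap similarity (a hypothetical `jaccard` helper) in place of an LLM-based comparison.

```python
def jaccard(a, b):
    """Token-set overlap between two texts (crude similarity proxy)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def consistency_score(answer, samples):
    """Average similarity between the main answer and resampled answers.

    In consistency-based monitoring, a low score (the resampled answers
    disagree with the main one) is evidence of possible hallucination.
    """
    if not samples:
        raise ValueError("need at least one sampled answer")
    return sum(jaccard(answer, s) for s in samples) / len(samples)

# Hypothetical resampled generations for the same prompt:
consistent = consistency_score(
    "paris is the capital of france",
    ["paris is the capital of france", "the capital of france is paris"],
)
inconsistent = consistency_score(
    "paris is the capital of france",
    ["lyon is the largest city", "france has many rivers"],
)
```

A production system would replace the token-overlap proxy with a semantic comparison (e.g. an LLM judging whether two answers entail each other), which is the intuition behind the semantic uncertainty methods listed above.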

Taught by

Data Science Dojo

