
Hallucination-Free LLMs: Strategies for Monitoring and Mitigation

Linux Foundation via YouTube

Overview

Explore strategies for monitoring and mitigating hallucinations in Large Language Models (LLMs) in this 39-minute conference talk by Wojtek Kuberski of NannyML. Gain insight into why and how to monitor LLMs in production environments, with a focus on state-of-the-art solutions for hallucination detection. Delve into the two main approaches: uncertainty quantification and LLM self-evaluation. Learn how token probabilities can be leveraged to estimate the quality of a model's responses, from simple accuracy estimation to more advanced semantic uncertainty methods. Discover techniques in which LLMs assess the quality of their own output, including algorithms such as SelfCheckGPT and LLM-Eval. Develop an intuitive understanding of the various LLM monitoring methods, their strengths and weaknesses, and learn how to set up an effective LLM monitoring system.
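
To make the two approaches concrete, below is a minimal, self-contained Python sketch (not taken from the talk) of both ideas: a token-probability-based uncertainty score, and a SelfCheckGPT-style self-consistency check. The log-probability values and sampled responses are hypothetical placeholders, and the published SelfCheckGPT method scores consistency with NLI or BERTScore models rather than the simple token overlap used here.

import math

def sequence_perplexity(token_logprobs):
    # Perplexity of a generated answer from its per-token log-probabilities;
    # higher perplexity means the model was less certain about its output,
    # which empirically correlates with a higher risk of hallucination.
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(-mean_logprob)

def self_consistency_score(answer, samples):
    # SelfCheckGPT-style check: re-ask the same question several times at a
    # non-zero temperature and measure how consistent the original answer is
    # with the re-sampled ones. Plain token overlap is used here only to keep
    # the sketch dependency-free.
    answer_tokens = set(answer.lower().split())
    overlaps = [
        len(answer_tokens & set(sample.lower().split())) / len(answer_tokens)
        for sample in samples
    ]
    # Near 1.0 = consistent; near 0.0 = likely hallucinated.
    return sum(overlaps) / len(overlaps)

# Hypothetical inputs: the log-probabilities would come from the model API's
# logprobs option, and the samples from re-querying the same prompt.
logprobs = [-0.05, -0.21, -1.34, -0.42]
print(f"perplexity: {sequence_perplexity(logprobs):.2f}")  # ~1.66

score = self_consistency_score(
    "Paris is the capital of France",
    ["The capital of France is Paris", "Paris is France's capital city"],
)
print(f"self-consistency: {score:.2f}")  # 0.75

In a monitoring setup like the one the talk describes, scores of this kind would be computed and logged for every production response, with alerts raised when perplexity or inconsistency crosses a chosen threshold.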

Syllabus

Hallucination-Free LLMs: Strategies for Monitoring and Mitigation - Wojtek Kuberski, NannyML

Taught by

Linux Foundation

