Overview
Explore strategies for monitoring and mitigating hallucinations in Large Language Models (LLMs) in this 39-minute conference talk by Wojtek Kuberski of NannyML. Gain insight into why and how to monitor LLMs in production environments, with a focus on state-of-the-art solutions for hallucination detection. Delve into the two main approaches: uncertainty quantification and LLM self-evaluation. Learn how token probabilities can be leveraged to estimate the quality of a model's responses, from simple accuracy estimation to more advanced methods such as semantic uncertainty. Discover techniques that use LLMs to assess the quality of their own output, covering algorithms such as SelfCheckGPT and LLM-eval. Develop an intuitive understanding of the various LLM monitoring methods and their strengths and weaknesses, and learn how to set up an effective LLM monitoring system.
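The two families of methods mentioned above can be sketched in a heavily simplified form. The snippet below is an illustrative assumption, not the talk's actual algorithms: `sequence_confidence` turns token log-probabilities into a rough confidence score, and `self_consistency_score` approximates the SelfCheckGPT idea (agreement between resampled generations) with crude word overlap, where the real method uses much stronger consistency measures.

```python
import math

def sequence_confidence(token_logprobs):
    """Uncertainty quantification sketch: the geometric-mean token
    probability of a generated answer. Low values mean the model was
    unsure of its own tokens, one possible signal of hallucination."""
    if not token_logprobs:
        raise ValueError("no tokens")
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def self_consistency_score(answer, samples):
    """LLM self-evaluation sketch in the spirit of SelfCheckGPT:
    resample the model several times and measure how well the answer
    agrees with the samples. Agreement here is plain Jaccard word
    overlap, a deliberate simplification."""
    def jaccard(a, b):
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0
    return sum(jaccard(answer, s) for s in samples) / len(samples)

# Hypothetical per-token log-probabilities for two answers.
confident = [-0.05, -0.10, -0.02]
uncertain = [-2.30, -1.90, -2.70]
print(sequence_confidence(confident) > sequence_confidence(uncertain))  # True

# High agreement across resamples -> likely grounded; low -> suspect.
answer = "Paris is the capital of France"
print(self_consistency_score(answer, ["The capital of France is Paris"]))  # 1.0
print(self_consistency_score(answer, ["Berlin is in Germany"]) < 0.2)      # True
```

In practice the token log-probabilities would come from the model's API (many providers expose them per generated token), and the consistency check would use an NLI model or an LLM judge rather than word overlap.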
Syllabus
Hallucination-Free LLMs: Strategies for Monitoring and Mitigation - Wojtek Kuberski, NannyML
Taught by
Linux Foundation