Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Lies, Damned Lies, and Large Language Models - Measuring and Reducing Hallucinations

EuroPython Conference via YouTube

Overview

Coursera Spring Sale
40% Off Coursera Plus Annual!
Grab it
Explore the challenges and solutions surrounding large language models' (LLMs) tendency to produce incorrect information or "hallucinate" in this 29-minute conference talk from EuroPython 2024. Delve into the main causes of hallucinations in LLMs and learn how to measure specific types of misinformation using the TruthfulQA dataset. Discover practical techniques for assessing hallucination rates and comparing different models using Python tools like Hugging Face's `datasets` and `transformers` packages, as well as the `langchain` package. Gain insights into recent initiatives aimed at reducing hallucinations, with a focus on retrieval augmented generation (RAG) and its potential to enhance the reliability and usability of LLMs across various contexts.

Syllabus

Lies, damned lies and large language models — Jodie Burchell

Taught by

EuroPython Conference

Reviews

Start your review of Lies, Damned Lies, and Large Language Models - Measuring and Reducing Hallucinations

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.