

Stopping Hallucinations From Hurting Your LLMs - Part 2

MLOps.community via YouTube

Overview

Explore the critical issue of hallucinations in Large Language Models (LLMs) in this 15-minute conference talk by Atindriyo Sanyal, founder and CTO of Galileo. The talk defines what hallucinations mean in modern LLM workflows and examines their impact on model outcomes and downstream consumers. It presents novel, efficient metrics and methods for detecting hallucinations early, with the aim of preventing disinformation and poor or biased outputs, and explains how addressing this evaluation metric increases trust in LLM systems. Sanyal draws on his experience building large-scale ML platforms at companies such as Uber and Apple.

Syllabus

Stopping Hallucinations From Hurting Your LLMs // Atindriyo Sanyal // LLMs in Prod Conference Part 2

Taught by

MLOps.community

