
Combining LLMs with Knowledge Bases to Prevent Hallucinations

MLOps.community via YouTube

Overview

Explore strategies for preventing hallucinations in Large Language Models (LLMs) in this 44-minute conference talk by Scott Mackie at LLMs in Prod Con 2. Dive into the concept of LLM hallucinations and learn how to keep LLMs grounded and reliable for real-world applications. Follow along as Mackie walks through an "LLM-powered Support Center" implementation to illustrate hallucination-related problems. Discover how integrating a searchable knowledge base can make AI-generated responses more trustworthy. Examine the scalability of this approach and its potential impact on future AI-driven applications. Gain insights from Mackie's experience as a Staff Engineer at Mem, where he works on scaling LLM pipeline systems for AI workspaces.
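
The talk itself is not accompanied by code, but the grounding pattern it describes is commonly implemented as retrieval-augmented generation: search a knowledge base, inject the results into the prompt, and instruct the model to answer only from that context. Below is a minimal, illustrative sketch (not from the talk) of that flow. The knowledge-base contents, the naive keyword-overlap retrieval, and the call_llm stub are all assumptions for illustration; a real system would use embedding-based search and an actual model API.

```python
# Minimal retrieval-grounded prompting sketch (illustrative only; not from the talk).
# Retrieval here is naive keyword overlap; production systems typically use
# embedding-based vector search instead.

KNOWLEDGE_BASE = [  # Hypothetical support-center articles.
    {"id": "kb-1", "text": "Refunds are available within 30 days of purchase."},
    {"id": "kb-2", "text": "Support is available Monday through Friday, 9am to 5pm."},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Rank knowledge-base entries by how many query words they share."""
    words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that confines the model to the retrieved context."""
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in retrieve(query))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with your model provider's API call.
    raise NotImplementedError

if __name__ == "__main__":
    print(build_grounded_prompt("How long do I have to request a refund?"))
```

The "answer only from the context, otherwise say you don't know" instruction is the core of the anti-hallucination idea: it gives the model a sanctioned way to decline instead of inventing an answer.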

Syllabus

Combining LLMs with Knowledge Bases to Prevent Hallucinations // Scott Mackie // LLMs in Prod Con 2

Taught by

MLOps.community

