Overview
Explore strategies for preventing hallucinations in Large Language Models (LLMs) in this 44-minute conference talk by Scott Mackie at LLMs in Prod Con 2. Dive into the concept of LLM hallucinations and learn how to keep LLMs grounded and reliable for real-world applications. Follow along as Mackie walks through an "LLM-powered Support Center" implementation to illustrate hallucination-related problems. Discover how integrating a searchable knowledge base can make AI-generated responses more trustworthy, examine how the approach scales, and consider its potential impact on future AI-driven applications. Gain insights from Mackie's experience as a Staff Engineer at Mem, where he works on scaling LLM pipeline systems for AI workspaces.
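The core idea the talk covers, grounding an LLM in a searchable knowledge base so it cites sources or refuses rather than inventing answers, can be sketched roughly as follows. This is a minimal illustration, not Mackie's actual implementation: the knowledge-base entries, the word-overlap retrieval, and the prompt format are all hypothetical stand-ins for a real retrieval system.

```python
# Sketch of knowledge-base grounding to reduce hallucinations.
# KB contents, scoring, and prompt wording are illustrative assumptions.

KNOWLEDGE_BASE = [
    {"id": "kb-1", "text": "Password resets are handled at the account settings page."},
    {"id": "kb-2", "text": "Refunds are processed within 5 business days of approval."},
]

def retrieve(query, kb=KNOWLEDGE_BASE, min_overlap=1):
    """Return KB entries sharing at least `min_overlap` words with the query."""
    query_words = set(query.lower().split())
    hits = []
    for entry in kb:
        overlap = len(query_words & set(entry["text"].lower().split()))
        if overlap >= min_overlap:
            hits.append((overlap, entry))
    hits.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for _, entry in hits]

def build_grounded_prompt(query):
    """Build a prompt that restricts the model to retrieved sources,
    or instructs it to refuse when nothing relevant is found."""
    docs = retrieve(query)
    if not docs:
        return "No relevant knowledge found. Reply: 'I don't know.'"
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in docs)
    return (
        "Answer ONLY from the sources below, citing their ids.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("How do refunds work?"))
```

In a production support center, the word-overlap scorer would be replaced by semantic (embedding-based) search, but the grounding pattern is the same: retrieve first, answer only from what was retrieved, and fall back to an explicit "I don't know" when retrieval comes up empty.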
Syllabus
Combining LLMs with Knowledge Bases to Prevent Hallucinations // Scott Mackie // LLMs in Prod Con 2
Taught by
MLOps.community