Hallucination Mitigation in RAG using LLM Steering and Qdrant

Qdrant - Vector Database & Search Engine via YouTube

Overview

Learn how to combine retrieval-augmented generation (RAG) with LLM steering techniques to create more reliable and grounded AI outputs in this 55-minute community presentation. Discover how RAG keeps your model anchored to real data by retrieving relevant information from documents or databases before response generation, while steering provides precise control over model behavior at inference time without requiring retraining. Explore practical strategies for addressing two critical challenges in generative AI: outdated knowledge bases and uncontrolled response generation. Master the integration of these complementary approaches to significantly reduce hallucinations and improve the reliability of your AI applications using Qdrant's vector database capabilities.
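The grounding step described above can be sketched with a toy example. The snippet below is a minimal, self-contained illustration of the RAG idea, retrieving the most relevant documents by vector similarity and anchoring the prompt to them; it uses a hand-made in-memory store and made-up embeddings as a stand-in for a real Qdrant collection and an embedding model, and all names (`retrieve`, `build_grounded_prompt`) are hypothetical.

```python
import math

# Toy in-memory "vector store" standing in for a Qdrant collection: each
# document is paired with a hand-made embedding. In a real pipeline the
# vectors would come from an embedding model.
DOCS = [
    ("Qdrant is an open-source vector database.", [0.9, 0.1, 0.0]),
    ("RAG retrieves documents before the LLM generates a response.", [0.1, 0.9, 0.1]),
    ("Steering adjusts model behavior at inference time.", [0.0, 0.2, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=2):
    """Return the top-k documents most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_grounded_prompt(question, query_vec):
    """Anchor the prompt in retrieved context so the model answers
    from real data instead of relying on stale parametric knowledge."""
    context = "\n".join(retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_grounded_prompt("What does RAG do?", [0.2, 0.9, 0.1])
print(prompt)
```

Steering, the complementary half of the approach, is not shown here: it operates inside the model at inference time (for example by adjusting activations or decoding behavior) rather than on the prompt, which is why the two techniques combine without interfering.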

Syllabus

Hallucination Mitigation in RAG using LLM Steering and Qdrant

Taught by

Qdrant - Vector Database & Search Engine

