Hallucination Mitigation in RAG using LLM Steering and Qdrant
Qdrant - Vector Database & Search Engine via YouTube
Overview
Learn how to combine retrieval-augmented generation (RAG) with LLM steering techniques to produce more reliable, grounded AI outputs in this 55-minute community presentation. Discover how RAG anchors your model to real data by retrieving relevant information from documents or databases before response generation, while steering offers precise control over model behavior at inference time without retraining. Explore practical strategies for two critical challenges in generative AI: outdated knowledge bases and uncontrolled response generation. Learn to integrate these complementary approaches to significantly reduce hallucinations and improve the reliability of your AI applications using Qdrant's vector database capabilities.
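The retrieve-before-generate pattern described above can be sketched in a few lines. The toy documents, vectors, and helper names below are hypothetical illustrations; in practice Qdrant performs the similarity search over real embeddings, and the grounded prompt is then sent to the LLM.

```python
from math import sqrt

# Hypothetical in-memory corpus: (text, embedding) pairs. In a real
# pipeline these vectors would come from an embedding model and live
# in a Qdrant collection rather than a Python list.
DOCS = [
    ("Qdrant stores vectors and supports filtered similarity search.", [0.9, 0.1, 0.0]),
    ("Steering vectors adjust LLM activations at inference time.", [0.1, 0.9, 0.0]),
    ("RAG retrieves context before the model generates a response.", [0.4, 0.4, 0.2]),
]

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query vector; this is the
    # step Qdrant handles at scale with ANN search and filtering.
    ranked = sorted(DOCS, key=lambda d: cosine(d[1], query_vec), reverse=True)
    return [text for text, _ in ranked[:k]]

def grounded_prompt(question, query_vec):
    # Anchor the model to retrieved context instead of its parametric
    # memory -- the core hallucination-mitigation move in RAG.
    context = "\n".join(retrieve(query_vec))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

prompt = grounded_prompt("How does RAG reduce hallucinations?", [0.4, 0.4, 0.2])
print(prompt)
```

Steering would then act on the generation side: a steering vector added to the model's hidden activations at inference time nudges the response toward the retrieved context, complementing the retrieval step shown here.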
Syllabus
Hallucination Mitigation in RAG using LLM Steering and Qdrant
Taught by
Qdrant - Vector Database & Search Engine