Hallucination Mitigation in RAG using LLM Steering and Qdrant
Qdrant - Vector Database & Search Engine via YouTube
Overview
Learn how to combine retrieval-augmented generation (RAG) with LLM steering techniques to produce more reliable, grounded AI outputs in this 55-minute community presentation. RAG keeps the model anchored to real data by retrieving relevant information from documents or databases before response generation, while steering provides precise control over model behavior at inference time without retraining. The talk explores practical strategies for two critical challenges in generative AI: outdated knowledge bases and uncontrolled response generation, and shows how to integrate these complementary approaches with Qdrant's vector database capabilities to significantly reduce hallucinations and improve the reliability of AI applications.
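The retrieval-grounding pattern the talk describes can be sketched in a few lines. The snippet below is a minimal, dependency-free illustration: it ranks a toy corpus by cosine similarity and builds a prompt that confines the model to the retrieved context. The three-dimensional vectors, the corpus, and the `grounded_prompt` helper are invented for illustration; in practice the embeddings come from an embedding model and the similarity search is delegated to a vector database such as Qdrant.

```python
import math

# Toy corpus with hand-made "embeddings". In a real system these vectors
# come from an embedding model and live in a vector database like Qdrant.
DOCS = [
    ("Qdrant is an open-source vector database.", [0.9, 0.1, 0.0]),
    ("RAG retrieves documents before generation.", [0.1, 0.9, 0.1]),
    ("Steering adjusts model behavior at inference time.", [0.0, 0.2, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """Return the texts of the k documents most similar to the query."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def grounded_prompt(question, query_vec):
    """Anchor generation to retrieved context instead of parametric memory."""
    context = "\n".join(retrieve(query_vec))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

Because the final prompt carries the retrieved passages, the model is pushed toward answering from fresh, verifiable data rather than from possibly stale training memory, which is the hallucination-mitigation mechanism the presentation builds on.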
Syllabus
Hallucination Mitigation in RAG using LLM Steering and Qdrant
Taught by
Qdrant - Vector Database & Search Engine