From LLM-as-a-Judge to Human-in-the-Loop - Rethinking Evaluation in RAG and Search
OpenSource Connections via YouTube
Overview
Explore advanced evaluation techniques for Retrieval-Augmented Generation (RAG) systems and search technologies in this 46-minute conference talk from Haystack EU 2025. Learn how to move beyond traditional LLM-as-a-judge approaches with RAGElo, an Elo-style ranking framework that compares LLM outputs pairwise without requiring gold-standard answers, bringing systematic structure to subjective judgments at scale. Discover how RAGElo integrates with the Search Relevance Workbench in OpenSearch 3, a human-in-the-loop toolkit designed for deep analysis of search results, comparison of configurations, and identification of issues that standard metrics often overlook. Master the balance between automation and human intuition to build more reliable retrieval and generation systems, and confront the critical challenge of evaluating the evaluators themselves in complex RAG implementations, where prompts, filters, and retrieval strategies create countless variations.
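To make the Elo-style idea concrete, here is a minimal sketch of how pairwise judge verdicts can be turned into a ranking of competing RAG configurations. This is an illustration of the general Elo update, not RAGElo's actual API; the pipeline names and judge verdicts below are hypothetical.

```python
from collections import defaultdict

K = 32  # standard Elo K-factor; controls how quickly ratings move


def expected_score(r_a: float, r_b: float) -> float:
    """Probability that configuration A beats configuration B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))


def update(ratings: dict, winner: str, loser: str, draw: bool = False) -> None:
    """Apply one pairwise judgment (e.g. an LLM judge's verdict) to both ratings."""
    s_a = 0.5 if draw else 1.0  # actual score credited to the 'winner' side
    e_a = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K * (s_a - e_a)
    ratings[loser] += K * ((1.0 - s_a) - (1.0 - e_a))


# Hypothetical pairwise verdicts from an LLM judge over the same query set:
# each tuple is (preferred_pipeline, other_pipeline, is_draw).
verdicts = [
    ("hybrid-bm25-dense", "bm25-only", False),
    ("hybrid-bm25-dense", "dense-only", False),
    ("dense-only", "bm25-only", True),
]

ratings = defaultdict(lambda: 1000.0)  # every configuration starts at 1000
for winner, loser, draw in verdicts:
    update(ratings, winner, loser, draw)

for name, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rating:.1f}")
```

Because each update is zero-sum and purely pairwise, a new configuration can be slotted into the ranking by judging it against a few existing ones, which is what makes this style of comparison tractable when prompts, filters, and retrieval strategies multiply into countless variations.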
Syllabus
Haystack EU 2025: From LLM-as-a-Judge to Human-in-the-Loop: Rethinking Evaluation in RAG and Search
Taught by
OpenSource Connections