
From LLM-as-a-Judge to Human-in-the-Loop - Rethinking Evaluation in RAG and Search

OpenSource Connections via YouTube

Overview

Explore advanced evaluation techniques for Retrieval-Augmented Generation (RAG) systems and search technologies in this 46-minute conference talk from Haystack EU 2025. Learn how to move beyond traditional LLM-as-a-judge approaches with RAGElo, an Elo-style ranking framework that compares LLM outputs through pairwise judgments rather than gold-standard answers, bringing systematic structure to subjective assessments at scale. Discover how RAGElo integrates with the Search Relevance Workbench in OpenSearch 3, a human-in-the-loop toolkit for deep analysis of search results, side-by-side configuration comparisons, and identification of issues that standard metrics often miss. Learn to balance automation with human intuition to build more reliable retrieval and generation systems, addressing the critical challenge of evaluating the evaluators themselves in complex RAG implementations where prompts, filters, and retrieval strategies create countless variations.
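The Elo-style approach described above can be sketched in a few lines. This is an illustrative example of the underlying idea, not RAGElo's actual API: an LLM judge picks the better of two answers to the same query, and standard Elo updates turn those pairwise preferences into a global ranking without any gold-standard answers. All names (the system labels, `K`, the judgment list) are hypothetical.

```python
# Illustrative Elo-style ranking over competing RAG pipeline variants.
# Hypothetical names throughout; this is not the RAGElo API, only a sketch
# of the pairwise-comparison idea the talk describes.

K = 32              # update step size per judgment
BASE_RATING = 1000.0

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that system A beats system B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, a: str, b: str, a_wins: bool) -> None:
    """Apply one pairwise judgment (e.g. from an LLM judge) to the ratings."""
    e_a = expected_score(ratings[a], ratings[b])
    s_a = 1.0 if a_wins else 0.0
    ratings[a] += K * (s_a - e_a)
    ratings[b] += K * ((1.0 - s_a) - (1.0 - e_a))

# Hypothetical RAG configurations being compared
ratings = {"bm25": BASE_RATING, "hybrid": BASE_RATING, "rerank": BASE_RATING}

# Pairwise judgments, e.g. an LLM judge's verdicts on answers to shared queries
judgments = [
    ("hybrid", "bm25", True),
    ("rerank", "bm25", True),
    ("rerank", "hybrid", True),
    ("hybrid", "bm25", True),
]
for a, b, a_wins in judgments:
    update(ratings, a, b, a_wins)

ranking = sorted(ratings, key=ratings.get, reverse=True)
print(ranking)  # "rerank" ranks first after winning both of its matchups
```

Because only relative preferences are needed, new pipeline variants can be ranked by adding a handful of pairwise judgments instead of re-annotating a gold set — which is what makes the approach practical when prompts, filters, and retrieval strategies multiply into many configurations.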

Syllabus

Haystack EU 2025: From LLM-as-a-Judge to Human-in-the-Loop: Rethinking Evaluation in RAG and Search

Taught by

OpenSource Connections

