Overview
Explore a comprehensive technical video that delves into RAGAS, a groundbreaking framework for evaluating Retrieval-Augmented Generation (RAG) systems. Learn about the challenges of RAG evaluation and discover how RAGAS addresses these through three key criteria: faithfulness, answer relevance, and context relevance. Understand the fundamentals of RAG systems, examine practical examples using the arXiv Dive Bot, and investigate the WikiEval dataset. Master the intricacies of each evaluation criterion while gaining hands-on experience with Oxen AI's dataset versioning capabilities. Connect with the community through Discord, access supplementary materials including the original research paper, and leverage provided datasets to enhance your understanding of RAG evaluation methodologies.
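To make the three criteria concrete, here is a minimal conceptual sketch of faithfulness scoring: the fraction of statements in a generated answer that are supported by the retrieved context. The function names and the keyword-overlap "support" check below are hypothetical simplifications; RAGAS itself prompts an LLM to extract and verify statements rather than matching words.

```python
# Conceptual sketch of RAGAS-style faithfulness scoring (not the library's API).
# The word-overlap support check is a toy stand-in for an LLM-based verifier.

def split_statements(answer: str) -> list[str]:
    """Naively split an answer into individual statements on periods."""
    return [s.strip() for s in answer.split(".") if s.strip()]

def is_supported(statement: str, context: str) -> bool:
    """Toy support check: every word of the statement appears in the context."""
    words = [w.lower().strip(",") for w in statement.split()]
    return all(w in context.lower() for w in words)

def faithfulness(answer: str, context: str) -> float:
    """Fraction of answer statements supported by the retrieved context."""
    statements = split_statements(answer)
    if not statements:
        return 0.0
    supported = sum(is_supported(s, context) for s in statements)
    return supported / len(statements)

context = "Oxen is a dataset versioning tool. Oxen supports large datasets."
print(faithfulness("Oxen is a dataset versioning tool", context))  # 1.0
print(faithfulness("Oxen is a dataset versioning tool. Oxen was made in 2010",
                   context))  # 0.5
```

Answer relevance and context relevance follow the same pattern, scoring how directly the answer addresses the question and how much of the retrieved context is actually needed, respectively.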
Syllabus
Intro to the arXiv Dive
What is RAGAS?
Quick Recap on What RAG is
Intro to the arXiv Dive Bot
Why Evaluating RAG is Hard
Intro to the Example Dataset
Enter RAGAS
The First Criterion: Faithfulness
The Second Criterion: Answer Relevance
The WikiEval Dataset
The Third Criterion: Context Relevance
Conclusion
Taught by
Oxen