
Benchmarking LLMs: Metrics, Challenges, and Best Practices for Evaluation

DevConf via YouTube

Overview

This conference talk from DevConf.IN 2025 explores the challenges of evaluating Large Language Models (LLMs) for enterprise adoption. Presented by Ravindra Patil, the 35-minute session examines why traditional metrics such as perplexity and BLEU fall short in assessing LLMs' real-world capabilities. It covers current benchmarking best practices, the limitations of existing approaches, and emerging evaluation techniques essential for responsible AI implementation. The talk surveys both qualitative and quantitative metrics, spanning task-specific benchmarks (code generation, summarization) and user-centric evaluations (coherence, creativity, bias detection), and shows how specialized benchmarks test LLMs on ethical and explainability grounds. By the end of the talk, you will gain insights into selecting LLMs that balance accuracy, efficiency, and fairness, and understand the improvements in Granite 3.0 that enhance its performance as an LLM.
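To make the limitation concrete, here is a minimal sketch of perplexity, one of the traditional metrics the talk critiques. The function and token probabilities are illustrative assumptions, not from the talk itself: perplexity rewards models for assigning high probability to reference text, but says nothing about coherence, factuality, or fairness.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(-(1/N) * sum(log p_i)).

    Lower is nominally better, but a low score only means the model
    predicts the test tokens well -- not that its outputs are useful,
    safe, or unbiased, which is why the talk argues such metrics
    fall short for real-world evaluation.
    """
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# A model assigning uniform probability 0.25 to each of 4 tokens
# behaves like a uniform choice over 4 options: perplexity is 4.0.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

This is why the talk pairs quantitative scores with user-centric and ethics-focused benchmarks: the number above is easy to compute but captures only next-token predictability.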

Syllabus

Benchmarking LLMs: Metrics, Challenges, and Best Practices for Evaluation - DevConf.IN 2025

Taught by

DevConf

