LLM Evaluation: Auditing Fine-Tuned LLMs for Guaranteed Output Quality

Databricks via YouTube

Overview

Explore techniques for evaluating and improving fine-tuned large language models (LLMs) in this 33-minute conference talk by Mirakl data scientists Loic Pauletto and Pierre Lourdelet. Delve into the challenges of extracting information from e-commerce product data sheets and learn how Mirakl built a solution around fine-tuned LLMs. Discover qualitative evaluation methods, including language-quality metrics and hallucination detection, and understand how to leverage MLflow to automate LLM evaluation and monitoring. Gain insights into iterative quality improvement through prompt engineering and dataset refinement, and learn how these methods enable rapid iteration on prompts and fine-tuned models to reach production-level trustworthiness. Additional resources such as the LLM Compact Guide and the Big Book of MLOps are available to expand your knowledge further.
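To give a flavor of the MLflow-based evaluation the talk covers, here is a minimal sketch of scoring a fine-tuned model's answers against a reference set with mlflow.evaluate. The column names, the stub model my_finetuned_llm, and the question-answering metric choice are illustrative assumptions, not the speakers' actual pipeline.

    import mlflow
    import pandas as pd

    # Hypothetical evaluation set: product-sheet questions with reference answers.
    eval_data = pd.DataFrame({
        "inputs": [
            "What is the screen size of product X?",
            "What material is product Y made of?",
        ],
        "ground_truth": ["15.6 inches", "stainless steel"],
    })

    def my_finetuned_llm(question: str) -> str:
        # Stand-in for the real fine-tuned model; returns a canned answer here.
        return "15.6 inches"

    def predict(df: pd.DataFrame) -> pd.Series:
        # mlflow.evaluate calls this with the feature columns of `data`
        # and expects one prediction per row.
        return df["inputs"].map(my_finetuned_llm)

    with mlflow.start_run():
        results = mlflow.evaluate(
            model=predict,
            data=eval_data,
            targets="ground_truth",
            model_type="question-answering",  # built-in QA metrics, e.g. exact_match
        )
        print(results.metrics)

Some of the default question-answering metrics pull in extra dependencies (such as the evaluate and torch packages), and logging runs this way is what enables the automated monitoring and iterative refinement described above.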

Syllabus

LLM Evaluation: Auditing Fine-Tuned LLMs for Guaranteed Output Quality

Taught by

Databricks

