
LLM Evaluation Framework for Crafting Delightful Content from Messy Inputs

MLOps World: Machine Learning in Production via YouTube

Overview

Explore an evaluation framework for assessing the quality of Large Language Model (LLM) outputs when transforming diverse, messy textual inputs into refined content. This 32-minute conference talk by Shin Liang, Senior Machine Learning Engineer at Canva, examines the challenges of objectively evaluating LLM outcomes on subjective, unstructured tasks. Learn about general evaluation metrics such as relevance, fluency, and coherence, as well as task-specific metrics such as information preservation rate, accuracy of title/heading understanding, and key information extraction scores. Discover how this framework can be applied to similar LLM tasks, providing practical guidance for crafting high-quality content from complex inputs.

Syllabus

LLM Evaluation to Craft Delightful Content From Messy Inputs

Taught by

MLOps World: Machine Learning in Production

