
Evaluating Quality and Improving LLM Products at Scale

MLOps.community via YouTube

Overview

Explore strategies for evaluating and improving large language model (LLM) products at scale in this 15-minute conference talk by Austin Bell at the AI in Production Conference. Learn how to measure the impact of prompt changes and pre-processing techniques on LLM output quality, enabling confident deployment of product improvements. Bell draws on his experience as a Staff Software Engineer at Slack building text-based ML and generative AI products, and shows how systematic evaluation and measurement help ensure that changes to generative products are consistent improvements rather than regressions.

Syllabus

Evaluating Quality and Improving LLM Products at Scale // Austin Bell // AI in Production Conference

Taught by

MLOps.community

