Building and Evaluating Prompts on Production Grade Datasets for Conversational AI
Toronto Machine Learning Series (TMLS) via YouTube
Overview
Learn how to construct and evaluate prompts effectively for production-level Large Language Model (LLM) implementations in this 29-minute conference talk from the Toronto Machine Learning Series. Explore methodologies and techniques for creating production-style datasets designed specifically for LLM tasks, with a focus on conversational AI applications. Discover practical insights from Voiceflow's Lead of Agent Performance & ML Platform Bhuvana Adur Kannan and Machine Learning Engineer Yoyo Yang as they share their experiences developing and deploying prompt-based features. Examine the challenges of prompt engineering in production environments and take away lessons learned from real-world implementations.
Syllabus
Building and Evaluating Prompts on Production Grade Datasets
Taught by
Toronto Machine Learning Series (TMLS)