Uncertainty, Prompting, and Chain-of-Thoughts in Large Language Models - Part 2
Overview
Learn about advanced concepts in AI uncertainty quantification and prompting techniques in this comprehensive lecture. Explore temperature scaling methods and Bayesian approaches to calibration before diving into free-text explanations and chain-of-thought prompting. Master in-context learning (ICL) principles and their reliable implementation, while understanding prompt-based fine-tuning strategies. Examine practical applications through case studies of FLAN-T5 and LLaMA Chat models. Gain insights into how these techniques improve AI model performance and reliability through detailed explanations and real-world examples.
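The temperature scaling the lecture covers can be sketched in a few lines. This is a minimal illustration, not code from the lecture: a single temperature parameter `T` divides the model's logits before the softmax, flattening the predicted distribution when `T > 1` (less confident) and sharpening it when `T < 1`.

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Apply temperature scaling: divide logits by T, then softmax.

    T > 1 flattens the distribution (lower confidence);
    T < 1 sharpens it (higher confidence); T = 1 is unchanged.
    """
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()              # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Example: the same logits at two temperatures.
logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, T=1.0))  # sharper
print(softmax_with_temperature(logits, T=2.0))  # flatter
```

In calibration, `T` is typically fit on a held-out validation set by minimizing negative log-likelihood, so the model's confidence better matches its empirical accuracy.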
Syllabus
Reminders
Recap of uncertainty (part 1)
Temperature scaling
Bayesian approaches to calibration
Free-text explanations / chain-of-thought intro
Prompt-based fine-tuning
In-context learning (ICL)
Reliable ICL
Chain-of-thought prompting
FLAN-T5
LLaMA Chat
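The in-context learning and chain-of-thought topics above can be illustrated with a prompt sketch. This is a hypothetical few-shot example in the style popularized by Wei et al., not material from the lecture: the demonstration includes intermediate reasoning steps before the answer, which the model is encouraged to imitate on the new question.

```python
# A few-shot chain-of-thought prompt: one worked example with
# explicit reasoning, followed by the question to be answered.
few_shot_cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls "
    "each. How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
    "\n"
    "Q: The cafeteria had 23 apples. They used 20 to make lunch and "
    "bought 6 more. How many apples do they have?\n"
    "A:"
)
print(few_shot_cot_prompt)
```

A plain ICL prompt would use the same question-answer format but omit the intermediate reasoning sentences; chain-of-thought prompting adds them so the model generates its own reasoning trace before the final answer.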
Taught by
UofU Data Science