Overview
This lecture from the Allen School Colloquia Series features PhD candidate Lisa Li from Stanford University discussing methods for controlling language models. Learn about three key approaches to making large language models more useful and reliable: Prefix-Tuning for efficient customization that updates only 0.1% of model parameters, a Frank-Wolfe-inspired algorithm for systematic red-teaming to discover diverse failure modes, and Diffusion-LM, a new generative text model designed with controllability as a core feature. The 59-minute talk explores how controlling language models is essential for both task-specific customization and rigorous behavior auditing. Li, who is advised by Percy Liang and Tatsunori Hashimoto, is supported by the Two Sigma PhD fellowship and Stanford Graduate Fellowship and has received an EMNLP Best Paper award for her research.
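To make the parameter-efficiency point concrete, here is a minimal toy sketch of the prefix-tuning idea described in the talk: the pretrained attention weights stay frozen, and only a short sequence of learned "prefix" key/value vectors, prepended to the attention inputs, would be trained. The dimensions and initialization below are illustrative assumptions, not the configuration from Li's paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, seq_len, prefix_len = 512, 16, 4

# Frozen "pretrained" projections (stand-ins for weights from a trained model).
W_q = rng.standard_normal((d_model, d_model))
W_k = rng.standard_normal((d_model, d_model))
W_v = rng.standard_normal((d_model, d_model))

# The only trainable parameters in prefix-tuning: prefix keys and values.
prefix_k = rng.standard_normal((prefix_len, d_model)) * 0.01
prefix_v = rng.standard_normal((prefix_len, d_model)) * 0.01

def attention_with_prefix(x):
    """Single-head attention with learned prefix keys/values prepended."""
    q = x @ W_q
    k = np.concatenate([prefix_k, x @ W_k])  # (prefix_len + seq_len, d_model)
    v = np.concatenate([prefix_v, x @ W_v])
    scores = q @ k.T / np.sqrt(d_model)
    # Numerically stable softmax over the extended key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

x = rng.standard_normal((seq_len, d_model))
out = attention_with_prefix(x)  # shape (16, 512)

frozen = W_q.size + W_k.size + W_v.size
trainable = prefix_k.size + prefix_v.size
print(f"trainable fraction: {trainable / (frozen + trainable):.4f}")
```

With these toy sizes the prefix accounts for well under 1% of total parameters; at the scale of a real large language model, the ratio shrinks to roughly the 0.1% figure mentioned above.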
Syllabus
Controlling Language Models - Lisa Li (Stanford)
Taught by
Paul G. Allen School