Overview
This lecture from the Allen School Colloquia Series features PhD candidate Lisa Li from Stanford University discussing methods for controlling language models. Learn about three key approaches to making large language models more useful and reliable: Prefix-Tuning for efficient customization that updates only 0.1% of model parameters, a Frank-Wolfe-inspired algorithm for systematic red-teaming to discover diverse failure modes, and Diffusion-LM, a new generative text model designed with controllability as a core feature. The 59-minute talk explores how controlling language models is essential for both task-specific customization and rigorous behavior auditing. Li, who is advised by Percy Liang and Tatsunori Hashimoto, is supported by the Two Sigma PhD fellowship and Stanford Graduate Fellowship and has received an EMNLP Best Paper award for her research.
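The core idea behind Prefix-Tuning mentioned above, training only a small continuous prefix while the base model stays frozen, can be sketched in a few lines. This is a toy illustration using a tiny feed-forward stand-in for the language model and made-up dimensions, not the actual setup from the talk:

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained language model (hypothetical, for illustration).
base = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))

# Freeze every base-model parameter: prefix-tuning never updates them.
for p in base.parameters():
    p.requires_grad = False

# The only trainable parameters: a short sequence of continuous
# "prefix" vectors prepended to the input.
prefix = nn.Parameter(torch.randn(4, 32) * 0.02)

def forward(x):
    # x: (seq_len, d_model). Prepend the learned prefix, then run the
    # frozen base model over the extended sequence.
    return base(torch.cat([prefix, x], dim=0))

out = forward(torch.randn(10, 32))

trainable = prefix.numel()
total = trainable + sum(p.numel() for p in base.parameters())
print(f"trainable fraction: {trainable / total:.3f}")
```

Even in this toy, the trainable prefix is a few percent of the parameters; at the scale of a real language model the same construction reaches the roughly 0.1% figure cited in the talk.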
Syllabus
Controlling Language Models – Lisa Li (Stanford)
Taught by
Paul G. Allen School