Overview
Learn to address miscalibration in generative models through constrained optimization in this conference talk. Explore how statistics of a generative model's sampling distribution often deviate from desired values, and discover a framework that treats calibration as a constrained optimization problem: find the model closest in Kullback-Leibler divergence to the original that also satisfies the calibration constraints. Examine two practical surrogate objectives for fine-tuning: the relax loss, which replaces the hard constraints with miscalibration penalties, and the reward loss, which converts calibration into a reward fine-tuning problem. Understand how these approaches significantly reduce calibration error across hundreds of simultaneous constraints in models with up to one billion parameters. See applications demonstrated across protein design, image generation, and language modeling, with insights into how proper calibration improves the reliability and trustworthiness of generative AI systems in scientific and practical settings.
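The framework described above admits a compact mathematical statement. The following is a minimal LaTeX sketch, with notation assumed rather than taken from the talk: p is the pretrained model, q the fine-tuned model, phi_k the K constrained statistics with target values c_k, and lambda and beta are penalty and regularization weights; the exact surrogate objectives presented by the speakers may differ.

\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
% Constrained calibration (notation assumed, not from the talk):
% p = pretrained model, q = fine-tuned model, phi_k = constrained
% statistic k with target value c_k, K constraints in total.
\begin{align}
  \min_{q}\quad & \mathrm{KL}(q \,\|\, p) \\
  \text{subject to}\quad & \mathbb{E}_{x \sim q}[\phi_k(x)] = c_k,
    \qquad k = 1, \dots, K
\end{align}
% Relax loss: the hard constraints become miscalibration penalties
% with weight lambda > 0 (quadratic penalty form assumed here).
\begin{equation}
  \mathcal{L}_{\mathrm{relax}}(q)
    = \mathrm{KL}(q \,\|\, p)
    + \lambda \sum_{k=1}^{K}
      \bigl(\mathbb{E}_{x \sim q}[\phi_k(x)] - c_k\bigr)^{2}
\end{equation}
% Reward loss: the same trade-off written as KL-regularized reward
% fine-tuning, where the reward r penalizes miscalibration (generic
% form; the talk's exact reward construction may differ).
\begin{equation}
  \mathcal{L}_{\mathrm{reward}}(q)
    = -\,\mathbb{E}_{x \sim q}[r(x)] + \beta\, \mathrm{KL}(q \,\|\, p)
\end{equation}
\end{document}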
Syllabus
Calibrating Generative Models to Distributional Constraints | Henry Smith & Nathaniel Diamant
Taught by
Valence Labs