Overview
Explore gradient optimization methods and their implicit bias properties in this 57-minute conference talk from the 2025 Mathematical and Scientific Foundations of Deep Learning Annual Meeting hosted by the Simons Foundation. Delve into the mathematical foundations of gradient-based optimization techniques commonly used in deep learning, examining how these methods exhibit implicit bias toward certain solutions. Learn about the theoretical benefits of early stopping in gradient optimization and understand how this regularization technique affects model performance and generalization. Gain insights into the intersection of optimization theory and deep learning practice, with particular focus on the mathematical principles that govern how gradient methods naturally bias learning toward specific types of solutions.
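As a small illustration of the implicit-bias phenomenon the talk discusses (a sketch of our own, not material from the talk): on an underdetermined least-squares problem, plain gradient descent initialized at zero converges to the minimum-norm interpolating solution, even though infinitely many interpolating solutions exist. The setup below, including the random problem instance, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 10))   # underdetermined: 3 equations, 10 unknowns
b = rng.standard_normal(3)

# Gradient descent on f(x) = 0.5 * ||Ax - b||^2, starting from x = 0.
# The iterates stay in the row space of A, so the limit is the
# minimum-norm solution among all x with Ax = b.
x = np.zeros(10)
lr = 0.01                          # step size below 2 / lambda_max(A^T A)
for _ in range(20000):
    x -= lr * A.T @ (A @ x - b)

# Minimum-norm interpolating solution, computed via the pseudoinverse.
x_min_norm = np.linalg.pinv(A) @ b

print(np.allclose(x, x_min_norm, atol=1e-6))
```

Stopping the loop early instead yields an iterate with a smaller norm than the interpolating limit, which is one way to see early stopping acting as a regularizer.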
Syllabus
Peter Bartlett — Gradient Optimization Methods... (Sept. 26, 2025)
Taught by
Simons Foundation