Overview
Explore gradient optimization methods and their implicit bias properties in this 57-minute conference talk from the 2025 Mathematical and Scientific Foundations of Deep Learning Annual Meeting, hosted by the Simons Foundation. Delve into the mathematical foundations of gradient-based optimization in deep learning, examining how these methods exhibit an implicit bias toward certain solutions. Learn about the theoretical benefits of early stopping and how this regularization technique affects model performance and generalization. Gain insight into the intersection of optimization theory and deep learning practice, with particular focus on the mathematical principles that govern which solutions gradient methods naturally select.
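As a concrete illustration of the implicit-bias phenomenon the talk covers (this sketch is not from the talk itself): when gradient descent is run from a zero initialization on an underdetermined least-squares problem, its iterates stay in the row space of the data matrix, so it converges to the minimum-ℓ2-norm interpolating solution rather than an arbitrary one. A minimal NumPy check:

```python
import numpy as np

# Underdetermined least squares: many solutions satisfy A @ w = b exactly.
rng = np.random.default_rng(0)
n, d = 10, 50                        # fewer equations than unknowns
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# Gradient descent on 0.5 * ||A w - b||^2, started at zero.
# Step size 1 / sigma_max(A)^2 guarantees convergence.
w = np.zeros(d)
lr = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(5000):
    w -= lr * A.T @ (A @ w - b)      # gradient step

# The minimum-norm interpolating solution, via the pseudoinverse.
w_min_norm = np.linalg.pinv(A) @ b

print(np.allclose(w, w_min_norm, atol=1e-6))  # True
```

Stopping the loop early yields a shrunken version of the same solution, which is one way to see early stopping acting as a regularizer, a theme of the talk.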
Syllabus
Peter Bartlett — Gradient Optimization Methods... (Sept. 26, 2025)
Taught by
Simons Foundation