Slow Convergence of Stochastic Optimization Algorithms Without Derivatives Is Avoidable
Institut des Hautes Etudes Scientifiques (IHES) via YouTube
Overview
This lecture explores how the traditionally slow convergence rates of derivative-free optimization methods can be overcome. Discover how adaptive stochastic algorithms from the Evolution Strategy family can achieve asymptotic geometric convergence of the mean squared error, even on non-convex and non-quasiconvex functions. The speaker explains the key differences between these approaches and finite-difference stochastic approximation (FDSA) algorithms such as the Kiefer-Wolfowitz method, and demonstrates how Markov chain stability analysis yields linear convergence guarantees. Learn about connections to the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a highly effective stochastic algorithm for difficult derivative-free optimization problems. The 51-minute presentation by Anne Auger from Télécom Paris was hosted by the Institut des Hautes Etudes Scientifiques (IHES).
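To make the idea concrete, below is a minimal sketch (not the speaker's code) of the simplest member of the Evolution Strategy family: a (1+1)-ES with the classical 1/5th success rule for step-size adaptation. On a smooth test function such as the sphere, this scheme converges geometrically, i.e. linearly on a log scale, which is the behavior the Markov chain stability analysis in the talk makes rigorous. All function names and parameter values here are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

def sphere(x):
    """Test objective; the algorithm never queries derivatives."""
    return float(np.dot(x, x))

def one_plus_one_es(f, x0, sigma0=1.0, iterations=2000, seed=0):
    """(1+1)-ES with the 1/5th success rule; returns the best point and value."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    sigma = sigma0
    # Step-size change factor; the -1/4 exponent on failures makes sigma
    # stationary when roughly 1/5 of the offspring are successful.
    alpha = np.exp(1.0 / x.size)
    for _ in range(iterations):
        candidate = x + sigma * rng.standard_normal(x.size)  # Gaussian mutation
        fc = f(candidate)
        if fc <= fx:                      # success: accept, enlarge step size
            x, fx, sigma = candidate, fc, sigma * alpha
        else:                             # failure: shrink step size
            sigma *= alpha ** -0.25
    return x, fx

if __name__ == "__main__":
    x_best, f_best = one_plus_one_es(sphere, x0=np.ones(10))
    print(f"f after 2000 iterations: {f_best:.3e}")  # decreases geometrically
```

CMA-ES extends this kind of scheme by additionally adapting a full covariance matrix for the Gaussian mutations, which is what makes it effective on ill-conditioned and non-separable problems.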
Syllabus
Anne Auger - Slow Convergence of Stochastic Optimization Algorithms Without Derivatives Is Avoidable
Taught by
Institut des Hautes Etudes Scientifiques (IHES)