Optimization in Machine Learning - From Convexity to Non-Convexity
Centre de recherches mathématiques - CRM via YouTube
Overview
Explore the evolution of optimization algorithms in machine learning through this 58-minute conference talk that traces the journey from convex to non-convex optimization challenges. Examine how the field has transformed over the past fifteen years, moving from early convex optimization approaches with strong theoretical guarantees for linear models to the complex landscape of non-convex optimization that underlies neural networks and other sophisticated models. Discover key theoretical insights and empirical findings from both optimization domains, with particular attention to how convexity—whether explicit or implicit—shapes our understanding of optimization processes. Learn about gradient descent and its stochastic variants as fundamental tools in modern machine learning, while gaining insight into the theoretical guarantees available for different types of optimization problems. Understand the challenges posed by non-convex optimization in complex models and explore emerging research directions that bridge machine learning applications with broader optimization theory developments.
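To make the contrast between deterministic and stochastic methods concrete, here is a minimal illustrative sketch (not taken from the talk) of gradient descent versus stochastic gradient descent on a convex least-squares objective; all variable names and step-size choices are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumption, not from the talk): gradient descent vs.
# stochastic gradient descent on the convex least-squares objective
#   f(w) = (1/2n) * ||X w - y||^2, with gradient (1/n) * X^T (X w - y).

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def full_gradient(w):
    # Exact gradient over all n samples (one step of batch gradient descent).
    return X.T @ (X @ w - y) / n

def stochastic_gradient(w):
    # Unbiased gradient estimate from a single random sample, as in SGD.
    i = rng.integers(n)
    return X[i] * (X[i] @ w - y[i])

w_gd = np.zeros(d)
w_sgd = np.zeros(d)
step = 0.1
for t in range(1000):
    w_gd -= step * full_gradient(w_gd)
    # A decaying step size tames the noise of the stochastic estimate.
    w_sgd -= (step / np.sqrt(t + 1)) * stochastic_gradient(w_sgd)

print("GD  parameter error:", np.linalg.norm(w_gd - w_true))
print("SGD parameter error:", np.linalg.norm(w_sgd - w_true))
```

On a convex problem like this one, both iterates approach the true parameters, which is the setting where the strong theoretical guarantees mentioned above apply; for non-convex models such as neural networks, the same update rules are used but those guarantees no longer hold in general.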
Syllabus
Francis Bach: Optimization in machine learning: from convexity to non-convexity
Taught by
Centre de recherches mathématiques - CRM