Optimization in Machine Learning - From Convexity to Non-Convexity
Centre de recherches mathématiques - CRM via YouTube
Overview
Explore the evolution of optimization algorithms in machine learning through this 58-minute conference talk that traces the journey from convex to non-convex optimization challenges. Examine how the field has transformed over the past fifteen years, moving from early convex optimization approaches with strong theoretical guarantees for linear models to the complex landscape of non-convex optimization that underlies neural networks and other sophisticated models. Discover key theoretical insights and empirical findings from both optimization domains, with particular attention to how convexity—whether explicit or implicit—shapes our understanding of optimization processes. Learn about gradient descent and its stochastic variants as fundamental tools in modern machine learning, while gaining insight into the theoretical guarantees available for different types of optimization problems. Understand the challenges posed by non-convex optimization in complex models and explore emerging research directions that bridge machine learning applications with broader optimization theory developments.
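As a concrete illustration of the gradient methods the talk surveys, here is a minimal sketch (not taken from the talk; the problem, step sizes, and iteration counts are illustrative choices) comparing full-batch gradient descent with its stochastic variant on a convex least-squares problem, the setting where strong theoretical guarantees are available:

```python
import numpy as np

# Illustrative convex problem: f(w) = 0.5 * mean((X w - y)^2),
# with noiseless data so both methods can recover w_true exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
w_true = rng.standard_normal(5)
y = X @ w_true

def grad(w, Xb, yb):
    """Gradient of the mean squared residual over a batch."""
    return Xb.T @ (Xb @ w - yb) / len(yb)

# Full-batch gradient descent: uses the exact gradient each step and
# converges linearly on this strongly convex problem for a small step size.
w_gd = np.zeros(5)
for _ in range(500):
    w_gd -= 0.1 * grad(w_gd, X, y)

# Stochastic gradient descent: one randomly sampled data point per step,
# a much cheaper iteration at the price of noisier progress.
w_sgd = np.zeros(5)
for _ in range(5000):
    i = rng.integers(100)
    w_sgd -= 0.05 * grad(w_sgd, X[i:i+1], y[i:i+1])
```

In the convex setting sketched above, both iterates approach the true minimizer; the talk's subject is what happens to such guarantees once the objective, as in neural networks, is no longer convex.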
Syllabus
Francis Bach: Optimization in machine learning: from convexity to non-convexity
Taught by
Centre de recherches mathématiques - CRM