Are Global and Local Minima of Shallow Neural Networks Fundamentally Different?
Paul G. Allen School via YouTube
Overview
Explore the fundamental differences between global and local minima in shallow neural networks in this 39-minute workshop presentation by Rahul Parhi of UC San Diego. Delve into the theoretical foundations of neural network optimization landscapes and examine whether global and local minima of shallow architectures are fundamentally distinct. Investigate the mathematical properties that separate these critical points and their implications for training algorithms and convergence behavior. Analyze the geometric structure of the loss surface in shallow networks and understand how the nature of its minima shapes the optimization process. Gain insight into current research that challenges or supports conventional neural network optimization theory, with particular focus on the practical consequences for machine learning practitioners working with shallow architectures.
Syllabus
IFDS Workshop – Are Global and Local Minima of Shallow Neural Networks Fundamentally Different?
Taught by
Paul G. Allen School