Global Convergence of Over-Parameterized Gradient EM for Learning Gaussian Mixtures
Paul G. Allen School via YouTube
Overview
Explore the theoretical foundations of machine learning optimization in this 47-minute workshop presentation examining the global convergence properties of over-parameterized gradient expectation-maximization algorithms for Gaussian mixture model learning. Delve into advanced mathematical concepts that bridge statistical learning theory and optimization, focusing on how over-parameterization affects the convergence behavior of gradient-based EM methods when applied to mixture models. Gain insights into the theoretical guarantees and conditions under which these algorithms achieve global convergence, understanding the interplay between model complexity, parameter initialization, and optimization dynamics. Learn about cutting-edge research in statistical machine learning that addresses fundamental questions about when and why certain learning algorithms succeed in finding optimal solutions despite the non-convex nature of the underlying optimization landscape.
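To make the subject of the talk concrete, below is a minimal illustrative sketch (not taken from the presentation) of gradient EM for a Gaussian mixture: the exact M-step is replaced by a single gradient ascent step on the EM surrogate objective. It assumes unit covariances, fixed uniform mixing weights, and an over-parameterized number of components k that may exceed the true number; the function name gradient_em_gmm and all parameter choices are hypothetical.

```python
import numpy as np

def gradient_em_gmm(X, k, step=0.5, iters=200, seed=0):
    """Gradient EM sketch for a Gaussian mixture with identity covariances
    and fixed uniform mixing weights; the M-step is a gradient ascent step."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Over-parameterized setting: k may exceed the true number of components.
    mu = X[rng.choice(n, size=k, replace=False)] + 0.01 * rng.standard_normal((k, d))
    for _ in range(iters):
        # E-step: responsibilities under unit-covariance Gaussians, uniform weights.
        diff = X[:, None, :] - mu[None, :, :]            # (n, k, d)
        log_r = -0.5 * (diff ** 2).sum(-1)               # (n, k)
        log_r -= log_r.max(axis=1, keepdims=True)        # stabilize before exp
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # Gradient step in place of the exact M-step:
        # grad_{mu_j} = (1/n) * sum_i r_ij (x_i - mu_j)
        grad = (r[:, :, None] * diff).mean(axis=0)       # (k, d)
        mu = mu + step * grad
    return mu

# Toy usage: fit k=4 components to data drawn from a 2-component mixture.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, size=(500, 2)),
               rng.normal(2.0, 1.0, size=(500, 2))])
print(gradient_em_gmm(X, k=4))
```

The talk studies when iterations of this kind converge globally despite the non-convex likelihood surface; the sketch is only meant to show the update being analyzed, not the paper's assumptions or guarantees.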
Syllabus
IFDS Workshop – Global Convergence of Over-Parameterized Gradient EM for Learning Gaussian Mixtures
Taught by
Paul G. Allen School