Global Convergence of Over-Parameterized Gradient EM for Learning Gaussian Mixtures
Paul G. Allen School via YouTube
Overview
Explore the theoretical foundations of machine learning optimization in this 47-minute workshop presentation examining the global convergence properties of over-parameterized gradient expectation-maximization algorithms for Gaussian mixture model learning. Delve into advanced mathematical concepts that bridge statistical learning theory and optimization, focusing on how over-parameterization affects the convergence behavior of gradient-based EM methods when applied to mixture models. Gain insights into the theoretical guarantees and conditions under which these algorithms achieve global convergence, understanding the interplay between model complexity, parameter initialization, and optimization dynamics. Learn about cutting-edge research in statistical machine learning that addresses fundamental questions about when and why certain learning algorithms succeed in finding optimal solutions despite the non-convex nature of the underlying optimization landscape.
Syllabus
IFDS Workshop – Global Convergence of Over-Parameterized Gradient EM for Learning Gaussian Mixtures
Taught by
Paul G. Allen School