

Global Convergence of Over-Parameterized Gradient EM for Learning Gaussian Mixtures

Paul G. Allen School via YouTube

Overview

Explore the theoretical foundations of machine learning optimization in this 47-minute workshop presentation on the global convergence of over-parameterized gradient expectation-maximization (EM) for learning Gaussian mixture models. Delve into results that bridge statistical learning theory and optimization, focusing on how over-parameterization, i.e., fitting more mixture components than the data were generated from, affects the convergence behavior of gradient-based EM. Gain insight into the guarantees and conditions under which the algorithm converges globally, and into the interplay between model complexity, parameter initialization, and optimization dynamics. Learn how this line of research addresses a fundamental question in statistical machine learning: why certain learning algorithms find good solutions despite the non-convexity of the underlying optimization landscape.
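For orientation, here is a minimal, self-contained sketch of what a gradient EM iteration for a Gaussian mixture looks like in the over-parameterized regime, where the model fits more components than the data were generated from. This is an illustrative toy, not the construction analyzed in the talk: the unit-variance, uniform-weight mixture, the learning rate, and the component counts are all assumptions made for the example.

```python
import numpy as np

def gradient_em_step(X, mu, lr=0.5):
    """One gradient EM update for a mixture of unit-variance Gaussians
    with uniform weights. X: (n, d) data; mu: (k, d) component means."""
    # E-step: responsibilities r[i, j] = P(component j | x_i)
    sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)  # (n, k) squared distances
    logits = -0.5 * sq
    logits -= logits.max(axis=1, keepdims=True)                # stabilize the softmax
    r = np.exp(logits)
    r /= r.sum(axis=1, keepdims=True)
    # Instead of the exact M-step (weighted means), take one gradient
    # ascent step on the EM surrogate (Q function) w.r.t. the means
    grad = (r[:, :, None] * (X[:, None, :] - mu[None, :, :])).mean(axis=0)  # (k, d)
    return mu + lr * grad

rng = np.random.default_rng(0)
true_means = np.array([[-2.0, 0.0], [2.0, 0.0]])               # ground truth: 2 components
X = np.concatenate([m + rng.normal(size=(500, 2)) for m in true_means])
mu = rng.normal(size=(4, 2))                                   # over-parameterized: fit k = 4 > 2
for _ in range(300):
    mu = gradient_em_step(X, mu)
print(np.round(mu, 2))  # fitted means should collapse onto the two true means
```

The only difference from classical EM is the M-step: rather than solving for the weighted means in closed form, each iteration takes a single gradient step on the EM surrogate objective, which is the setting whose global convergence the talk studies.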

Syllabus

IFDS Workshop – Global Convergence of Over-Parameterized Gradient EM for Learning Gaussian Mixtures

Taught by

Paul G. Allen School

