Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

On Gradient-Based Optimization - Accelerated, Distributed, Asynchronous and Stochastic

Simons Institute via YouTube

Overview

Explore gradient-based optimization techniques in this lecture by Michael Jordan from UC Berkeley. Delve into accelerated, distributed, asynchronous, and stochastic methods for machine learning optimization. Learn about variational approaches, covariant operators, discretization, gradient flow, Hamiltonian formulation, and gradient descent structures. Discover strategies for avoiding saddle points, understand the role of differential geometry in nonconvex optimization, and gain insights into stochastic gradient control. Enhance your understanding of computational challenges in machine learning through this comprehensive exploration of advanced optimization concepts.
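The "accelerated" methods mentioned above can be illustrated with a toy comparison. The sketch below (an assumed example, not taken from the lecture) contrasts plain gradient descent with Nesterov's accelerated gradient on an ill-conditioned quadratic, where acceleration's faster convergence is easy to see:

```python
import numpy as np

# Toy objective f(x) = 0.5 * x^T A x with condition number 100.
# This setup is illustrative; it is not code from the lecture.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
L = 100.0            # Lipschitz constant of the gradient
step = 1.0 / L       # standard 1/L step size

def gradient_descent(x0, iters):
    x = x0.copy()
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def nesterov(x0, iters):
    # Nesterov's accelerated gradient with the classic (k-1)/(k+2) momentum.
    x, y = x0.copy(), x0.copy()
    for k in range(1, iters + 1):
        x_next = y - step * grad(y)
        y = x_next + (k - 1) / (k + 2) * (x_next - x)
        x = x_next
    return x

x0 = np.array([1.0, 1.0])
print(np.linalg.norm(gradient_descent(x0, 200)))  # slower decay
print(np.linalg.norm(nesterov(x0, 200)))          # noticeably faster decay
```

After 200 iterations the accelerated iterate is much closer to the minimizer at the origin, matching the lecture's theme that momentum-based methods improve the convergence rate without extra gradient information per step.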

Syllabus

Intro
What is variational?
Gradient-based optimization
Covariant operator
Discretization
Summary
Gradient Flow
Hamiltonian Formulation
Gradient Descent
Diffusions
Assumptions
Gradient Descent Structure
Avoiding Saddle Points
Differential geometry
Nonconvex optimization
Stochastic gradient control
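Several syllabus items (gradient flow, discretization, gradient descent) connect through a single idea: gradient descent is the explicit Euler discretization of the gradient-flow ODE dx/dt = -∇f(x). A minimal sketch of that connection, using an assumed toy objective f(x) = ½‖x‖²:

```python
import numpy as np

# For f(x) = 0.5 * ||x||^2 we have grad f(x) = x (assumed toy objective).
grad = lambda x: x

def euler_gradient_flow(x0, h, steps):
    # One explicit Euler step of dx/dt = -grad f(x) is exactly the
    # gradient-descent update x <- x - h * grad f(x).
    x = x0.copy()
    for _ in range(steps):
        x = x + h * (-grad(x))
    return x

x0 = np.array([1.0, -2.0])
# For this f the exact flow is x(t) = x0 * exp(-t); with h = 0.001 and
# 1000 steps we integrate to t = 1 and compare against it.
approx = euler_gradient_flow(x0, h=0.001, steps=1000)
exact = x0 * np.exp(-1.0)
print(np.linalg.norm(approx - exact))  # small discretization error
```

Shrinking the step size h drives the discrete iterates toward the continuous flow, which is the starting point for the lecture's variational view of acceleration as a different discretization of a second-order dynamical system.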

Taught by

Simons Institute

Reviews

5.0 rating, based on 1 Class Central review

Start your review of On Gradient-Based Optimization - Accelerated, Distributed, Asynchronous and Stochastic

  • Aidilia Fitri
    This lecture provides a comprehensive overview of modern gradient-based optimization techniques pivotal in large-scale machine learning. It adeptly covers accelerated methods that speed up convergence, distributed frameworks enabling scalability acr…
