Overview
Explore a 35-minute lecture by Brian Bullins of Purdue University, presented at the Simons Institute, on a novel stochastic Newton algorithm for distributed convex optimization. The talk addresses homogeneous distributed stochastic convex optimization, in which each machine computes stochastic gradients and Hessian-vector products of the same population objective. Learn how the algorithm reduces the number of communication rounds without sacrificing performance, particularly for quasi-self-concordant objectives such as logistic regression, and examine the convergence guarantees and empirical evidence supporting the approach.
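To make the setting concrete, here is a minimal sketch of a communication-efficient distributed Newton-type step for logistic regression. This is an illustrative scheme, not the lecture's exact algorithm: it assumes machines hold homogeneous data shards, average their stochastic gradients in one communication round, and then each solves a local Newton system via conjugate gradient using only Hessian-vector products (no Hessian is ever materialized), with the resulting steps averaged in a second round. All names and the ridge term `1e-4` are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_grad(X, y, w):
    # Gradient of the logistic loss on this machine's data shard.
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def local_hvp(X, y, w, v):
    # Hessian-vector product of the logistic loss, computed without
    # forming the d x d Hessian explicitly.
    p = sigmoid(X @ w)
    s = p * (1.0 - p)
    return X.T @ (s * (X @ v)) / len(y)

def cg(hvp, b, iters=25, tol=1e-10):
    # Conjugate gradient: solve H x = b using only Hessian-vector products.
    x = np.zeros_like(b)
    r = b - hvp(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Synthetic homogeneous data split across m machines (hypothetical setup).
m, n_per, d = 4, 500, 10
w_true = rng.normal(size=d)
shards = []
for _ in range(m):
    X = rng.normal(size=(n_per, d))
    y = (sigmoid(X @ w_true) > rng.uniform(size=n_per)).astype(float)
    shards.append((X, y))

w = np.zeros(d)
for _ in range(10):
    # Communication round 1: average local gradients across machines.
    g = np.mean([local_grad(X, y, w) for X, y in shards], axis=0)
    # Local work, no communication: each machine solves its (regularized)
    # Newton system with CG using its own Hessian-vector products.
    steps = [cg(lambda v, X=X, y=y: local_hvp(X, y, w, v) + 1e-4 * v, g)
             for X, y in shards]
    # Communication round 2: average the local Newton steps.
    w -= np.mean(steps, axis=0)
```

Because each outer iteration uses only two averaging rounds regardless of how many CG iterations run locally, communication scales with the number of Newton-type steps rather than with the inner solver's work, which is the kind of savings the lecture's communication-round analysis concerns.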
Syllabus
A Stochastic Newton Algorithm for Distributed Convex Optimization
Taught by
Simons Institute