This lecture by Matus Telgarsky from NYU explores how optimization methods can be derived and analyzed through a duality framework. This approach not only can lead to faster convergence rates than non-duality proofs, but also reveals similarities between methods that emerge only from the dual perspective. The presentation focuses primarily on linearly separable settings, using deep networks from approximately a decade ago as motivating examples. The talk covers collaborative research conducted with Ziwei Ji, Danny Son, and Zihan Wang as part of the Simons Institute's Deep Learning Theory program.