
YouTube

Scaling Exponents Across Parameterizations and Optimizers

AutoML Seminars via YouTube

Overview

Watch a 40-minute AutoML Seminar presentation by Katie Everett exploring the theoretical and empirical aspects of model scaling across different parameterizations and optimizers. Learn about new perspectives on parameterization that challenge previous assumptions about alignment between parameters and data, supported by extensive empirical research involving thousands of models. Discover insights into learning rate scaling prescriptions, hyperparameter transfer across parameterizations, and a novel per-layer learning rate approach for standard parameterization. Explore the critical role of the epsilon parameter in Adam and understand Adam-atan2, a numerically stable, scale-invariant optimizer that eliminates the epsilon hyperparameter. Gain insight into model scaling techniques that apply to neural networks across a wide range of widths, up to models with 26.8 billion parameters.
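The key idea behind Adam-atan2 mentioned above is to replace Adam's epsilon-guarded division, m / (sqrt(v) + eps), with an atan2 call. Below is a minimal, hedged sketch of that change in NumPy-style Python; it is not the exact formulation from the talk or paper, and any scaling constants the paper applies are omitted here.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Standard Adam step: eps guards the division but is an extra,
    scale-sensitive hyperparameter."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)          # bias-corrected first moment
    v_hat = v / (1 - b2**t)          # bias-corrected second moment
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

def adam_atan2_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999):
    """Adam-atan2-style step (simplified sketch): the division is replaced by
    atan2(m_hat, sqrt(v_hat)), which is bounded, well-defined at (0, 0),
    and invariant to jointly rescaling the gradient moments."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    param = param - lr * np.arctan2(m_hat, np.sqrt(v_hat))
    return param, m, v
```

Because atan2(c*m, c*sqrt(v)) = atan2(m, sqrt(v)) for any c > 0 and atan2(0, 0) = 0, the update needs no epsilon to avoid division by zero and its magnitude stays bounded, which is the numerical stability and scale invariance the talk highlights.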

Syllabus

Scaling Exponents Across Parameterizations and Optimizers

Taught by

AutoML Seminars

