Emergence and Scaling Laws for SGD Learning and Learning Compositional Functions with Transformers
Overview
In this talk, Jason Lee of Princeton University explores two key topics in deep learning theory: emergence and scaling laws for stochastic gradient descent (SGD) learning, and how transformers learn compositional functions. Presented at the Simons Institute as part of its Deep Learning Theory program, the lecture examines mathematical frameworks that explain how neural networks learn and scale, with particular focus on the transformer architectures that have revolutionized natural language processing and other domains.
Syllabus
Emergence and scaling laws for SGD learning and Learning Compositional Functions with Transformers
Taught by
Simons Institute