Overview
This lecture from the Simons Institute's Deep Learning Theory series features Pierfrancesco Urbani (CNRS) discussing the complex relationship between generalization and overfitting in two-layer neural networks. Explore how statistical physics techniques, particularly dynamical mean field theory, can be applied to study training dynamics in large, overparametrized neural networks. Understand key concepts in theoretical machine learning, including the implicit bias hypothesis, benign overfitting, and feature learning regimes in which neural networks identify latent structure in the data. The presentation details joint research with Andrea Montanari that provides insight into how generalization properties emerge in overparametrized models, addressing a central problem in theoretical machine learning. As a rough illustration of the setting only (not the lecture's own analysis), a minimal sketch of a two-layer network in a mean-field-style parameterization, trained by gradient descent on data with a hypothetical latent single-index structure, is given below.
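The sketch below assumes standard choices that are not specified in the listing: ReLU activations, squared loss, a 1/N output scaling for the hidden layer, and a synthetic target y = relu(x · w_star) standing in for "latent data structure". It is meant only to make the objects in the description concrete.

```python
import numpy as np

# Minimal sketch (assumptions, not the lecture's setup): a two-layer network
# f(x) = (1/N) * sum_i a_i * relu(w_i . x) in an overparametrized, mean-field-style
# scaling, trained by full-batch gradient descent on squared loss against a
# hypothetical single-index target y = relu(x . w_star).

rng = np.random.default_rng(0)

d, N, n = 20, 1000, 200                  # input dim, hidden width, training samples
X = rng.standard_normal((n, d)) / np.sqrt(d)
w_star = rng.standard_normal(d)          # hypothetical latent direction in the data
y = np.maximum(X @ w_star, 0.0)

W = rng.standard_normal((N, d))          # first-layer weights
a = rng.standard_normal(N)               # second-layer weights

lr = 1.0
for step in range(500):
    h = np.maximum(X @ W.T, 0.0)         # hidden activations, shape (n, N)
    pred = h @ a / N                     # 1/N output scaling
    err = pred - y
    # gradients of 0.5 * mean squared error
    grad_a = h.T @ err / (n * N)
    grad_W = ((err[:, None] * (X @ W.T > 0)) * a[None, :]).T @ X / (n * N)
    # learning rate scaled by N so each neuron receives an O(1) update,
    # a common convention in mean-field analyses
    a -= lr * N * grad_a
    W -= lr * N * grad_W
    if step % 100 == 0:
        print(f"step {step:4d}  train MSE {np.mean(err**2):.4f}")
```

In this toy setup the interesting questions from the lecture (whether the trained network overfits benignly, and whether the first-layer weights align with the latent direction w_star, i.e. feature learning) can be probed by evaluating the network on fresh samples and by inspecting the overlap between the rows of W and w_star.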
Syllabus
Generalization and overfitting in two-layer neural networks
Taught by
Simons Institute