Dynamical Phenomena in Nonlinear Learning - Lecture 3
International Centre for Theoretical Sciences via YouTube
Overview
Explore the mathematical foundations of modern AI in this advanced lecture on dynamical phenomena in nonlinear learning systems. Delivered by Stanford University's Andrea Montanari as part of the Infosys-ICTS Turing Lectures series, the presentation challenges classical theoretical wisdom about machine learning by investigating how successful AI models operate through non-convex optimization and complex architectures that often memorize training data. Delve into recent research on the conceptual challenges posed by learning in these unexpected regimes, moving beyond traditional convex optimization and parsimonious model architectures, and examine the mathematical principles that allow overparametrized models to learn effectively despite their complexity. This session is the third in a three-part series on the mathematics of large machine learning models, focusing specifically on the dynamical aspects of nonlinear learning processes. Gain insights from current research in statistical learning theory, optimization theory, and the mathematical analysis of deep learning systems. The lecture is presented within the "Data Science: Probabilistic and Optimization Methods II" program at the International Centre for Theoretical Sciences, offering a rigorous mathematical perspective on contemporary machine learning phenomena.
Syllabus
Date and Time: Monday, 11 August 2025, 16:30 to 17:30
Date and Time: Tuesday, 12 August 2025, 11:15 to 12:30
Date and Time: Wednesday, 13 August 2025, 11:15 to 12:30
Taught by
International Centre for Theoretical Sciences