Growing Tiny Networks: Spotting Expressivity Bottlenecks and Fixing Them Optimally in Neural Architecture Design
Erwin Schrödinger International Institute for Mathematics and Physics (ESI) via YouTube
Overview
Explore a technical talk that presents a novel approach for optimizing neural network architectures during training. Learn how to identify and address expressivity bottlenecks in machine learning tasks through dynamic architecture adaptation, rather than relying on a fixed architecture with a predetermined number of parameters. Discover a mathematical framework for detecting and quantifying these bottlenecks, enabling the strategic addition of neurons where they improve network performance. Understand how this method challenges the conventional wisdom of starting with large networks, instead demonstrating how to effectively grow networks from minimal initial configurations. The presentation, delivered at the Erwin Schrödinger International Institute's Thematic Programme on "Infinite-dimensional Geometry," offers insights into more efficient and adaptable approaches to neural network development and optimization.
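To make the "start tiny, grow on demand" idea concrete, here is a toy sketch in pure Python. It is not the algorithm from the talk (which detects expressivity bottlenecks via a mathematical framework the speaker develops); instead it illustrates the same control flow with a simple stand-in: a one-dimensional model is grown one ReLU unit at a time by greedily fitting the current residual, and growth stops once adding another unit no longer reduces the error noticeably. All names and thresholds below are illustrative choices, not the authors' method.

```python
import math

# Toy illustration of growing a model neuron by neuron: add a unit only
# while it still reduces the residual error appreciably. This is greedy
# matching pursuit over a ReLU dictionary, used here as a stand-in for
# the bottleneck-detection criterion discussed in the talk.

xs = [i / 10 for i in range(-30, 31)]      # sample grid on [-3, 3]
target = [math.sin(x) for x in xs]         # function to approximate

def relu_feature(b):
    """Output column of one candidate neuron max(0, x - b)."""
    return [max(0.0, x - b) for x in xs]

def mse(r):
    return sum(v * v for v in r) / len(r)

residual = list(target)                    # model starts with 0 neurons
neurons = []                               # grown units: (breakpoint, weight)
tol = 1e-3                                 # stop when improvement stalls

for _ in range(50):                        # at most 50 growth steps
    # Greedily pick the breakpoint whose neuron best explains the residual.
    best = None
    for b in xs:
        phi = relu_feature(b)
        pp = sum(p * p for p in phi)
        if pp == 0.0:
            continue                       # degenerate (all-zero) feature
        rp = sum(r * p for r, p in zip(residual, phi))
        gain = rp * rp / pp                # squared error this unit removes
        if best is None or gain > best[0]:
            best = (gain, b, rp / pp)      # rp/pp = least-squares weight
    gain, b, w = best
    if gain / len(xs) < tol:               # new neuron barely helps: stop
        break
    neurons.append((b, w))
    phi = relu_feature(b)
    residual = [r - w * p for r, p in zip(residual, phi)]

print(f"grew to {len(neurons)} neurons, final mse {mse(residual):.4f}")
```

The loop is deliberately minimal: each step asks "is there a unit worth adding?" and grows the model only if the answer is yes, which is the spirit of spotting a bottleneck and fixing it, rather than training an oversized network from the start.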
Syllabus
Manon Verbockhaven - Growing Tiny Networks: Spotting Expressivity Bottlenecks and Fixing Them Opt...
Taught by
Erwin Schrödinger International Institute for Mathematics and Physics (ESI)