

Growing Tiny Networks: Spotting Expressivity Bottlenecks and Fixing Them Optimally in Neural Architecture Design

Erwin Schrödinger International Institute for Mathematics and Physics (ESI) via YouTube

Overview

Explore a technical talk that presents a novel approach for optimizing neural network architectures during training. Learn how to identify and address expressivity bottlenecks in machine learning tasks through dynamic architecture adaptation, rather than relying on fixed architectures whose size is chosen in advance. Discover a mathematical framework for detecting and quantifying these bottlenecks, enabling the strategic addition of neurons to improve network performance. Understand how this method challenges the conventional wisdom of starting with large networks, instead demonstrating how to grow networks effectively from minimal initial configurations. The presentation, delivered at the Erwin Schrödinger International Institute's Thematic Programme on "Infinite-dimensional Geometry," offers valuable insights into more efficient and adaptable approaches to neural network development and optimization.
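To make the idea of growing a network concrete, here is a minimal, hypothetical sketch (not the speaker's actual algorithm) of one ingredient such methods rely on: a hidden neuron can be appended with zero outgoing weights, so the function the network computes is unchanged at the moment of growth, and the new capacity is then available for training. How to choose *where* and *which* neurons to add optimally is the subject of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hidden, n_out):
    """A one-hidden-layer tanh network, stored as a dict of parameters."""
    return {
        "W1": rng.normal(0.0, 0.5, (n_hidden, n_in)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.5, (n_out, n_hidden)),
        "b2": np.zeros(n_out),
    }

def forward(net, x):
    h = np.tanh(net["W1"] @ x + net["b1"])
    return net["W2"] @ h + net["b2"]

def add_neuron(net):
    """Append one hidden neuron with random incoming weights and ZERO
    outgoing weights, so the network's output is exactly preserved."""
    n_in = net["W1"].shape[1]
    net["W1"] = np.vstack([net["W1"], rng.normal(0.0, 0.5, (1, n_in))])
    net["b1"] = np.append(net["b1"], 0.0)
    net["W2"] = np.hstack([net["W2"], np.zeros((net["W2"].shape[0], 1))])
    return net

# Growing the network does not change its current input-output map.
x = rng.normal(size=3)
net = init_net(3, 2, 1)
before = forward(net, x).copy()
add_neuron(net)
after = forward(net, x)
print(np.allclose(before, after))
```

This function-preserving trick is only a starting point; the talk's contribution is a principled criterion (expressivity bottlenecks) for deciding when and where such additions pay off.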

Syllabus

Manon Verbockhaven - Growing Tiny Networks: Spotting Expressivity Bottlenecks and Fixing Them Optimally in Neural Architecture Design

Taught by

Erwin Schrödinger International Institute for Mathematics and Physics (ESI)

