Iterative Regularization of the Deep Inverse Prior via Inertial Gradient Flow
Erwin Schrödinger International Institute for Mathematics and Physics (ESI) via YouTube
Overview
Explore a 28-minute conference talk delivered by Jalal Fadili at the workshop "One World Optimization Seminar in Vienna", held at the Erwin Schrödinger International Institute for Mathematics and Physics (ESI) in June 2024. Delve into the theoretical aspects of using neural networks to solve inverse problems, focusing on convergence and recovery guarantees for a class of neural networks optimized via an inertial gradient flow. Examine the role of overparametrization in neural network training and its application to inverse problems. Discover how this research bridges the gap between data-driven methods and theoretical guarantees in the context of Deep Inverse Prior networks with smooth activation functions. Gain insights into the interplay between optimization dynamics and neural networks in solving inverse problems, providing a foundation for further theoretical work in this area.
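To make the setup concrete, the following is a minimal NumPy sketch of the Deep-Inverse-Prior idea the talk studies: reparametrize the unknown signal as the output of an overparametrized network with a smooth activation, then drive the measurement misfit down with a heavy-ball (inertial) gradient iteration, a discretization of inertial gradient flow. This is not code from the talk; the network shape, dimensions, step size, and momentum parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underdetermined linear inverse problem: observe y = A @ x_true.
m, n, d, h = 20, 50, 30, 100   # measurements, signal dim, latent dim, hidden width
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = A @ x_true

# Deep-Inverse-Prior-style generator with a smooth activation:
# x = W2 @ tanh(W1 @ z), with a fixed random latent code z and an
# overparametrized hidden layer (h >> m). All choices here are illustrative.
z = rng.standard_normal(d)
W1 = rng.standard_normal((h, d)) / np.sqrt(d)
W2 = rng.standard_normal((n, h)) / np.sqrt(h)

def forward(W1, W2):
    a = np.tanh(W1 @ z)
    return W2 @ a, a

# Heavy-ball (inertial) iteration on L(theta) = 0.5 * ||A g_theta(z) - y||^2.
lr, beta = 0.005, 0.9
V1, V2 = np.zeros_like(W1), np.zeros_like(W2)
x0, _ = forward(W1, W2)
misfit0 = np.linalg.norm(A @ x0 - y)
for _ in range(3000):
    x, a = forward(W1, W2)
    r = A @ x - y                        # residual in measurement space
    g_x = A.T @ r                        # dL/dx
    g_W2 = np.outer(g_x, a)              # backprop to the output layer
    g_pre = (W2.T @ g_x) * (1.0 - a**2)  # tanh'(u) = 1 - tanh(u)^2
    g_W1 = np.outer(g_pre, z)            # backprop to the hidden layer
    V1 = beta * V1 - lr * g_W1
    V2 = beta * V2 - lr * g_W2
    W1 += V1
    W2 += V2

x_hat, _ = forward(W1, W2)
misfit = np.linalg.norm(A @ x_hat - y)
print(misfit0, misfit)   # the data misfit shrinks as the flow progresses
```

In this overparametrized regime the iterates fit the measurements, and the iteration count plays the role of the regularization parameter (iterative regularization): stopping early trades data fidelity against stability, which is the kind of guarantee the talk analyzes for the continuous-time inertial flow.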
Syllabus
Jalal Fadili - Iterative Regularization of the Deep Inverse Prior via (Inertial) Gradient Flow
Taught by
Erwin Schrödinger International Institute for Mathematics and Physics (ESI)