Mixed-Precision Algorithms for Training Neural ODEs

Institute for Pure & Applied Mathematics (IPAM) via YouTube

Overview

Explore advanced mixed-precision computational strategies for training Neural Ordinary Differential Equations (Neural ODEs) in this 37-minute conference talk from IPAM's Scientific Machine Learning Workshop. Discover how standard mixed-precision approaches often fail with continuous-time models, causing instability and accuracy degradation, and learn about innovative solutions designed specifically for Neural ODEs. Examine the development and analysis of explicit mixed-precision ODE solvers paired with custom backpropagation schemes optimized for scientific machine learning applications. Understand how this hybrid approach utilizes low-precision arithmetic for neural network evaluations and intermediate state storage while preserving solution stability through dynamic adjoint scaling and high-precision accumulation techniques. See practical demonstrations of these methods applied to generative modeling tasks using continuous normalizing flows and conditional transport, showcasing how mixed-precision algorithms enable training of more complex continuous-time models on resource-constrained hardware while significantly reducing computational costs and memory requirements.
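The core idea described above — evaluating the network in low precision while accumulating the solver state in high precision — can be sketched in a few lines. The following is a minimal illustration, not code from the talk: the function names (`f_low_precision`, `euler_mixed`) and the use of explicit Euler with float16/float32 are assumptions chosen to make the accumulation pattern concrete.

```python
import numpy as np

def f_low_precision(y, t):
    # Hypothetical stand-in for a neural network right-hand side
    # evaluated in low precision (float16), as described in the talk.
    y16 = y.astype(np.float16)
    return (np.tanh(y16) * np.float16(0.5)).astype(np.float16)

def euler_mixed(y0, t0, t1, n_steps):
    """Explicit Euler step loop: float16 function evaluations,
    float32 state accumulation (the 'high-precision accumulation'
    idea from the overview)."""
    y = y0.astype(np.float32)              # high-precision accumulator
    h = np.float32((t1 - t0) / n_steps)
    t = np.float32(t0)
    for _ in range(n_steps):
        dy = f_low_precision(y, t)         # low-precision evaluation
        y = y + h * dy.astype(np.float32)  # accumulate in float32
        t = t + h
    return y

y_final = euler_mixed(np.ones(4), 0.0, 1.0, 100)
```

A production solver would pair this forward pass with a custom backpropagation scheme and dynamic adjoint scaling, as the talk discusses; those pieces are omitted here.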

Syllabus

Lars Ruthotto - Mixed-Precision Algorithms for Training Neural ODEs - IPAM at UCLA

Taught by

Institute for Pure & Applied Mathematics (IPAM)

