Mixed-Precision Algorithms for Training Neural ODEs
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Overview
Explore advanced mixed-precision computational strategies for training Neural Ordinary Differential Equations (Neural ODEs) in this 37-minute conference talk from IPAM's Scientific Machine Learning Workshop. Discover why standard mixed-precision approaches often fail for continuous-time models, causing instability and accuracy degradation, and learn about solutions designed specifically for Neural ODEs. Examine the development and analysis of explicit mixed-precision ODE solvers paired with custom backpropagation schemes optimized for scientific machine learning. Understand how this hybrid approach uses low-precision arithmetic for neural network evaluations and intermediate state storage while preserving solution stability through dynamic adjoint scaling and high-precision accumulation. See practical demonstrations on generative modeling tasks using continuous normalizing flows and conditional transport, showing how mixed-precision algorithms enable training of more complex continuous-time models on resource-constrained hardware while significantly reducing computational cost and memory requirements.
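The core idea described above — evaluating the right-hand-side function in low precision while accumulating the solution state in high precision — can be illustrated with a minimal NumPy sketch. This is not the talk's actual implementation; the function name, the explicit Euler scheme, and the float16/float64 split are illustrative assumptions chosen to show the pattern.

```python
import numpy as np

def mixed_precision_euler(f, y0, t0, t1, n_steps):
    """Illustrative explicit Euler step (not the talk's solver):
    evaluate f and store intermediate increments in low precision
    (float16), but accumulate the state in high precision (float64)."""
    h = (t1 - t0) / n_steps
    y = np.asarray(y0, dtype=np.float64)  # high-precision accumulator
    t = t0
    for _ in range(n_steps):
        # low-precision "network" evaluation and intermediate storage
        dy = f(t, y.astype(np.float16)).astype(np.float16)
        # high-precision accumulation limits rounding-error growth
        y = y + h * dy.astype(np.float64)
        t += h
    return y

# Toy problem: dy/dt = -y with y(0) = 1, so y(1) = exp(-1) ≈ 0.3679
y1 = mixed_precision_euler(lambda t, y: -y, [1.0], 0.0, 1.0, 1000)
```

Even though each function evaluation carries only float16 precision, the float64 accumulator keeps the per-step rounding errors from compounding, which is the same rationale behind high-precision accumulation in mixed-precision training.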
Syllabus
Lars Ruthotto - Mixed-Precision Algorithms for Training Neural ODEs - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)