Mixed-Precision Algorithms for Training Neural ODEs
Institute for Pure & Applied Mathematics (IPAM) via YouTube
Overview
Explore advanced mixed-precision computational strategies for training Neural Ordinary Differential Equations (Neural ODEs) in this 37-minute conference talk from IPAM's Scientific Machine Learning Workshop. Discover how standard mixed-precision approaches often fail with continuous-time models, causing instability and accuracy degradation, and learn about innovative solutions designed specifically for Neural ODEs. Examine the development and analysis of explicit mixed-precision ODE solvers paired with custom backpropagation schemes optimized for scientific machine learning applications. Understand how this hybrid approach utilizes low-precision arithmetic for neural network evaluations and intermediate state storage while preserving solution stability through dynamic adjoint scaling and high-precision accumulation techniques. See practical demonstrations of these methods applied to generative modeling tasks using continuous normalizing flows and conditional transport, showcasing how mixed-precision algorithms enable training of more complex continuous-time models on resource-constrained hardware while significantly reducing computational costs and memory requirements.
Syllabus
Lars Ruthotto - Mixed-Precision Algorithms for Training Neural ODEs - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)