Leveraging Sparsity to Accelerate Automatic Differentiation
The Julia Programming Language via YouTube
Overview
Explore how to dramatically accelerate automatic differentiation by exploiting the inherent sparsity of Jacobians and Hessians in this 30-minute conference talk from JuliaCon Local Paris 2025. These matrices are often expensive to compute, yet they play vital roles in scientific computing and machine learning applications ranging from optimization to probabilistic modeling. Discover how building on top of DifferentiationInterface.jl brings Automatic Sparse Differentiation to all major Julia AD backends, including ForwardDiff and Enzyme. Gain insight into practical techniques for detecting and exploiting sparsity patterns to significantly reduce the cost of gradient, Jacobian, and Hessian computations while maintaining accuracy. Understand the implementation details and performance benefits of sparse automatic differentiation within the Julia ecosystem, and see how these advances can accelerate your own scientific computing and machine learning workflows.
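To illustrate the core idea behind sparse automatic differentiation — that structurally orthogonal Jacobian columns can be grouped by a graph coloring and recovered from far fewer directional-derivative passes — here is a minimal, language-agnostic sketch. The talk's actual implementation lives in Julia's DifferentiationInterface.jl ecosystem; this Python version uses finite differences as a stand-in for forward-mode AD, and the function `f`, the hand-picked 3-coloring, and the helper `jvp` are illustrative assumptions, not the talk's code.

```python
import numpy as np

def f(x):
    # Illustrative tridiagonal-sparse function: output i depends only on
    # x[i-1], x[i], and x[i+1], so the Jacobian is tridiagonal.
    n = len(x)
    y = np.empty(n)
    for i in range(n):
        y[i] = x[i] ** 2
        if i > 0:
            y[i] += np.sin(x[i - 1])
        if i < n - 1:
            y[i] += x[i + 1]
    return y

def jvp(f, x, v, h=1e-7):
    # Forward-difference Jacobian-vector product; a stand-in for one
    # forward-mode AD pass along seed direction v.
    return (f(x + h * v) - f(x)) / h

n = 9
x = np.linspace(0.1, 1.0, n)

# Structural coloring: in a tridiagonal pattern, columns j and j+3 never
# share a nonzero row, so 3 colors suffice regardless of n.
colors = np.arange(n) % 3

# One JVP per color instead of one per column: 3 passes instead of n.
compressed = np.column_stack(
    [jvp(f, x, (colors == c).astype(float)) for c in range(3)]
)

# Decompress: entry J[i, j] is read from the compressed column holding j's
# color, valid because no two same-colored columns overlap in any row.
J = np.zeros((n, n))
for j in range(n):
    for i in range(max(0, j - 1), min(n, j + 2)):
        J[i, j] = compressed[i, colors[j]]

# Reference: dense Jacobian from n separate JVPs with unit seed vectors.
J_dense = np.column_stack([jvp(f, x, np.eye(n)[:, j]) for j in range(n)])
print(np.allclose(J, J_dense, atol=1e-5))  # True
```

The payoff is exactly what the talk advertises: the number of AD passes drops from the number of columns to the number of colors, which for banded or otherwise sparse Jacobians is a small constant. In the Julia ecosystem, sparsity detection and coloring are automated rather than hand-written as above.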
Syllabus
Leveraging Sparsity to Accelerate Automatic Differentiation | Hill | Paris 2025
Taught by
The Julia Programming Language