Overview
Explore advanced optimization techniques and programming abstractions for sparse machine learning systems in this 19-minute conference presentation from the Sparse 2025 workshop. Delve into the challenges of efficiently performing training and inference on deep neural networks that operate on various forms of sparse data, including graphs and regularly structured sparse attention matrices. Learn why optimizations should be approached holistically rather than by tuning isolated sparse primitives, accounting for the interactions between the sparse, dense, and temporal components of machine learning models. Discover specific optimization strategies and abstractions developed for static and temporal graph neural networks, sparse convolutional networks, and sparse attention mechanisms that achieve superior performance while improving programmer productivity. Understand why traditional optimization approaches fall short for complex sparse machine learning systems, and how comprehensive solutions can address the growing computational demands of large-scale models operating on sparse data structures.
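To make the notion of a "sparse primitive" concrete (this example is illustrative and not drawn from the presentation itself), the sketch below shows a compressed sparse row (CSR) matrix-vector product, the kind of kernel that underlies graph neural network message passing and sparse attention:

```python
# Minimal CSR sparse matrix-vector product (SpMV).
# Illustrative sketch only; not code from the presentation.

def csr_spmv(indptr, indices, data, x):
    """Compute y = A @ x for a matrix A stored in CSR form.

    indptr[i]..indptr[i+1] delimits the nonzeros of row i;
    indices holds their column positions, data their values.
    """
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(indptr) - 1):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# The 3x3 matrix [[2,0,0],[0,0,3],[1,0,4]] in CSR form:
indptr  = [0, 1, 2, 4]
indices = [0, 2, 0, 2]
data    = [2.0, 3.0, 1.0, 4.0]
print(csr_spmv(indptr, indices, data, [1.0, 1.0, 1.0]))  # → [2.0, 3.0, 5.0]
```

Optimizing such a kernel in isolation misses opportunities that only appear when the surrounding dense and temporal computation is considered together, which is the holistic perspective the talk advocates.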
Syllabus
[Sparse'25] Optimizations and abstractions for sparse machine learning
Taught by
ACM SIGPLAN