GraphBLAS and Sparse Computation on GPUs - Limits and Progress
The Julia Programming Language via YouTube
Overview
Explore the challenges and solutions in implementing the GraphBLAS API for sparse linear algebra computations on GPUs in this lightning talk from JuliaCon Global 2025.

Learn about the fundamental obstacles encountered when building sparse linear algebra frameworks on GPU architectures, including the high memory-to-computation ratio that makes sparse operations memory-bound rather than compute-bound, potentially leaving GPU compute capacity underutilized. Discover how load-balancing issues arise from the non-uniform distribution of non-zero elements in matrices, leading to uneven workloads across GPU threads and performance degradation in real-world applications with heterogeneous sparsity patterns. Understand the modularity challenge of supporting custom user-defined operators in the GraphBLAS API without creating an unwieldy codebase with hand-written kernels for every operator combination.

See how the JuliaGPU community's KernelAbstractions.jl enables a solution through modular, parametrized kernels that are compiled just-in-time to generate efficient, operator-specific low-level kernels, addressing the modularity problem while maintaining performance and code maintainability.
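To make the "parametrized kernel" idea concrete, here is a minimal, hypothetical sketch (not code from the talk): a single CSR sparse matrix-vector product written once over generic `addop`/`mulop` semiring operators. Julia's JIT compiler specializes the function for each concrete operator pair, which is the same mechanism KernelAbstractions.jl relies on to emit operator-specific GPU kernels from one modular kernel definition.

```julia
# Hypothetical sketch: one CSR SpMV routine parametrized by the
# semiring's "add" and "multiply" operators. The JIT generates a
# specialized method per operator pair, so no kernel needs to be
# hand-written for each GraphBLAS operator combination.
function semiring_spmv(rowptr, colind, vals, x, addop, mulop, init)
    n = length(rowptr) - 1
    y = fill(init, n)
    @inbounds for i in 1:n
        acc = init
        # Reduce over the non-zeros of row i with the custom operators.
        for k in rowptr[i]:(rowptr[i+1] - 1)
            acc = addop(acc, mulop(vals[k], x[colind[k]]))
        end
        y[i] = acc
    end
    return y
end

# 2x2 matrix [1 0; 2 3] in CSR form (1-based indices).
rowptr = [1, 2, 4]
colind = [1, 1, 2]
vals   = [1.0, 2.0, 3.0]
x      = [10.0, 100.0]

# Ordinary (+, *) semiring: standard SpMV.
semiring_spmv(rowptr, colind, vals, x, +, *, 0.0)    # → [10.0, 320.0]

# (min, +) tropical semiring, as used for shortest paths in GraphBLAS.
semiring_spmv(rowptr, colind, vals, x, min, +, Inf)  # → [11.0, 12.0]
```

On a GPU, the per-row loop would instead become a KernelAbstractions.jl `@kernel` body indexed by thread, but the operator-parametrization principle is identical; note that a simple row-per-thread mapping also illustrates the load-balancing problem, since rows with many non-zeros dominate their thread's runtime.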
Syllabus
GraphBLAS and Sparse Computation on GPUs: Limits and Progress | Buttier | JuliaCon Global 2025
Taught by
The Julia Programming Language