GraphBLAS and Sparse Computation on GPUs - Limits and Progress
The Julia Programming Language via YouTube
Overview
Explore the challenges and solutions in implementing the GraphBLAS API for sparse linear algebra on GPUs in this lightning talk from JuliaCon Global 2025. Learn about the fundamental obstacles to building sparse linear algebra frameworks on GPU architectures: the high memory-to-computation ratio makes sparse operations memory-bound rather than compute-bound, leaving much of the GPU's computing capacity underutilized. Discover how the non-uniform distribution of non-zero elements in real-world matrices causes load imbalance, with uneven workloads across GPU threads degrading performance on heterogeneous sparsity patterns. Understand the modularity challenge of supporting custom user-defined operators in the GraphBLAS API without an unwieldy codebase containing a kernel for every operator combination. Finally, see how the JuliaGPU community's KernelAbstractions.jl addresses this through modular, parametrized kernels that are compiled just-in-time into efficient, operator-specific low-level kernels, preserving both performance and code maintainability.
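The operator-parametrization idea can be sketched in plain Julia on the CPU: writing one generic kernel over an operator `op` lets the JIT compiler specialize a separate efficient kernel per operator, which is the same mechanism KernelAbstractions.jl exploits to avoid hand-writing a GPU kernel for every operator combination. The function name and the sparse-vector layout below are illustrative assumptions, not the talk's actual code.

```julia
# Hypothetical sketch: a single generic element-wise kernel over a sparse
# vector (stored as nonzero indices `nzind` plus values `nzval`) and a
# dense vector `x`. Julia's JIT compiles one specialized native kernel
# per operator `op` passed in, mirroring how KernelAbstractions.jl
# generates operator-specific GPU kernels from one parametrized source.
function ewise_apply(op, nzind::Vector{Int}, nzval::Vector{T}, x::Vector{T}) where T
    out = similar(nzval)
    @inbounds for (k, i) in enumerate(nzind)
        out[k] = op(nzval[k], x[i])  # `op` is inlined into the specialized kernel
    end
    return out
end

# Each distinct operator triggers its own JIT specialization:
ewise_apply(+,   [1, 3], [2.0, 5.0], [10.0, 20.0, 30.0])  # -> [12.0, 35.0]
ewise_apply(max, [2],    [4.0],      [1.0, 9.0])          # -> [9.0]
```

On a GPU, the same pattern would be expressed as a `@kernel` function with KernelAbstractions.jl, so one source kernel serves any user-defined GraphBLAS operator.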
Syllabus
GraphBLAS and Sparse Computation on GPUs: Limits and Progress | Buttier | JuliaCon Global 2025
Taught by
The Julia Programming Language