
GraphBLAS and Sparse Computation on GPUs - Limits and Progress

The Julia Programming Language via YouTube

Overview

Explore the challenges and solutions in implementing the GraphBLAS API for sparse linear algebra on GPUs in this lightning talk from JuliaCon Global 2025. The talk covers three fundamental obstacles to building sparse linear algebra frameworks on GPU architectures.

First, sparse operations have a high memory-to-computation ratio: they are memory-bound rather than compute-bound, leaving much of the GPU's arithmetic capacity idle. Second, load balancing suffers because non-zero elements are distributed non-uniformly across a matrix, producing uneven workloads across GPU threads and degrading performance on real-world matrices with heterogeneous sparsity patterns. Third, the GraphBLAS API lets users supply custom operators, so a naive implementation would need a hand-written kernel for every operator combination, producing an unmaintainable codebase.

The talk then shows how the JuliaGPU community's KernelAbstractions.jl addresses the modularity problem: a single parametrized kernel is compiled just-in-time into efficient, operator-specific low-level kernels, preserving both performance and code maintainability.
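The operator-modularity idea can be illustrated in plain Julia without GPU code: write one generic routine parametrized over user-supplied (⊕, ⊗) operators, and let Julia's JIT specialize it per operator pair, which is the same mechanism KernelAbstractions.jl relies on to generate operator-specific GPU kernels from a single parametrized kernel. The sketch below (names `semiring_mv`, `add`, `mul` are illustrative, not from the talk) computes a GraphBLAS-style semiring sparse matrix-vector product over a CSC matrix:

```julia
using SparseArrays

# Generic semiring sparse matrix-vector product: y[i] = ⊕_j (A[i,j] ⊗ x[j]).
# `add` and `mul` are arbitrary user-defined operators; Julia compiles a
# specialized method for each operator pair, so one generic kernel covers
# every operator combination without hand-written variants.
function semiring_mv(add, mul, zero_el, A::SparseMatrixCSC, x::AbstractVector)
    y = fill(zero_el, size(A, 1))
    rows = rowvals(A)
    vals = nonzeros(A)
    for j in 1:size(A, 2)          # iterate stored entries column by column
        for k in nzrange(A, j)
            i = rows[k]
            y[i] = add(y[i], mul(vals[k], x[j]))
        end
    end
    return y
end

A = sparse([1, 2, 2], [2, 1, 3], [3.0, 1.0, 4.0], 3, 3)

# Conventional (+, *) semiring reproduces ordinary A * x:
semiring_mv(+, *, 0.0, A, [1.0, 2.0, 3.0])   # == [6.0, 13.0, 0.0]

# (min, +) semiring: one relaxation step of shortest-path distances
# from source vertex 1, a classic GraphBLAS use case:
semiring_mv(min, +, Inf, A, [0.0, Inf, Inf])  # == [Inf, 1.0, Inf]
```

On a GPU, the inner loop would instead become a KernelAbstractions.jl `@kernel` indexed by thread, but the specialization story is identical: the operators are ordinary function arguments, and the compiler emits a dedicated kernel for each combination it encounters.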

Syllabus

GraphBLAS and Sparse Computation on GPUs: Limits and Progress | Buttier | JuliaCon Global 2025

Taught by

The Julia Programming Language

