Overview
Learn about parallelism techniques in contemporary sparse matrix solvers in this lecture by Aydin Buluc of Lawrence Berkeley National Laboratory. The lecture covers the fundamental challenges of managing parallelism in sparse linear algebra computations, examining how modern computational frameworks handle the irregular memory access patterns and load-balancing issues inherent in sparse matrix operations.

Explore parallel algorithms and data structures designed specifically for sparse systems, including domain decomposition methods, iterative solvers, and direct factorization techniques. Understand the trade-offs between different parallelization strategies, from shared-memory approaches using OpenMP to distributed-memory implementations with MPI, and learn how to optimize performance across hardware architectures ranging from multicore processors to GPU accelerators.

Gain insights into recent research in parallel sparse computing, including adaptive load balancing, communication-avoiding algorithms, and hybrid programming models that combine multiple levels of parallelism to achieve scalable performance in large-scale scientific computing applications.
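To make the "irregular memory access" point concrete, here is a minimal sketch (not from the lecture) of sparse matrix-vector multiplication in CSR format, the kernel at the heart of most iterative sparse solvers. The indirect index into `x` and the varying row lengths are exactly what complicate parallelization and load balancing:

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A @ x with A stored in CSR (compressed sparse row) format.

    The indirect access x[indices[k]] is the irregular memory pattern
    mentioned above, and row nonzero counts vary, so a naive
    parallel-for over rows can be badly load-imbalanced.
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):  # rows are independent -> each can go to a thread
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# Example 3x3 sparse matrix:
# [[2, 0, 1],
#  [0, 3, 0],
#  [4, 0, 5]]
indptr = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
x = np.array([1.0, 1.0, 1.0])
print(csr_spmv(indptr, indices, data, x))  # [3. 3. 9.]
```

An OpenMP version would put a `#pragma omp parallel for` over the row loop; an MPI version would partition rows across ranks and communicate the needed entries of `x`, which is where communication-avoiding formulations come in.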
Syllabus
Parallelism in modern sparse solvers
Taught by
Simons Institute