
YouTube

Compressed and Parallelized Structured Tensor Algebra

ACM SIGPLAN via YouTube

Overview

Watch this 14-minute conference presentation from OOPSLA 2025 that introduces DASTAC, a framework for optimizing tensor algebra operations in data-intensive applications such as machine learning and scientific computing. Learn how researchers from the University of Edinburgh and the University of Cambridge bridge the gap between dense and sparse tensor algebra by automatically propagating high-level tensor structure information down to low-level code generation. Discover the framework's key techniques, including automatic data layout compression, polyhedral analysis, and affine code generation, which together reduce memory footprint while enabling significant performance improvements. Explore how DASTAC leverages MLIR for parallelization and polyhedral optimizations, achieving speedup factors ranging from 0.16x to 44.83x in single-threaded settings and from 1.37x to 243.78x with multi-threading, often outperforming hand-tuned expert implementations. Gain insights into sparse tensor optimization, compiler optimization techniques, and the Barvinok algorithm as applied to structured tensor algebra problems in modern computational workloads.
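To make the idea of structure-driven data layout compression concrete, here is a minimal, hand-written sketch (not DASTAC's actual output, which is generated automatically via polyhedral analysis) of the kind of transformation the talk describes: a lower-triangular matrix is packed into a flat buffer holding only its structured region, and an affine index map replaces two-dimensional indexing. The function and variable names below are illustrative assumptions, not identifiers from the framework.

```python
# Sketch of structure-aware layout compression for a lower-triangular
# matrix L: store only the n*(n+1)/2 entries with j <= i, and use the
# affine map k = i*(i+1)//2 + j to locate them in the packed buffer.

def compress_lower_triangular(dense):
    """Pack the lower triangle of an n x n matrix into a flat list."""
    n = len(dense)
    return [dense[i][j] for i in range(n) for j in range(i + 1)]

def tri_index(i, j):
    """Affine map from (i, j) with j <= i to the packed offset."""
    return i * (i + 1) // 2 + j

def matvec_compressed(packed, x):
    """Compute y = L @ x using only the packed storage, iterating
    only over the structured (nonzero) region of each row."""
    n = len(x)
    y = [0.0] * n
    for i in range(n):
        for j in range(i + 1):
            y[i] += packed[tri_index(i, j)] * x[j]
    return y

# Example: a 4x4 lower-triangular matrix.
n = 4
L = [[float(i + j + 1) if j <= i else 0.0 for j in range(n)]
     for i in range(n)]
packed = compress_lower_triangular(L)
x = [1.0, 2.0, 3.0, 4.0]
print(len(packed))             # 10 stored values instead of 16
print(matvec_compressed(packed, x))
```

The memory saving here is modest (10 values instead of 16), but for a large n-by-n triangular or symmetric tensor the compressed layout roughly halves storage, and the compiler-generated loop nest skips the zero region entirely; DASTAC derives such maps and loop bounds automatically rather than by hand.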

Syllabus

[OOPSLA'25] Compressed and Parallelized Structured Tensor Algebra

Taught by

ACM SIGPLAN

