Overview
Watch this 14-minute conference presentation from OOPSLA 2025 introducing DASTAC, a framework for optimizing tensor algebra operations in data-intensive applications such as machine learning and scientific computing. Learn how researchers from the University of Edinburgh and the University of Cambridge bridge the gap between dense and sparse tensor algebra by automatically propagating high-level tensor structure information down to low-level code generation. Discover the framework's key techniques, including automatic data layout compression, polyhedral analysis, and affine code generation, which together reduce memory footprint while enabling significant performance improvements. Explore how DASTAC leverages MLIR for parallelization and polyhedral optimizations, reporting performance ranging from 0.16x to 44.83x relative to baselines in single-threaded cases and 1.37x to 243.78x in multi-threaded implementations, often outperforming hand-tuned expert implementations. Gain insights into the technical approaches behind sparse tensor optimization, compiler optimization techniques, and the Barvinok algorithm as applied to structured tensor algebra problems in modern computational workloads.
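To make the core idea concrete, here is a minimal sketch (not DASTAC's actual code or API; the function names and the upper-triangular example are illustrative assumptions). It shows what "data layout compression with an affine index map" means in the simplest case: a structurally upper-triangular matrix is stored in a dense 1-D buffer with no zero padding, and a matrix-vector product runs directly on the compressed layout by iterating only over the structurally nonzero region.

```python
import numpy as np

def tri_index(i, j, n):
    # Affine-style index map: position of entry (i, j), j >= i, in the
    # compressed buffer. Rows 0..i-1 contribute n, n-1, ..., n-i+1 entries.
    return i * n - i * (i - 1) // 2 + (j - i)

def compress_upper(A):
    # Pack the upper-triangular part of an n x n matrix into a dense buffer
    # of length n*(n+1)//2, discarding the structurally zero lower part.
    n = A.shape[0]
    buf = np.empty(n * (n + 1) // 2)
    for i in range(n):
        for j in range(i, n):
            buf[tri_index(i, j, n)] = A[i, j]
    return buf

def matvec_compressed(buf, x, n):
    # y = U @ x computed directly on the compressed layout: the loop bounds
    # (j from i to n-1) encode the structure, so no zeros are ever touched.
    y = np.zeros(n)
    for i in range(n):
        for j in range(i, n):
            y[i] += buf[tri_index(i, j, n)] * x[j]
    return y
```

In DASTAC itself this kind of index map and the restricted loop bounds are derived automatically via polyhedral analysis and emitted as affine code through MLIR, rather than written by hand as above; the sketch only illustrates why the compressed layout saves memory while keeping fast, branch-free dense-style loops.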
Syllabus
[OOPSLA'25] Compressed and Parallelized Structured Tensor Algebra
Taught by
ACM SIGPLAN