Overview
Watch this 14-minute conference presentation from OOPSLA 2025 introducing DASTAC, a novel framework for optimizing structured tensor algebra in data-intensive applications such as machine learning and scientific computing. Learn how researchers from the University of Edinburgh and the University of Cambridge bridge the gap between dense and sparse tensor algebra by automatically propagating high-level tensor structure information down to low-level code generation. Discover the framework's key techniques, including automatic data layout compression, polyhedral analysis, and affine code generation, which together reduce memory footprint while enabling significant performance improvements. Explore how DASTAC leverages MLIR for parallelization and polyhedral optimization, achieving speedups ranging from 0.16x to 44.83x in single-threaded cases and 1.37x to 243.78x in multi-threaded implementations, often outperforming hand-tuned expert implementations. Gain insights into sparse tensor optimization, compiler optimization techniques, and the Barvinok algorithm as applied to structured tensor algebra in modern computational workloads.
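To make the idea of automatic data layout compression concrete, the sketch below (not DASTAC's generated code, just a hand-written Python illustration) packs a lower-triangular matrix into a dense one-dimensional buffer and indexes it with a closed-form, affine-style index function, the same kind of mapping a structure-aware compiler could emit instead of a generic sparse format:

```python
import numpy as np

def tri_index(i, j):
    # Closed-form index into a row-major packed lower-triangular buffer:
    # row i starts at offset i*(i+1)/2, column j is the offset within the row.
    return i * (i + 1) // 2 + j

n = 4
dense = np.tril(np.arange(1.0, n * n + 1).reshape(n, n))

# Compressed layout: store only the n*(n+1)/2 structurally nonzero entries,
# with no per-entry index arrays as a generic sparse format would need.
packed = np.array([dense[i, j] for i in range(n) for j in range(i + 1)])

def matvec_packed(packed, x, n):
    # Matrix-vector product that iterates only over the structured nonzeros,
    # using the affine index function to locate each entry in the buffer.
    y = np.zeros(n)
    for i in range(n):
        for j in range(i + 1):
            y[i] += packed[tri_index(i, j)] * x[j]
    return y

x = np.ones(n)
assert np.allclose(matvec_packed(packed, x, n), dense @ x)
```

Because the index mapping is a closed-form expression rather than stored index metadata, the loop nest stays affine, which is what makes polyhedral analysis and parallelization applicable to the compressed code.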
Syllabus
[OOPSLA'25] Compressed and Parallelized Structured Tensor Algebra
Taught by
ACM SIGPLAN