Learn about Mirage, the first multi-level superoptimizer for tensor programs, in this 16-minute conference presentation from OSDI '25. Discover how researchers from Carnegie Mellon University, Peking University, Pennsylvania State University, Purdue University, and Weizmann Institute of Science developed a new approach to optimizing tensor computations across the GPU compute hierarchy.

Explore the key innovation, µGraphs: a uniform representation that spans the kernel, thread block, and thread levels, enabling the discovery of novel optimizations that combine algebraic transformations, schedule transformations, and custom kernel generation. Understand the abstraction-based pruning technique that shrinks the search space while preserving optimality guarantees, and examine the probabilistic equivalence verification procedure, which rests on strong theoretical foundations.

See evaluation results demonstrating Mirage's significant performance improvements over existing approaches, even for widely used and heavily optimized deep neural networks, and learn how to access the open-source implementation available on GitHub.
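To make the idea of probabilistic equivalence verification concrete, here is a minimal, hypothetical sketch (not Mirage's actual implementation): two candidate programs are evaluated on random inputs drawn from a finite field, and agreement across all trials gives high-probability evidence of equivalence. The function name `probably_equivalent` and all parameters are illustrative assumptions.

```python
import random

def probably_equivalent(f, g, n_inputs, trials=16, p=2**31 - 1):
    """Hypothetical sketch: test whether two programs f and g agree
    on random inputs drawn from the finite field Z_p. Agreement on
    every trial implies equivalence with high probability for
    programs built from polynomial (e.g. linear-algebra) operators."""
    for _ in range(trials):
        xs = [random.randrange(p) for _ in range(n_inputs)]
        if f(*xs) % p != g(*xs) % p:
            return False  # a single counterexample disproves equivalence
    return True

# Two algebraically equivalent expressions: (a + b) * c vs. a*c + b*c
assert probably_equivalent(lambda a, b, c: (a + b) * c,
                           lambda a, b, c: a * c + b * c, 3)

# A non-equivalent pair is rejected with overwhelming probability
assert not probably_equivalent(lambda a, b: a + b,
                               lambda a, b: a * b, 2)
```

A random-testing check like this can only falsify equivalence with certainty; the theoretical contribution highlighted in the talk is bounding the error probability of the accepting case.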