Overview
Learn about a novel deep learning program optimization strategy in this 17-minute conference talk from OSDI '25. Discover how Bayesian code diffusion accelerates the auto-tuning process of deep learning compilers by applying the Bayesian concepts of prior and posterior distributions to program optimization. Explore how the approach efficiently searches for optimal program code in reduced search spaces through iterative code diffusion, and understand how pre-training and fine-tuning the cost model improves both predictive accuracy and training efficiency.

Examine the practical implementation in Ansor and the performance evaluation across diverse deep learning models on both CPU and GPU platforms. Analyze how the method addresses the difficulty existing approaches face in reliably generating high-performing deep learning programs across different model architectures and hardware platforms. Review the reported improvements, including up to 3.31× speedup in end-to-end compilation time while maintaining equivalent program execution latency across multiple setups, demonstrating efficient and principled optimization across a wide range of deep learning models, operators, and hardware configurations.
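To make the general idea concrete, below is a minimal, self-contained sketch of prior-to-posterior search over candidate program configurations: sample candidates from a prior, measure them, and use the observations to concentrate later sampling on promising regions. This is only a conceptual illustration under simplified assumptions; the functions mock_latency and bayesian_style_search are hypothetical stand-ins, not the actual Bayesian code diffusion algorithm or Ansor integration presented in the talk.

```python
# Conceptual sketch only: iterative, posterior-guided search over candidate
# tuning configurations. Cost function, parameter space, and update rule are
# illustrative stand-ins, not the method described in the OSDI '25 paper.
import math
import random


def mock_latency(tile_size: int) -> float:
    """Stand-in for compiling and timing one candidate program (hypothetical)."""
    return abs(math.log2(tile_size) - 5.0) + random.uniform(0.0, 0.2)


def bayesian_style_search(candidates, rounds=4, samples_per_round=8):
    # Start from a uniform prior over candidate configurations.
    weights = {c: 1.0 for c in candidates}
    best = None
    for _ in range(rounds):
        total = sum(weights.values())
        probs = [weights[c] / total for c in candidates]
        drawn = random.choices(candidates, weights=probs, k=samples_per_round)
        for c in drawn:
            cost = mock_latency(c)
            if best is None or cost < best[1]:
                best = (c, cost)
            # Posterior-like update: reward low-cost configurations so later
            # rounds sample from a narrower, more promising search space.
            weights[c] *= math.exp(-cost)
    return best


if __name__ == "__main__":
    tile_sizes = [2 ** k for k in range(1, 10)]  # candidate tiling factors
    config, latency = bayesian_style_search(tile_sizes)
    print(f"best tile size: {config}, measured latency: {latency:.3f}")
```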
Syllabus
OSDI '25 - Bayesian Code Diffusion for Efficient Automatic Deep Learning Program Optimization
Taught by
USENIX