Overview
Explore a comprehensive evaluation of mixed-precision tuning tools for floating-point programs in this 15-minute conference presentation from OOPSLA 2025. Discover how researchers Anastasia Isychev and Debasmita Lohar challenge conventional wisdom about the trade-offs between sound static analysis and dynamic sampling approaches in numerical optimization. Learn about their quantitative comparison measuring performance gains, optimization potential, and soundness guarantees—what they term "the cost of soundness." Examine their findings from experiments on the FPBench benchmark suite, which reveal that sound tools enhanced with regime inference can match or outperform dynamic optimizers for small straight-line numerical programs while providing formal correctness guarantees. Understand the limitations of standalone sound tools when accuracy constraints are tight, and discover how dynamic tools, though effective across different error targets, can exceed the maximum allowed error by up to nine orders of magnitude. Gain insights into floating-point optimization strategies, mixed-precision tuning methodologies, and the practical implications of choosing between conservative sound methods and aggressive dynamic approaches for numerical code optimization on resource-limited hardware.
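To make the trade-off concrete, here is a minimal sketch (not taken from the talk or from FPBench) of what a dynamic, sampling-based check does: evaluate a small straight-line kernel in a lowered precision, compare it against a high-precision reference on random inputs, and test the observed error against a hypothetical accuracy constraint. The kernel, the error bound, and the sample count are all illustrative assumptions.

```python
import numpy as np

def poly(x):
    # Straight-line kernel (x^2 - 2)x + 1, evaluated in the
    # array's own dtype so every intermediate stays low-precision.
    two = x.dtype.type(2.0)
    one = x.dtype.type(1.0)
    return (x * x - two) * x + one

rng = np.random.default_rng(0)
# Draw inputs and round them to float32 so both versions see
# exactly representable values.
samples = rng.uniform(-1.0, 1.0, size=10_000).astype(np.float32)

ref = poly(samples.astype(np.float64))   # high-precision reference
low = poly(samples)                      # candidate low-precision version

max_err = np.max(np.abs(ref - low.astype(np.float64)))
ERROR_BOUND = 1e-6  # hypothetical accuracy constraint
print(f"max sampled error: {max_err:.3e}, within bound: {max_err <= ERROR_BOUND}")
```

The key limitation the talk highlights is visible here: sampling only shows the bound holds on the tested inputs, so a dynamic tuner can accept a configuration whose true worst-case error is far larger, whereas a sound static analyzer certifies the bound for all inputs at the cost of more conservative precision choices.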
Syllabus
[OOPSLA'25] Cost of Soundness in Mixed-Precision Tuning
Taught by
ACM SIGPLAN