Do You Even Lift? Strengthening Compiler Security Guarantees Against Spectre Attacks
ACM SIGPLAN via YouTube
Overview
Explore a 19-minute conference talk from POPL 2025 that presents a novel secure compilation framework for strengthening compiler security guarantees against Spectre attacks. Learn how researchers Xaver Fabian, Marco Patrignani, Marco Guarnieri, and Michael Backes developed a method to lift security guarantees from weaker speculative semantics to stronger ones without requiring new secure compilation proofs. The presentation details their comprehensive security analysis of nine different countermeasures against five classes of Spectre attacks, tested against a speculative semantics accounting for five different speculation mechanisms. Discover which countermeasures can be securely lifted to the strongest speculative semantics and which ones fail when supporting indirect jump speculation. This research represents the most thorough security analysis of Spectre countermeasures implemented in mainstream compilers to date.
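To make the threat model concrete, here is a minimal sketch of the classic Spectre v1 (bounds check bypass) gadget shape that compiler countermeasures like the ones analyzed in the talk are designed to neutralize. The function and variable names are our own illustration, not code from the paper; the speculation-barrier comment names one mainstream mitigation strategy (fence insertion) as an example, not the talk's specific recommendation.

```c
#include <stddef.h>
#include <stdint.h>

// Spectre v1 gadget shape (illustrative, hypothetical names).
// If the branch predictor speculates the bounds check as taken
// while idx >= len, the out-of-bounds load of array[idx] can
// execute transiently and leak its value through the cache via
// the dependent access to probe[...].
uint8_t victim(const uint8_t *array, size_t len, size_t idx,
               const uint8_t *probe) {
    if (idx < len) {
        // A fence-based countermeasure would insert a speculation
        // barrier here (e.g. lfence on x86) so that the loads below
        // cannot execute until the bounds check resolves.
        return probe[array[idx] * 64];
    }
    return 0;
}
```

Architecturally the code is safe: the out-of-bounds path returns 0. The attack surface exists only in the transient execution window, which is why the security analysis must be carried out against a speculative semantics rather than the standard one.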
Syllabus
[POPL'25] Do You Even Lift? Strengthening Compiler Security Guarantees Against Spectre Attacks
Taught by
ACM SIGPLAN