Overview
Watch this 22-minute conference talk from POPL 2025, in which researchers Evgenii Moiseenko, Matteo Meluzzi, Innokentii Meleshchenko, Ivan Kabashnyi, Anton Podkopaev, and Soham Chakraborty present their approach to formal concurrency models. They propose a re-execution-based memory model (XMM) that bridges the traditional divide between per-execution and multi-execution models: XMM reasons about individual executions while relating them through a re-execution principle, yielding a framework that can be parameterized by existing axiomatic memory models. The speakers demonstrate their XC20 instantiation for the RC20 language model, which provides DRF guarantees, supports standard hardware mappings and compiler optimizations, and uniquely enables the thread-sequentialization optimization. The presentation also covers their sound model checker XMC and its evaluation on concurrency benchmarks. This ACM SIGPLAN talk includes supplementary materials and artifacts that have been evaluated as functional.
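To make the subject matter concrete, here is a minimal sketch (not taken from the talk) of the classic store-buffering litmus test that memory models like XMM and model checkers like XMC are designed to reason about. Under relaxed atomics, both threads can observe the other's store as not-yet-performed, so the outcome r1 = r2 = 0 is allowed; the harness names and structure below are illustrative assumptions, not the authors' artifact.

```cpp
#include <atomic>
#include <thread>
#include <utility>

// Store-buffering (SB) litmus test: two threads each store to one
// location and then load from the other, all with relaxed ordering.
// Relaxed memory models permit all four outcomes for (r1, r2),
// including (0, 0), which sequential consistency forbids.
std::pair<int, int> run_sb_once() {
    std::atomic<int> x{0}, y{0};
    int r1 = -1, r2 = -1;
    std::thread t1([&] {
        x.store(1, std::memory_order_relaxed);
        r1 = y.load(std::memory_order_relaxed);
    });
    std::thread t2([&] {
        y.store(1, std::memory_order_relaxed);
        r2 = x.load(std::memory_order_relaxed);
    });
    t1.join();
    t2.join();
    return {r1, r2};
}
```

A per-execution model checker enumerates the executions of such a program one at a time; a re-execution-based model like XMM additionally relates executions to each other, which is what lets it justify transformations such as merging the two threads into one.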
Syllabus
[POPL'25] Relaxed Memory Concurrency Re-executed
Taught by
ACM SIGPLAN