Overview
This conference talk from USENIX FAST '25 presents "Archer," a framework for adaptive memory compression in mobile systems. Discover how the researchers from East China Normal University, University of Science and Technology of China, and Huazhong University of Science and Technology found that approximately 25% of anonymous memory pages are highly correlated, the observation behind their approach. Learn why traditional page-by-page compression limits system responsiveness and how Archer's flexible-granularity compression technique overcomes this limitation. The presentation explains the redesigned LRU mechanism and the adaptive memory compression region, which integrate association-rule mining techniques into system design. Experimental results demonstrate notable performance improvements over state-of-the-art approaches: 1.55x faster app launching, 1.42x faster photo capture, and 1.31x higher frame rates. The 16-minute talk provides valuable insights for those interested in mobile device performance optimization and memory management techniques.
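To make the core idea concrete, here is a minimal, hypothetical sketch (not Archer's actual implementation) of association-rule mining over a page-access trace: page pairs that repeatedly appear in the same access window are treated as associated, so a system could compress and decompress them as one group rather than page by page. The function name, window size, and support threshold are illustrative assumptions.

```python
# Illustrative sketch only -- not code from the talk. Mines pairwise
# page associations from an access trace: pages co-accessed in enough
# sliding windows are candidates for group (multi-page) compression.
from collections import Counter
from itertools import combinations

def mine_page_associations(trace, window=4, min_support=2):
    """Return page pairs co-accessed in at least `min_support` windows."""
    pair_counts = Counter()
    for i in range(len(trace) - window + 1):
        pages = set(trace[i:i + window])            # distinct pages in window
        for pair in combinations(sorted(pages), 2):  # all unordered pairs
            pair_counts[pair] += 1
    return {pair for pair, n in pair_counts.items() if n >= min_support}

# Example trace: pages 1 and 2 are repeatedly touched together,
# so (1, 2) emerges as a strong association.
assoc = mine_page_associations([1, 2, 3, 1, 2, 4, 1, 2, 5])
```

A real kernel-level design would mine associations incrementally from reclaim-path events rather than a full trace, but the grouping principle is the same.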
Syllabus
FAST '25 - Archer: Adaptive Memory Compression with Page-Association-Rule Awareness for High-Speed..
Taught by
USENIX