Explore an innovative caching eviction algorithm called SIEVE in this conference talk from NSDI '24. Discover how SIEVE outperforms traditional algorithms like LRU in simplicity, efficiency, and scalability for web cache workloads. Learn about the algorithm's implementation across five production cache libraries and its impressive performance on 1559 cache traces from 7 sources. Understand how SIEVE achieves up to 63.2% lower miss ratio than ARC and surpasses 9 state-of-the-art algorithms on over 45% of the tested traces. Gain insights into SIEVE's superior scalability, which enables twice the throughput of an optimized 16-thread LRU implementation. Explore the potential of SIEVE as a cache primitive for building advanced eviction algorithms, offering a new perspective on efficient data serving in web caches.
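The paper describes SIEVE as a FIFO queue augmented with a per-object "visited" bit and a moving "hand": hits only set the bit (no queue mutation), and the hand sweeps from the queue's tail toward its head on eviction, clearing visited bits until it finds an unvisited object to retire. A minimal Python sketch of that mechanism (class and method names are illustrative, not from the paper's artifacts) might look like this:

```python
class Node:
    """One cached object in SIEVE's doubly linked FIFO queue."""
    __slots__ = ("key", "value", "visited", "prev", "next")

    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.visited = False
        self.prev = None
        self.next = None


class SieveCache:
    """Sketch of the SIEVE eviction algorithm described in the talk.

    Reads are lock-friendly because a hit only flips a bit; evictions
    are done by a hand that keeps its position between calls, sweeping
    from the oldest object (tail) toward the newest (head).
    """

    def __init__(self, capacity):
        assert capacity > 0
        self.capacity = capacity
        self.table = {}    # key -> Node
        self.head = None   # newest object
        self.tail = None   # oldest object
        self.hand = None   # eviction hand; retains position across evictions

    def get(self, key):
        node = self.table.get(key)
        if node is None:
            return None
        node.visited = True  # "lazy promotion": no list reordering on a hit
        return node.value

    def put(self, key, value):
        node = self.table.get(key)
        if node is not None:
            node.value = value
            node.visited = True
            return
        if len(self.table) >= self.capacity:
            self._evict()
        node = Node(key, value)
        node.next = self.head
        if self.head is not None:
            self.head.prev = node
        self.head = node
        if self.tail is None:
            self.tail = node
        self.table[key] = node

    def _evict(self):
        # Resume from where the hand last stopped, or at the tail.
        node = self.hand if self.hand is not None else self.tail
        # Skip visited objects, clearing their bits ("quick demotion"
        # of objects that were never re-accessed happens naturally).
        while node.visited:
            node.visited = False
            node = node.prev if node.prev is not None else self.tail
        self.hand = node.prev  # may be None; next sweep restarts at tail
        self._unlink(node)
        del self.table[node.key]

    def _unlink(self, node):
        if node.prev is not None:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next is not None:
            node.next.prev = node.prev
        else:
            self.tail = node.prev
```

For example, with capacity 2, inserting `a` then `b`, re-reading `a`, and then inserting `c` evicts `b`: the hand skips `a` (visited, bit cleared) and retires the unvisited `b`.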