USENIX, the long-running computing systems association, also publishes a magazine called ;login:. Today the magazine’s editor, security consultant Rik Farrow, stopped by Slashdot to share some new research. rikfarrow writes:
Caching means using faster memory to store frequently requested data, and the most commonly used algorithm for deciding which items to discard when the cache is full is Least Recently Used [or “LRU”]. The researchers behind a new algorithm called SIEVE have come up with a more efficient and scalable eviction method, and converting an LRU implementation to SIEVE takes just a few lines of code.
Just like a sieve, it sifts through objects (using a pointer called a “hand”) to “filter out unpopular objects and retain the popular ones,” with popularity based on a single bit that tracks whether a cached object has been visited:
As the “hand” moves from the tail (the oldest object) to the head (the newest object), objects that have not been visited are evicted… During the subsequent rounds of sifting, if objects that survived previous rounds remain popular, they will stay in the cache. In such a case, since most old objects are not evicted, the eviction hand quickly moves past the old popular objects to the queue positions close to the head. This allows newly inserted objects to be quickly assessed and evicted, putting greater eviction pressure on unpopular items (such as “one-hit wonders”) than LRU-based eviction algorithms.
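Rendered as code, the scheme needs only a visited bit per object, a doubly linked queue, and the roving hand. Here is a minimal Python sketch under those assumptions, with newest objects at the head; the names (SieveCache, _Node, and so on) are illustrative, not taken from the paper’s reference implementation:

```python
# A minimal SIEVE sketch (illustrative, not the authors' reference code).
# Newest objects sit at the head of a doubly linked queue, oldest at the
# tail; the "hand" sweeps from tail toward head looking for a victim.

class _Node:
    __slots__ = ("key", "value", "visited", "prev", "next")

    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.visited = False  # the single popularity bit
        self.prev = None      # toward the head (newer)
        self.next = None      # toward the tail (older)


class SieveCache:
    def __init__(self, capacity):
        assert capacity > 0
        self.capacity = capacity
        self.table = {}    # key -> node, for O(1) lookup
        self.head = None   # newest object
        self.tail = None   # oldest object
        self.hand = None   # eviction pointer; None means "start at the tail"

    def get(self, key):
        node = self.table.get(key)
        if node is None:
            return None        # miss
        node.visited = True    # lazy promotion: one bit flip, no reordering
        return node.value

    def put(self, key, value):
        node = self.table.get(key)
        if node is not None:   # updating in place counts as a visit
            node.value = value
            node.visited = True
            return
        if len(self.table) >= self.capacity:
            self._evict()
        node = _Node(key, value)
        node.next = self.head  # insert at the head (newest end)
        if self.head is not None:
            self.head.prev = node
        self.head = node
        if self.tail is None:
            self.tail = node
        self.table[key] = node

    def _evict(self):
        # Sift from the hand (or the tail) toward the head: visited
        # objects survive this round and have their bit cleared; the
        # first unvisited object encountered is the victim.
        node = self.hand or self.tail
        while node.visited:
            node.visited = False
            node = node.prev or self.tail  # wrap past the head to the tail
        self.hand = node.prev  # resume here on the next eviction
        # Unlink the victim from the queue.
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev
        del self.table[node.key]
```

Note the contrast with LRU: a hit never touches the list, so there is no need for the per-hit locking that makes LRU hard to scale across threads.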
It’s an example of “lazy promotion and quick demotion”: popular objects are retained with minimal effort, while quick demotion is “critical because most objects are not reused before eviction.”
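A hypothetical walkthrough with the sketch above shows both halves of that slogan: a hit costs a single bit flip (lazy promotion), and the next eviction’s sift spares the visited object while discarding an untouched older one (quick demotion):

```python
cache = SieveCache(capacity=3)
for k in ("a", "b", "c"):
    cache.put(k, k.upper())    # queue, head to tail: c, b, a

cache.get("a")                 # hit: sets a's visited bit; nothing moves
cache.put("d", "D")            # full: the hand sifts from the tail, clears
                               # a's bit and spares it, evicts unvisited "b"
assert cache.get("a") == "A"
assert cache.get("b") is None  # the never-revisited object is gone
```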
Across 1559 traces (covering 247,017 million requests to 14,852 million objects), they found SIEVE reduces the miss ratio (the fraction of requests where the needed data isn’t already in the cache) by more than 42% on 10% of the traces, with a mean reduction of 21%, compared to FIFO. (It was also faster and more scalable than LRU.)
“SIEVE not only achieves better efficiency, higher throughput, and better scalability, but it is also very simple.”