AFAIK, the existing LruBlockCache is not exactly an LRU cache.
It uses a more advanced algorithm to avoid cache thrashing during scan ops by
dividing the cache into three sub-caches (one for newly added blocks, one for promoted blocks, and one for in-memory blocks).
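To make that concrete, here is a minimal, hypothetical sketch of the idea in Java. The class, segment names, and sizes below are illustrative only and are not the actual LruBlockCache code (the real cache tracks sizes in bytes and has a more elaborate eviction policy); the sketch just shows how splitting the cache into three LRU segments keeps a big scan from evicting already-useful blocks.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative sketch only -- not the actual LruBlockCache source. Three LRU
    // sub-caches with separate quotas: a large scan can only churn the "single"
    // segment and cannot evict blocks that were promoted or pinned in-memory.
    public class SegmentedLruSketch<K, V> {

        // Simple size-bounded LRU built on an access-ordered LinkedHashMap.
        private static final class LruSegment<K, V> extends LinkedHashMap<K, V> {
            private final int maxEntries;
            LruSegment(int maxEntries) {
                super(16, 0.75f, true);          // true = access order
                this.maxEntries = maxEntries;
            }
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;      // evict the LRU entry of this segment only
            }
        }

        private final LruSegment<K, V> single;   // blocks read once (e.g. by a scan)
        private final LruSegment<K, V> multi;    // blocks promoted after a second access
        private final LruSegment<K, V> inMemory; // blocks from "in-memory" column families

        public SegmentedLruSketch(int singleSize, int multiSize, int inMemorySize) {
            this.single = new LruSegment<>(singleSize);
            this.multi = new LruSegment<>(multiSize);
            this.inMemory = new LruSegment<>(inMemorySize);
        }

        public void cacheBlock(K key, V block, boolean inMemoryFamily) {
            if (inMemoryFamily) {
                inMemory.put(key, block);
            } else {
                single.put(key, block);          // new blocks always start in the single segment
            }
        }

        public V getBlock(K key) {
            V block = inMemory.get(key);
            if (block != null) {
                return block;
            }
            block = multi.get(key);
            if (block != null) {
                return block;
            }
            block = single.remove(key);
            if (block != null) {
                multi.put(key, block);           // second access promotes the block
            }
            return block;
        }
    }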
Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: [email protected]
________________________________________
From: Nicolas Spiegelberg [[email protected]]
Sent: Tuesday, February 21, 2012 9:01 AM
To: [email protected]
Subject: Re: LIRS cache as an alternative to LRU cache
We had the author of LIRS come to Facebook last year to talk about his
algorithm and general benefits. At the time, we were looking at
increasing block cache efficiency. The general consensus was that it
wasn't an exponential perf gain, so we could get bigger wins from
cache-on-write intelligence, in-memory data compression techniques, and
adding stats so we could understand how to tune the existing LRU
algorithm. I still think that these three goals are more important at the
moment because LIRS would be a decent bit of code for only an incremental
gain. It's probably something to revisit in a year or two.
Nicolas
On 2/21/12 8:26 AM, "[email protected]" wrote:

Hi,
Shall we experiment with a low inter-reference recency set (LIRS) replacement
policy to see if the block cache becomes more effective?
Cheers