
Supported key eviction policies


Hi, I can't find information about the eviction policies that are used when maxmemory is reached. Is LRU supported?

KrzysztofBranicki avatar Mar 27 '24 13:03 KrzysztofBranicki

Garnet supports a loose LRU policy. Entries get pushed toward the immutable region as newer entries are added to the mutable region. Eventually, they are pushed out of the immutable region to disk (if configured) or dropped. The memory size configuration helps control the sizes of the mutable and immutable regions.

Expiry times can also be set on individual entries using the command options that support them.
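For example, a minimal sketch assuming a Garnet server reachable on localhost:6379 and the redis-py client; the key names and TTL values are purely illustrative:

```python
# A minimal sketch, assuming a Garnet server on localhost:6379 and the
# redis-py client; key names and TTL values are illustrative.
import redis

r = redis.Redis(host="localhost", port=6379)

# SET with the EX option stores the value and attaches a 300-second expiry.
r.set("session:123", "some-value", ex=300)

# EXPIRE can also attach or refresh an expiry on an existing key.
r.expire("session:123", 600)

print(r.ttl("session:123"))  # remaining time-to-live, in seconds
```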

yrajas avatar Mar 27 '24 20:03 yrajas

I'm not sure I understood. Assuming maxmemory is reached and no disk storage is configured, will the first keys to be dropped be the ones that are oldest in terms of creation time, or oldest in terms of when they were last accessed/retrieved?

KrzysztofBranicki avatar Mar 28 '24 08:03 KrzysztofBranicki

Here are some details on Garnet's caching policies:

  • The data is stored in a hybrid log with 90% of the log marked as mutable (90% is the default, overridable using --mutable-percent).
  • When you add a new record R to the cache, it starts at the tail (in the mutable region). Updates to R in the mutable region happen "in place".
  • As other new records are added to the tail, the record R "travels" through the log, until it eventually reaches the immutable region in memory.
  • Updates in the immutable region will move the record back to the tail (simulating LRU with second chance, for writes).
  • Reads in the immutable region, however, do not by default move the record back to the tail, so a record that is only read behaves as FIFO in the cache.
  • If you want LRU with second chance with respect to reads, you can set the flag --copy-reads-to-tail. This will cause reads in the immutable region to get copied to the tail, thereby retaining the read-hot records in the cache.

Experiments have shown that these strategies are extremely good at retaining hot data in the cache and provide hit rates close to LRU, without the overhead of maintaining a true LRU.
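As a rough sketch of how the options above might be combined when starting the server: only --mutable-percent and --copy-reads-to-tail are taken from this thread; the GarnetServer executable name, the 80% value, and launching it from Python are assumptions for illustration, not the official setup.

```python
# A rough sketch, not the official way to launch Garnet: only
# --mutable-percent and --copy-reads-to-tail come from this thread;
# the "GarnetServer" executable name and the 80 value are assumptions.
import subprocess

server = subprocess.Popen([
    "GarnetServer",             # assumed to be on PATH
    "--mutable-percent", "80",  # shrink the mutable region from the 90% default
    "--copy-reads-to-tail",     # copy immutable-region read hits back to the tail,
])                              # giving read-hot records a "second chance" as well
```

With --copy-reads-to-tail enabled, a record that keeps getting read while in the immutable region is copied back to the tail, so read-hot records stay in memory much as they would under LRU.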

badrishc avatar Apr 01 '24 02:04 badrishc

@badrishc thanks for the detailed answer.

KrzysztofBranicki avatar Apr 02 '24 11:04 KrzysztofBranicki