Allow per-entry TTL
This is a low-priority suggestion, but I figured I should offer it. I have a use case in which each entry I'm caching may have a different TTL. Currently I set a cache-level TTL that is larger than any individual TTL, and then I actually cache a struct which holds my cacheable thing along with its individual TTL. Because of this I disable the background process that cleans up expired entries, since entries expire for me much earlier than the cache perceives them as expired. I then need to do some additional manual checking/refreshing based on my own stashed expiration.
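Simplified, the struct I cache looks something like this (the payload field is just illustrative; the names match the snippet below):

import "time"

// User is the struct I actually cache: the cacheable thing itself plus
// its individual expiration. A nil ExpireAt means no per-entry TTL.
type User struct {
    Name     string     // illustrative payload
    ExpireAt *time.Time // individual TTL, checked manually on read
}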
entry, err := c.client.GetOrFetch(ctx, key, do.Call)
if err != nil {
    return nil, err
}
// If the entry has no expiration, return it.
if entry.ExpireAt == nil {
    return &entry, nil
}
// Since sturdyc only supports a global TTL, we need to check the per-ID TTL
// to see if it has expired. If it has, we manually refresh.
// NOTE: sturdyc.Passthrough is not used here because we do not want to fall
// back on the cache if the upstream fails (as it would return the expired entry again).
if time.Now().After(*entry.ExpireAt) {
    return c.refresh(ctx, key, do)
}
return &entry, nil
// refresh refreshes the cache for the given key by calling the given fetch function.
// If the function returns an error, the key is deleted from the cache and the
// error is returned. This is treated as a cache miss. If the function returns
// an entry, the entry is stored in the cache and returned. This is treated as a
// refresh.
func (c *cache) refresh(ctx context.Context, key string, do Execution) (*User, error) {
    entry, err := do.Call(ctx)
    if err != nil {
        // We still want to purge the expired entry.
        c.client.Delete(key)
        metrics.ResolversCacheMissTotal.Inc()
        return nil, errors.Wrap(err, "do.Call")
    }
    c.client.Set(key, entry)
    metrics.CacheRefreshTotal.Inc()
    return &entry, nil
}
Describe the solution you'd like

It would be awesome if we could set a distinct TTL per entry. Perhaps the cache would use the global TTL by default unless per-entry TTLs are enabled via an option at cache instantiation (a hypothetical sketch of the opt-in follows below). This has side effects on the forced eviction logic, however. Since all entries currently share the same TTL, the eviction partitioning effectively operates on the age of the entry; i.e. it evicts the oldest-inserted entries. If you allow per-entry TTLs, the eviction logic would instead evict the earliest-to-expire entries. So you could insert entry A with a 1-year TTL, wait 6 months, insert entry B with a 1-month TTL, and then force an eviction: entry B would be evicted first, even though it was just inserted, because it expires sooner.
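To make the opt-in concrete, here is a hypothetical sketch; sturdyc.WithPerEntryTTL does not exist today, and the other constructor arguments mirror the README:

capacity := 10_000
numShards := 10
defaultTTL := 2 * time.Hour // used whenever an entry has no TTL of its own
evictionPercentage := 10

// WithPerEntryTTL is the hypothetical opt-in: without it, the client would
// keep today's behavior of a single global TTL.
client := sturdyc.New[User](capacity, numShards, defaultTTL, evictionPercentage,
    sturdyc.WithPerEntryTTL(),
)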
To avoid the eviction problem above, you could instead add an additional accessedAt timestamp field here: https://github.com/viccon/sturdyc/blob/d368220e3c34c9b257ee613574eddf2232fee95c/shard.go#L13
This would allow you to set expiresAt on a per-entry basis. You would update accessedAt when the entry is inserted, refreshed, or returned from the cache, and make the forced eviction logic operate on accessedAt. That makes forced eviction behave like LRU (least recently used) eviction, which is fairly standard.
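Roughly (the existing fields here are an approximation of what's in shard.go, not verbatim):

// Approximation of the per-shard entry with the two timestamps it would need.
type entry[T any] struct {
    key        string
    value      T
    expiresAt  time.Time // set per entry rather than derived from the global TTL
    accessedAt time.Time // new: bumped on insert, refresh, and every cache hit
}

The forced eviction pass would then sort or partition on accessedAt instead of insertion age.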
Hi, I think this sounds like a cool feature. Previously, I've been setting up separate cache clients with different TTLs, but I can see how that approach isn't really feasible when the TTLs of the objects you're storing vary a lot. However, I'm not sure how this could be added without affecting the existing API. The GetOrFetch and GetOrFetchBatch methods are not aware of the TTLs, and there is currently no way for the consumer to specify what the cache times should be.
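For reference, the workaround I'm describing is just one client per TTL tier, e.g.:

// Two clients that differ only in their TTL
// (arguments: capacity, shards, TTL, eviction percentage).
shortLived := sturdyc.New[User](10_000, 10, time.Minute, 10)
longLived := sturdyc.New[User](10_000, 10, 24*time.Hour, 10)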
I, too, would love this feature. We have a "modulith" architecture, which allows running n services within a single container but can also distribute them based on scale and load. The cache would be created once for the whole application, but each module (service) would have its own TTL requirements (maybe even more fine-grained, based on resource type).
One option we have here would be to build multiple caches, one per resource type, to maintain multiple TTLs, but it would fit more cleanly into our architecture to configure a single cache once and provide it to each service module.
I haven't dug too deeply into sturdyc's API, but would adding a variadic options parameter to the GetOrFetch/GetOrFetchBatch functions avoid breaking it? It could be a good place to expand functionality in the future as well, as sketched below.
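Something along these lines (all of the names here are hypothetical; none exist in sturdyc today):

// A trailing variadic parameter keeps existing call sites compiling unchanged:
// client.GetOrFetch(ctx, key, fetchFn) would still work, while callers could
// opt in with client.GetOrFetch(ctx, key, fetchFn, sturdyc.WithTTL(5*time.Minute)).

type CallOption func(*callConfig)

type callConfig struct {
    ttl *time.Duration // nil means "fall back to the client-level TTL"
}

func WithTTL(ttl time.Duration) CallOption {
    return func(c *callConfig) { c.ttl = &ttl }
}

// The signature would only grow at the end:
// func (c *Client[T]) GetOrFetch(ctx context.Context, key string,
//     fetchFn FetchFn[T], opts ...CallOption) (T, error)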
Thanks for writing this package; it's really nifty. I look forward to using it in a project soon!