Greg Dean
Coming from here: https://github.com/DmitryTsepelev/graphql-ruby-fragment_cache#limitations Is there a proposed solution? We'd be willing to work on a PR, but we're not exactly sure where to start.
`after_all_transitions` fires in 4.12. IMO this is not desirable: the example using `log_status_change` is basically broken.
So, did anything ever come of this? Interfacing with an array is less than ideal.
We've run into the same limitation. What are people doing as a workaround? Are there any alternatives?
FWIW: we're seeing similar behavior. Isn't there a race condition?

1. `FragmentCache.cache_store.read(cache_key)` returns nil for an uncached value.
2. Someone else populates the cache key.
3. `FragmentCache.cache_store.exist?(cache_key)` returns true (event...
We've been doing some testing in production with the following:

```ruby
def value_from_cache
  return nil unless FragmentCache.cache_store.exist?(cache_key)

  FragmentCache.cache_store.read(cache_key).tap do |cached|
    return NIL_IN_CACHE if cached.nil? && FragmentCache.cache_store.exist?(cache_key)
  end
end
```

It...
The workaround isn't perfect. There is still a chance a race condition could cause a misread from the cache (specifically when an eviction occurs after the initial read, and then...
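To make the remaining race concrete, here's a minimal sketch of the interleaving we're worried about. `FakeStore` is a hypothetical stand-in for `FragmentCache.cache_store` (the eviction is done by hand to force the timing):

```ruby
# Sketch of the remaining race: an eviction lands between the first
# exist? check and the read, then another process repopulates the key,
# so the workaround reports NIL_IN_CACHE for a key that actually holds
# a real value. FakeStore is illustrative, not the library's store.
NIL_IN_CACHE = Object.new

class FakeStore
  def initialize
    @data = {}
  end

  def write(key, value)
    @data[key] = value
  end

  def read(key)
    @data[key]
  end

  def exist?(key)
    @data.key?(key)
  end

  def delete(key)
    @data.delete(key)
  end
end

store = FakeStore.new
key = "fragment:1"

store.write(key, "real value")
store.exist?(key)              # workaround step 1: true, so we proceed

store.delete(key)              # interleaved eviction by the cache
cached = store.read(key)       # step 2: nil, because the key was just evicted
store.write(key, "new value")  # another process repopulates the key

# Step 3: cached.nil? && exist? is now true, so the workaround returns
# NIL_IN_CACHE even though the cache holds "new value" -- a misread.
result = cached.nil? && store.exist?(key) ? NIL_IN_CACHE : cached
```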
The cache returns the wrong value, not an outdated value. I don't believe documentation alone is sufficient; in a moderately concurrent deployment, this is a serious issue.
My vote would be for one of the following: 1) Just take the hit on the extra read (as we've done in our workaround). This is a fairly pragmatic solution. Maybe start here...
@tsugitta I agree, this is the most correct fix, but it seemed like a big change 🤷
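For anyone following along: if the "most correct fix" here means wrapping values before they hit the store so that a single read can distinguish "nil was cached" from "nothing cached" (our reading of the proposal, not confirmed in this thread), a minimal sketch might look like this. `CacheEntry` and `WrappingStore` are hypothetical names, not the library's actual API:

```ruby
# Hypothetical sketch: wrap every value at write time so one read
# answers both "is it cached?" and "what is the value?" -- no second
# exist? check, so the race above can't happen.
CacheEntry = Struct.new(:value)

class WrappingStore
  def initialize
    @data = {}
  end

  def write(key, value)
    @data[key] = CacheEntry.new(value) # nil becomes CacheEntry(nil)
  end

  # Returns [hit, value] from a single read of the backing store.
  def read(key)
    entry = @data[key]
    entry ? [true, entry.value] : [false, nil]
  end
end

store = WrappingStore.new
store.write("k", nil)
hit, value = store.read("k")   # hit is true, value is nil: a cached nil
miss, _ = store.read("absent") # miss is false: genuinely uncached
```

The trade-off is that every cached payload grows by one wrapper object, which is presumably why it looked like a big change.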