Fragile caching mechanism
Describe the bug
Split's caching system isn't thread- or process-safe: when the request ends, the cache remains set in the worker process.
This is due to Split using class instance variables to store the cache.
In a setup where Puma runs multiple worker processes, this is a problem: if one process removes or resets an experiment, the cache is only cleared in that single process, not in the other processes that still hold the cached values.
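For illustration, here's a minimal sketch of the kind of class-instance-variable cache described above (the class and method names are made up, not Split's actual internals). The `@cache` hash lives for the lifetime of the process, not of a request:

```ruby
# Illustrative only -- not Split's real source. A cache kept in a class
# instance variable like @cache below is shared by every request a given
# process serves, and each Puma worker process holds its own private copy.
class ExperimentCache
  def self.fetch(key)
    @cache ||= {}
    @cache.fetch(key) { |k| @cache[k] = yield }
  end

  def self.clear
    @cache = nil
  end
end

# Worker A caches a variant during one request...
ExperimentCache.fetch("button_color") { "red" }   # => "red"
# ...and a reset/removal performed in worker B only clears worker B's copy,
# so worker A keeps serving the stale "red" variant until it restarts.
```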
To Reproduce
Using Puma, Split with the cache enabled, and two browser windows:
- Run two requests simultaneously in two windows that query the same experiment.
- Both should cache the same variant.
- Now set a different variant as the winner in the Split console.
- Reload both windows.
- You should see one window still running the non-winner variant.
Suggestions
- Add an option to make Split's cache request-scoped (similar to Rails' CurrentAttributes), so the cache only lives for the duration of a single request.
- Add a way to use CurrentAttributes for the caching mechanism instead of the current class-instance-variable approach (a rough sketch of what that could look like follows this list).
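As a rough, hypothetical sketch of the CurrentAttributes-based suggestion (the class and method names here are invented, not part of Split):

```ruby
require "active_support/current_attributes"

# Hypothetical request-scoped cache. ActiveSupport::CurrentAttributes is
# reset by Rails around every request (via the executor), so nothing cached
# here can leak into the next request or survive an experiment reset.
class SplitRequestCache < ActiveSupport::CurrentAttributes
  attribute :experiments

  # Memoize a value for the duration of the current request only.
  def fetch(key)
    self.experiments ||= {}
    experiments.fetch(key) { |k| experiments[k] = yield }
  end
end

# Possible usage from a controller or helper (illustrative):
#   SplitRequestCache.fetch("button_color") { ab_test(:button_color, "red", "blue") }
```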
Thanks for bringing this up! Yes, the local caching mechanism should work over a single request lifecycle. I think we can improve this by hooking the cache onto the Split::Helper instead.
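For what it's worth, one way the "hook the cache onto Split::Helper" idea could look is prepending a small memoizing module in front of ab_test: since a controller instance only lives for one request, an instance-variable cache on it is naturally request-scoped. This is just a sketch under those assumptions (the module name is made up, and it glosses over ab_test's exact signature and block form):

```ruby
# Hypothetical sketch, not Split's actual implementation. Because a Rails
# controller instance exists for exactly one request, memoizing on it gives
# a request-scoped cache without any process-wide state.
module RequestScopedSplitCache
  def ab_test(metric_descriptor, *alternatives)
    @__split_request_cache ||= {}
    @__split_request_cache.fetch(metric_descriptor) do |key|
      # Fall through to Split's real ab_test and remember the chosen variant
      # for the rest of this request.
      @__split_request_cache[key] = super
    end
  end
end

# class ApplicationController < ActionController::Base
#   include Split::Helper
#   prepend RequestScopedSplitCache
# end
```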
The ideal solution would be to address the root cause of the performance impact: the number of Redis calls made per request. But that needs a lot more work.
We solved the performance issue we were having not by using Split's built-in cache but by using a request cache such as Rails' "CurrentAttributes". Maybe it's worth implementing/using a request cache instead.