Logging/Events
Migrated issue, originally created by jvanasco (jvanasco)
This may be too much of a performance hit, if so -- it's an inappropriate idea.
I needed to create a logging/auditing trail of Dogpile interaction for a Pyramid application -- to track where there were cache hits/misses and what keys are being requested.
The only way I could handle this was to create a ProxyBackend that intercepted select calls, and handled the auditing.
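To make the intercept pattern concrete: dogpile's `ProxyBackend` sits between the region and the real backend, so overriding `get`/`set` is enough to record hits and misses. The sketch below is self-contained (it stubs out the backend and the `NO_VALUE` sentinel rather than importing `dogpile.cache`); the class names `DictBackend` and `AuditingProxy` are hypothetical, not part of any library:

```python
import logging

log = logging.getLogger(__name__)

# Stand-in for dogpile.cache.api.NO_VALUE, the "cache miss" sentinel
NO_VALUE = object()


class DictBackend:
    """Stub backend; a real setup would use e.g. dogpile's Redis backend."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key, NO_VALUE)

    def set(self, key, value):
        self._store[key] = value


class AuditingProxy:
    """Mimics dogpile.cache.proxy.ProxyBackend: wraps a backend and
    intercepts get/set to record an audit trail of hits and misses."""

    def __init__(self, proxied):
        self.proxied = proxied
        self.events = []  # (operation, key, hit_or_None) tuples

    def get(self, key):
        value = self.proxied.get(key)
        hit = value is not NO_VALUE
        self.events.append(("get", key, hit))
        log.debug("cache %s for key %r", "HIT" if hit else "MISS", key)
        return value

    def set(self, key, value):
        self.events.append(("set", key, None))
        self.proxied.set(key, value)


proxy = AuditingProxy(DictBackend())
proxy.get("user:1")           # recorded as a miss
proxy.set("user:1", "alice")
proxy.get("user:1")           # recorded as a hit
```

With the real library, the same class would subclass `dogpile.cache.proxy.ProxyBackend` and be passed to the region via `wrap`.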
That's not so bad in itself -- I've been using this for months. However, I couldn't find a good way to deal with tracking the region when I tried to release the code today. I thought about using a factory pattern to create a series of "region-customized" loggers, but that would limit how the logger is utilized (a user couldn't simply use a config-file-based setup; there would be a bit of manual work needed).
I started thinking about the SqlAlchemy events model, and wondered if that would be useful in dogpile -- or just overkill.
this is worth thinking about for sure. sqlalchemy's model is way too complicated though. can we supply the ProxyBackend as a built-in, or as a recipe?
cc @jvanasco
The finalized code is available on:
GitHub: https://github.com/jvanasco/pyramid_debugtoolbar_dogpile
PyPI: https://pypi.org/project/pyramid_debugtoolbar_dogpile/
pip install pyramid_debugtoolbar_dogpile
I think I gave up on region tracking, though it might be doable now.
Putting this here in case it is helpful for anyone else. We have a library (oslo.cache) that sits in between the application caching data and dogpile.cache. We have a proxy that supplies additional information about the keys, values, and status [0].
[0] https://opendev.org/openstack/oslo.cache/src/branch/master/oslo_cache/core.py#L64-L99
We could easily pull the oslo.cache (well, similar) code into the dogpile.cache repository to optionally be layered in when debugging.
We had issues that could only be solved by tracking the whole set/get chains.
The code I released is very similar; however, instead of going to the standard logging facility, the information is appended to a tracking object to be rendered on a debugging panel.
The reason I left this ticket open without suggesting a recipe is that, IMHO, a proper debugging system should be able to identify the region. The oslo and pyramid_debugtoolbar_dogpile solutions only have access to the API call and the key; I did some extra work to derive the Redis database (which should probably be in a try block, as it'll likely break on other backends).
Both of our solutions are invoked with a general pattern -- logging is enabled by wrapping the region with a base proxy class for "global" actions:
pyramid_debugtoolbar_dogpile:

```python
from dogpile.cache import make_region
from foo import DogpileLoggingProxy

cache_config = {}
...
cache_config['wrap'] = [DogpileLoggingProxy, ]
region = make_region()
region.configure_from_config(cache_config, '')
```
oslo:

```python
class _DebugProxy(proxy.ProxyBackend):
    ...

def configure_cache_region(conf, region):
    ...
    if conf.cache.debug_cache_backend:
        region.wrap(_DebugProxy)
```
BUT the base class has no idea what the region is. In a small project this may not be an issue, but I have projects with as many as 30+ regions. The reason I thought about sqlalchemy's event model is that a similar system in the core dogpile library would be able to emit a debug event that includes the region information, without any (potentially) breaking changes.
I thought about extending dogpile to try and annotate the object with region info, but that code was looking a bit too likely to introduce breaking changes. I could try that again now that I understand a lot more of dogpile... but I fear the "right" debugging system may not be doable with proxies.