Feature Request: coroutine/async support for memoize
I think it should be relatively easy to add async support for the memoize decorator, something equivalent to (but probably with some changes for compatibility reasons):
```python
...
if not inspect.iscoroutinefunction(func):
    # same as it is now:
    @ft.wraps(func)
    def wrapper(*args, **kwargs):
        "Wrapper for callable to cache arguments and return values."
        key = wrapper.__cache_key__(*args, **kwargs)
        result = self.get(key, default=ENOVAL, retry=True)
        if result is ENOVAL:
            result = func(*args, **kwargs)
            self.set(key, result, expire=expire, tag=tag, retry=True)
        return result
else:
    # just with async and await injected
    @ft.wraps(func)
    async def wrapper(*args, **kwargs):
        "Wrapper for callable to cache arguments and return values."
        key = wrapper.__cache_key__(*args, **kwargs)
        result = self.get(key, default=ENOVAL, retry=True)
        if result is ENOVAL:
            result = await func(*args, **kwargs)
            self.set(key, result, expire=expire, tag=tag, retry=True)
        return result
```
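The sketch above references diskcache internals (`self.get`, `self.set`, `ENOVAL`, `__cache_key__`) that aren't shown. Here is a self-contained toy version of the same dispatch idea, using a plain dict and a local sentinel as stand-ins for the cache (all names here are illustrative, not diskcache's API):

```python
import asyncio
import functools as ft
import inspect

ENOVAL = object()  # sentinel meaning "no cached value"

def memoize(cache):
    """Toy memoize: pick a sync or async wrapper based on the callable."""
    def decorator(func):
        def make_key(args, kwargs):
            # Hashable key from positional and keyword arguments.
            return (args, tuple(sorted(kwargs.items())))

        if not inspect.iscoroutinefunction(func):
            @ft.wraps(func)
            def wrapper(*args, **kwargs):
                key = make_key(args, kwargs)
                result = cache.get(key, ENOVAL)
                if result is ENOVAL:
                    result = func(*args, **kwargs)
                    cache[key] = result
                return result
        else:
            @ft.wraps(func)
            async def wrapper(*args, **kwargs):
                key = make_key(args, kwargs)
                result = cache.get(key, ENOVAL)
                if result is ENOVAL:
                    result = await func(*args, **kwargs)
                    cache[key] = result
                return result
        return wrapper
    return decorator

calls = []

@memoize({})
async def square(x):
    calls.append(x)  # record actual executions to observe cache hits
    return x * x

print(asyncio.run(square(4)), asyncio.run(square(4)), len(calls))
```

The second call returns the cached value without re-running the coroutine body, so `calls` records only one execution.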
This would go a long way toward simplifying the use of this decorator with coroutines (and might be a useful reference for building other coroutine-compatible decorators).
Clearly this must not cause syntax errors on older versions of Python. I think it would make sense to use the pre-3.5 syntax for coroutines (the asyncio.coroutine decorator applied to a regular generator function, rather than async def/await), and to simply disable the behavior on versions of Python where that import fails (e.g., replace the coroutine decorator with lambda x: x on an ImportError).
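A minimal version of that fallback shim might look like the following. (For context when reading this today: asyncio.coroutine was deprecated in Python 3.8 and removed in 3.11, so on modern Python the import fails and the async def path is the only one that matters.)

```python
try:
    # Generator-based coroutine decorator (available Python 3.4-3.10).
    from asyncio import coroutine
except ImportError:
    # On Pythons without it, degrade to a no-op so the module still
    # imports and the async behavior is simply disabled.
    coroutine = lambda func: func
```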
(I am willing to put in the effort to implement this and make it compatible with all supported versions of python, but before I do that I want to confirm a pull request implementing this would be of interest.)
This seems worthwhile. Could you create a pull request with test cases for the async scenario?
Seems like this issue (and its associated PR) has been idle for a while. Any chance this might be incorporated soon?
Probably not this month. I'd suggest copying the code out of the PR into an inherited class in the meantime.
I noticed that the docs suggest that when using the cache in an async context, one should use run_in_executor like this:
```python
loop = asyncio.get_running_loop()
future = loop.run_in_executor(None, cache.set, key, val)
result = await future
```
However, the code in the PR doesn't use run_in_executor. Is that intentional?
The docs predate the PR. The PR instead creates an async version of the decorator.
Thanks, Grant. I'm probably missing something here, but both approaches (the documented run_in_executor pattern and the PR's decorator) run inside an async function. I thought the point of wrapping the cache call in run_in_executor was that the call is blocking. Has anything in diskcache changed in that regard?
The currently published docs are correct. DiskCache is still blocking for now. I have not yet merged #196 . If you use the changes there then the executor is not necessary.
If I understand correctly, the use of run_in_executor is still desirable since set is a blocking call.
There's an example in the Python docs that shows a similar case of calling a blocking function: https://docs.python.org/3/library/asyncio-eventloop.html#executing-code-in-thread-or-process-pools
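In the same spirit as that Python-docs example, here is a small runnable sketch of the pattern from the diskcache docs, with a stand-in blocking function in place of `cache.set` (the names `blocking_set` and `main` are illustrative):

```python
import asyncio
import time

def blocking_set(key, val):
    """Stand-in for cache.set: simulate slow, blocking disk I/O."""
    time.sleep(0.01)
    return True

async def main():
    loop = asyncio.get_running_loop()
    # Offload the blocking call to the default ThreadPoolExecutor so the
    # event loop stays free to run other tasks while it completes.
    result = await loop.run_in_executor(None, blocking_set, "key", "val")
    return result

print(asyncio.run(main()))
```

Note that `run_in_executor` takes the callable and its positional arguments separately; keyword arguments would need `functools.partial`.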