Cache for QnA Maker queries
Is your feature request related to a problem? Please describe.
I notice time and again that QnA Maker tends to be... a bit slow for my tastes, especially at the small scale I work with, which doesn't justify a bigger setup. While reading about functools' lru_cache decorator, I got the idea of adding a cache to the QnA Maker queries.
Describe the solution you'd like
Some way to optionally activate caching when instantiating a QnAMaker object (or QnAMakerDialog), and have it work transparently. A nice option would be the ability to use an external cache (Redis, for example), so it persists beyond a particular instance of the bot.
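For illustration, a minimal sketch of what such an opt-in layer could look like, assuming an in-process dict as the backing store (a Redis client could be swapped in behind the same key scheme). `CachedQnAMaker` and `cache_ttl` are hypothetical names, not part of the SDK:

```python
import time
from botbuilder.ai.qna import QnAMaker, QnAMakerEndpoint
from botbuilder.core import TurnContext


class CachedQnAMaker(QnAMaker):
    """Hypothetical opt-in cache over get_answers(), keyed on the utterance."""

    def __init__(self, endpoint: QnAMakerEndpoint, cache_ttl: float = 300.0, **kwargs):
        super().__init__(endpoint, **kwargs)
        self._cache = {}  # utterance -> (expires_at, results)
        self._cache_ttl = cache_ttl

    async def get_answers(self, context: TurnContext, options=None, **kwargs):
        # Key on the utterance text rather than the per-turn TurnContext object.
        key = (context.activity.text or "").strip().lower()
        hit = self._cache.get(key)
        if hit and hit[0] > time.monotonic():
            return hit[1]  # cache hit: skip the round trip to the service
        results = await super().get_answers(context, options, **kwargs)
        self._cache[key] = (time.monotonic() + self._cache_ttl, results)
        return results
```

Since the class is only instantiated with an explicit TTL, the behavior stays strictly opt-in, which also addresses the logging/metrics concern below.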
Describe alternatives you've considered
Memoization is not easily implementable, since QnAMaker.get_answers() takes a TurnContext object: a fresh context is constructed for every turn, so there is no stable, hashable key to memoize on.
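A rough sketch of the obstacle, assuming the standard functools decorator; `query_kb` is a hypothetical helper, not an SDK call:

```python
from functools import lru_cache

# Anti-pattern: a fresh TurnContext is built every turn, so an identity-keyed
# cache never hits; lru_cache on a coroutine function also stores the coroutine
# object itself, which cannot be awaited a second time.
#
# @lru_cache(maxsize=128)
# async def cached_get_answers(turn_context): ...

# What *can* be memoized is a plain function of the utterance string:
@lru_cache(maxsize=256)
def query_kb(question: str):
    ...  # hypothetical synchronous call to the QnA Maker REST endpoint
```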
Additional context
I've checked the sources and couldn't find anything of this nature.
This feature should not be turned on by default, since it would break any logging/metrics/statistics that assume no caching on the bot, and caching in general can hide other issues.
The QnAMaker class accepts an aiohttp.ClientSession parameter, which can be used to configure an HTTP/HTTPS proxy (see the docs).
It appears to support both the HTTP_PROXY/HTTPS_PROXY environment variables and explicit proxy configuration. You could put a caching proxy such as Varnish in front of the service to achieve this transparently.
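For reference, a sketch of that wiring, assuming the constructor keyword is `http_client` (worth verifying against your SDK version) and that a caching proxy is reachable via the standard proxy environment variables:

```python
import aiohttp
from botbuilder.ai.qna import QnAMaker, QnAMakerEndpoint

endpoint = QnAMakerEndpoint(
    knowledge_base_id="<kb-id>",
    endpoint_key="<endpoint-key>",
    host="https://<your-resource>.azurewebsites.net/qnamaker",
)


async def make_qna() -> QnAMaker:
    # trust_env=True makes aiohttp honor HTTP_PROXY/HTTPS_PROXY, so requests
    # can be routed through a caching proxy (e.g. Varnish) without bot changes.
    session = aiohttp.ClientSession(trust_env=True)
    return QnAMaker(endpoint, http_client=session)
```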
@tracyboehrer: Is there a milestone we can slot this into?
@joshgummersall I'm not sure if that helps with what I want.
My point was that I wanted to be able to cache the answers to QnA queries from the bot to QnA Maker, in particular those triggered by prompts (which always present the same options).
I could hardcode those in the bot, or put them in a JSON file loaded at runtime, but that misses the whole point of using QnA Maker.
For the scale I'm going for, I'd much rather have the functionality QnA Maker provides built into the bot itself, but I don't think that option is on the table at this point.
Ah - that makes sense; I can see why you would not want to re-query QnA Maker for prompts like that. I would defer to @tracyboehrer and/or @axelsrz on the next steps.