Controller worker thread times out during `config:set`
When we run `config:set` on an app with 30-40 pods, the controller worker thread times out because the operation takes more than 20 minutes (the worker thread's default timeout), leaving the cluster in an unstable state with pods from both releases.
Tagging at rc1 since this is something we need to fix before we cut a stable release.
Ping @helgi: has this been fixed recently?
No, and it won't be done in RC. This isn't a Kubernetes problem; it's the fact that we try to execute the entire operation within the gunicorn server's timeout, which we can't extend much further without causing resource contention on the RC=1 controller setup. We'd need to move to background jobs to fix this properly.
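For illustration, the background-job approach could look roughly like the sketch below: the HTTP handler enqueues the slow rollout and returns immediately, so the gunicorn worker is freed long before its timeout can fire. All names here (`apply_config`, `config_set_view`) are hypothetical, not the controller's actual API.

```python
# Sketch only: offload the long-running config:set to a background
# executor instead of blocking inside the gunicorn request cycle.
from concurrent.futures import ThreadPoolExecutor
import time

executor = ThreadPoolExecutor(max_workers=4)

def apply_config(app, values):
    # Placeholder for the slow rolling update of 30-40 pods.
    time.sleep(0.1)
    return {"app": app, "applied": values}

def config_set_view(app, values):
    # Enqueue and hand back a handle right away; the caller polls
    # for completion instead of waiting synchronously.
    return executor.submit(apply_config, app, values)

f = config_set_view("myapp", {"FOO": "bar"})
print(f.result()["applied"]["FOO"])
```

In a real deployment this would be a proper job queue with persistent state rather than an in-process thread pool, so the job survives a controller restart.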
One thing that has helped is doing deploys in batches, by default rolling as many pods at a time as there are available nodes, but that only mitigates the issue in some scenarios.
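A minimal sketch of that batching idea, assuming the batch size is capped at the number of available nodes (helper names are illustrative, not the controller's real code):

```python
# Roll pods in groups no larger than the available node count,
# so each batch can schedule at once instead of queueing.
def batches(pods, batch_size):
    for i in range(0, len(pods), batch_size):
        yield pods[i:i + batch_size]

pods = [f"pod-{n}" for n in range(35)]
available_nodes = 10
plan = list(batches(pods, available_nodes))
print(len(plan), len(plan[0]), len(plan[-1]))  # 4 batches: 10, 10, 10, 5
```

This shortens each individual wait, but the total rollout time is unchanged, which is why it only mitigates rather than fixes the timeout.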
@kmala are you still able to reproduce this in your environment with workflow-dev? If it's just a matter of waiting for all the pods to come up, then perhaps we should eventually rethink this deployment strategy: the old way of destroying everything and starting fresh worked for 99% of our use cases, including this one. Graceful rolling deploys are nice, but if they leave us in an unstable state whenever we're dealing with a larger number of jobs, perhaps we should go back to square one.
This situation should be improved by the current controller's batching operation, although probably not fixed definitively.
This issue was moved to teamhephy/controller#66