[ENHANCE] Coalesce Rollouts When Both Deployment and ConfigMap Are Updated
Is your feature request related to a problem? Please describe.
We've noticed that when both a Deployment and its associated ConfigMap are updated in the same change, Reloader triggers two separate rollouts. This causes an unnecessary extra redeployment and slightly increases downtime and resource usage.
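To make the setup concrete, here is a minimal sketch (names, image tag, and values are illustrative, not our actual manifests): a Deployment watched by Reloader via the `reloader.stakater.com/auto` annotation, consuming a ConfigMap templated in the same chart, with both changing in one `helm upgrade`.

```yaml
# Illustrative only; names and values are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  APP_SETTING: "value"                   # changed by the same helm upgrade
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    reloader.stakater.com/auto: "true"   # Reloader rolls this Deployment when the ConfigMap changes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.2.3            # also changed by the same helm upgrade
          envFrom:
            - configMapRef:
                name: my-app-config
```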
Describe the solution you'd like
When both resources are updated as part of the same event (in our case helm upgrade), Reloader should detect this and coalesce them into a single rollout.
Describe alternatives you've considered
Pausing updates to avoid simultaneous changes. This isn't always practical: the person deploying has to know that both resources are changing and remember to add the pause annotation in that case.
Additional context
We're using the latest Reloader and Helm versions.
Are both rollouts triggered by Reloader?
One theory is that one comes from Kubernetes when the Deployment is changed, and the other comes from Reloader when it detects the updated ConfigMap.
I'm talking about cases where both the Deployment and the ConfigMap are updated in the Helm chart. The ConfigMap is applied first (see this reference), which causes Reloader to roll the Deployment; then the Deployment itself is updated and rolled a second time.
Okay, I understand what is happening, thanks for the added context.
I think this feature would be tricky to implement for very little return tbh.
The problem is that, from within the cluster, we can't tell that a manifest is being deployed as part of a Helm chart, or which other templates will be applied right after it. It could potentially be solved by looking at annotations set by Helm or similar, but then what about people who deploy a Helm chart via ArgoCD, Flux, custom tool XYZ, etc.? We would have to support those as well, and each external deployment method brings considerable tech debt in the form of maintenance.
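To illustrate why this gets tool-specific quickly, here is a sketch of the kind of ownership metadata we would end up inspecting (values are illustrative; Helm 3 sets the label and annotations shown, while other tools use different markers):

```yaml
# Sketch of tool-specific release metadata on a deployed object.
# Helm 3 (release name/namespace are illustrative):
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: my-release
    meta.helm.sh/release-namespace: my-namespace
# Argo CD tracks resources differently (by default via the
# app.kubernetes.io/instance label, or a tracking annotation when
# annotation-based tracking is enabled), Flux uses yet another set, etc.
```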
I think that the issue of the Deployment becoming unavailable is better solved using a PodDisruptionBudget (https://kubernetes.io/docs/tasks/run-application/configure-pdb/) to ensure there are always enough pods alive to run the application.
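As a rough sketch (the selector and threshold are illustrative and need to match the actual Deployment):

```yaml
# Illustrative PodDisruptionBudget; adjust minAvailable and the selector
# to match the Deployment's labels and availability requirements.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
```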
That said, we are open to further discussion and ideas for how this could be done in a more generic fashion.
This is interesting input. I tend to agree that, given the current architecture of the operator, this is tricky to implement and would probably require major re-designs. However, I think this use case is very common and could be problematic for many users, so it still seemed worth raising for discussion and potentially thinking about a mitigation.