[BUG] Reloader doesn't pick up the changes when secret changes
Describe the bug
When a Deployment is annotated with `secret.reloader.stakater.com/reload: "test-secret"`, Reloader doesn't pick up changes to the secret.
To Reproduce
We use Kustomize, but a simple version would be:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: reloader
  namespace: reloader
spec:
  chart:
    spec:
      chart: reloader
      sourceRef:
        kind: HelmRepository
        name: reloader-repo
        namespace: cluster-config
  interval: 5m
  install:
    remediation:
      retries: 5
  upgrade:
    force: true
    cleanupOnFail: true
    remediation:
      retries: 3
      strategy: uninstall
  values:
    fullnameOverride: reloader
    reloader:
      watchGlobally: true
      autoReloadAll: true
      reloadStrategy: annotations
      logLevel: trace
      rbac:
        enabled: true
      serviceAccount:
        create: true
        name: reloader-svc-account
```
```shell
kubectl create secret generic test-secret --from-literal=testkey=testvalue -n test
kubectl annotate secret test-secret secret.reloader.stakater.com/reload=test-secret -n test
kubectl edit secret test-secret -n test
```
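Before digging into Reloader itself, it may be worth ruling out a `kubectl edit` pitfall (this is an assumption on my part, not something from the logs): secret values are shown base64-encoded in the editor, so saving a value that decodes to the same string is not a data change. A minimal local sketch:

```shell
# Sanity check (assumption, not from the Reloader docs): `kubectl edit` shows
# secret values base64-encoded, so an edit only counts as a data change if the
# decoded value actually differs after saving.
old=$(printf 'testvalue' | base64)   # value created in the repro above
new=$(printf 'newvalue' | base64)    # hypothetical replacement value
if [ "$old" != "$new" ]; then
  echo "data changed"                # Reloader has something to react to
fi
```

If the encoded value before and after the edit is identical, no reload should be expected.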
Expected behavior
Reloader monitors all namespaces and picks up all annotated deployments. After a secret gets updated, rotated, or otherwise changed, it triggers a reload of the annotated deployment.
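For reference, a minimal sketch of the wiring this expectation assumes (the annotation goes on the Deployment; names reuse `test-secret` from the repro above, while the image and labels are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
  namespace: test
  annotations:
    secret.reloader.stakater.com/reload: "test-secret"
spec:
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: app
          image: nginx  # placeholder image
          envFrom:
            - secretRef:
                name: test-secret
```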
Screenshots
Not applicable.
Environment
- Operator Version: Helm upgrade succeeded for release reloader/reloader.v3 with chart [email protected]
- Kubernetes/OpenShift Version:
```
$ kubectl version
Client Version: v1.34.1
Kustomize Version: v5.7.1
Server Version: v1.30.9
```
Additional context
No matter how I changed the annotations and the Helm chart, unless there were access issues, I see the following:
```
$ kubectl logs pod/reloader-76d797b5b7-w4hlh -n reloader
Defaulted container "reloader" out of: reloader, install-oneagent (init)
time="2025-10-27T22:50:38Z" level=info msg="Environment: Kubernetes"
time="2025-10-27T22:50:38Z" level=info msg="Starting Reloader"
time="2025-10-27T22:50:38Z" level=warning msg="KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces."
time="2025-10-27T22:50:38Z" level=info msg="created controller for: configMaps"
time="2025-10-27T22:50:38Z" level=info msg="Starting Controller to watch resource type: configMaps"
time="2025-10-27T22:50:38Z" level=info msg="created controller for: secrets"
time="2025-10-27T22:50:38Z" level=info msg="Starting Controller to watch resource type: secrets"
time="2025-10-27T22:50:38Z" level=info msg="Meta info configmap already exists, updating it"
```
Any suggestions on how to debug this, and whether it is a bug or we are misusing the tool?
We're experiencing the same issue. Debug logging sees the change event but doesn't see that the content actually changed (a race condition, or the compare logic is broken). After a bit of trial and error, we managed to pin Reloader to 1.4.5 (still the latest Helm chart), where it works. With anything newer than that, changes are not detected.
@giedriuskilcauskas Thank you! The version downgrade helped. But did it detect both configMaps and secrets? I see it creates controllers for both but detects only configMaps, not secrets. Any success with that part?
We're using only secrets, so that piece works with 1.4.5 and lower
@giedriuskilcauskas Unrelated question, but do you use fine-grained secret reload (`secret.reloader.stakater.com/reload`) or auto-reloading (`secret.reloader.stakater.com/auto`)?
The app version that you suggested works perfectly!
Actually, both. The auto annotation is kind of a leftover from the legacy setup, but it's still there.
Hi! Are you changing the secret's data or its annotations?
Secrets are updated by the CSI secrets store driver from Secrets Manager on AWS EKS, so we don't change the secret's metadata in any other way.
@msafwankarim @giedriuskilcauskas
For the last few days I have been working with this package and I cannot seem to find the right configuration. Possibly it is me not quite understanding how things work.
Per our tight security requirements, I bootstrapped the permissions from the template so that we can narrow them down in the future. Below are all the files we use to deploy with Kustomize:
clusterrole.yaml
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: reloader-role
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
      - configmaps
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "batch"
    resources:
      - cronjobs
    verbs:
      - list
      - get
  - apiGroups:
      - "batch"
    resources:
      - jobs
    verbs:
      - create
      - delete
      - list
      - get
  - apiGroups:
      - "apps"
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - list
      - get
      - update
      - patch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
```
clusterrolebinding.yaml
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: reloader-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: reloader-role
subjects:
  - kind: ServiceAccount
    name: reloader-svc-account
    namespace: reloader
```
role.yaml
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: reloader-metadata-role
  namespace: reloader
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - secrets
    verbs:
      - list
      - get
      - watch
      - create
      - update
```
rolebinding.yaml
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: reloader-metadata-role-binding
  namespace: reloader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: reloader-metadata-role
subjects:
  - kind: ServiceAccount
    name: reloader-svc-account
    namespace: reloader
```

(Note: `roleRef` has no `namespace` field; the binding's own `metadata.namespace` scopes it.)
release.yaml
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: reloader
  namespace: reloader
spec:
  chart:
    spec:
      chart: reloader
      sourceRef:
        kind: HelmRepository
        name: reloader-repo
        namespace: cluster-config
  interval: 5m
  install:
    remediation:
      retries: 5
  upgrade:
    force: true
    cleanupOnFail: true
    remediation:
      retries: 3
      strategy: uninstall
  values:
    fullnameOverride: reloader
    reloader:
      watchGlobally: true
      autoReloadAll: true
      enableMetricsByNamespace: true
      logLevel: debug
      reloadStrategy: annotations
      rbac:
        enabled: false
      ignoreSecrets: false
      ignoreConfigMaps: false
      serviceAccount:
        create: true
        name: reloader-svc-account
```
The application itself — a partial patch file:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: test-app
  namespace: test
spec:
  values:
    metadata:
      annotations:
        secret.reloader.stakater.com/reload: "test-keystore"
        secret.reloader.stakater.com/auto: "true"
```
It is later rendered into a Deployment that looks like this:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "13"
    meta.helm.sh/release-name: test-app
    meta.helm.sh/release-namespace: test
    secret.reloader.stakater.com/auto: "true"
    secret.reloader.stakater.com/reload: test-keystore
  creationTimestamp: "2025-03-06T14:21:22Z"
  generation: 24
  labels:
    app.kubernetes.io/instance: test-test-app
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: test-app
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: test-2.0.0
    helm.toolkit.fluxcd.io/name: test-app
    helm.toolkit.fluxcd.io/namespace: test
  name: test-app
  namespace: test
  resourceVersion: "442400587"
  uid: 962ec9cb-b007-45be-b732-e36883ab410b
```
When the secret gets rotated, we see this event:
```
$ kubectl get events -n test
LAST SEEN   TYPE     REASON                   OBJECT                          MESSAGE
3m25s       Normal   SecretRotationComplete   pod/test-app-6778f4cd84-2z28f   successfully rotated K8s secret test-keystore
```
Rotation happens via
```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
...
spec:
  provider: azure
```

from Azure Key Vault.
Meanwhile, the Reloader pod stays silent:
```
$ kubectl logs -f reloader-854f65c4b9-sszgx -n reloader
Defaulted container "reloader" out of: reloader, install-oneagent (init)
time="2025-11-05T13:38:33Z" level=info msg="Environment: Kubernetes"
time="2025-11-05T13:38:33Z" level=info msg="Starting Reloader"
time="2025-11-05T13:38:33Z" level=warning msg="KUBERNETES_NAMESPACE is unset, will detect changes in all namespaces."
time="2025-11-05T13:38:33Z" level=info msg="created controller for: configMaps"
time="2025-11-05T13:38:33Z" level=info msg="Starting Controller to watch resource type: configMaps"
time="2025-11-05T13:38:33Z" level=info msg="created controller for: secrets"
time="2025-11-05T13:38:33Z" level=info msg="Starting Controller to watch resource type: secrets"
```
We also checked the service account's permissions:
```
$ kubectl auth can-i get secrets --as=system:serviceaccount:reloader:reloader-svc-account
yes
$ kubectl auth can-i get configmaps --as=system:serviceaccount:reloader:reloader-svc-account
yes
```
I checked internally, but we are stuck on this and cannot move forward. Are we doing anything wrong? Any further tips for debugging?
Hi! I believe this issue might be related to #1055.
It looks like, if you use both `secret.reloader.stakater.com/reload: X` and the auto-reload annotation at the same time, the first one is ignored.
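If that is the case, a workaround (my assumption based on the linked issue, not verified here) would be to keep only one of the two annotations on the Deployment, e.g.:

```yaml
# Keep only one mechanism: either the explicit secret list...
metadata:
  annotations:
    secret.reloader.stakater.com/reload: "test-keystore"
# ...or auto-reload, but not both at once:
# metadata:
#   annotations:
#     secret.reloader.stakater.com/auto: "true"
```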