[BUG] ClusterRole doesn't include secrets, only configmaps are detected
Describe the bug: All configuration values are ignored, and only the Role rule for detecting configmaps is applied.
To Reproduce: Deploy with any Helm chart version higher than v2.1.5 and set:
values:
  reloader:
    ignoreSecrets: false
A Role like this will be created:
kubectl get role reloader-metadata-role -n reloader -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    meta.helm.sh/release-name: reloader
    meta.helm.sh/release-namespace: reloader
  creationTimestamp: "2025-11-04T09:59:42Z"
  labels:
    app: reloader
    app.kubernetes.io/instance: reloader
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: reloader
    app.kubernetes.io/version: v1.4.8
    chart: reloader-2.2.3
    helm.sh/chart: reloader-2.2.3
    helm.toolkit.fluxcd.io/name: reloader
    helm.toolkit.fluxcd.io/namespace: reloader
    heritage: Helm
    release: reloader
  name: reloader-metadata-role
  namespace: reloader
  resourceVersion: "442390732"
  uid: d2fd7d76-8e85-41ac-9eb9-e8e933273c0a
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - list
  - get
  - watch
  - create
  - update
Expected behavior: The Role should include secrets if they have not been explicitly excluded.
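For reference, a minimal sketch of the expected rules block, assuming the secrets rule would mirror the configmaps verbs shown above (the exact verbs are an assumption):
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - list
  - get
  - watch
  - create
  - update
- apiGroups:
  - ""
  resources:
  - secrets  # missing from the generated Role above
  verbs:
  - list
  - get
  - watch
  - create
  - update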
Screenshots
Environment
- Operator Version: Helm upgrade succeeded for release reloader/reloader.v3 with chart reloader@2.2.3
- Kubernetes/OpenShift Version:
kubectl version
Client Version: v1.34.1
Kustomize Version: v5.7.1
Server Version: v1.30.9
Additional context
While digging deeper into this issue, it appears that role.yaml overrides clusterrole.yaml; that is what creates the problem, not the fact that role.yaml itself has the extra configmaps rule.
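For illustration, this is the kind of conditional that typically gates such a rule in a chart's RBAC template; the value path and structure here are assumptions, not the chart's actual code:
# hypothetical role.yaml fragment; the .Values path is an assumption
{{- if not .Values.reloader.ignoreSecrets }}
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - list
  - get
  - watch
{{- end }}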
Hey @ChameleonTartu, thanks for letting us know about the issue. To proceed further, please share the complete values.yaml file.
@mahmadmujtaba I am sharing the HelmRelease that includes the values:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: reloader
  namespace: reloader
spec:
  chart:
    spec:
      chart: reloader
      sourceRef:
        kind: HelmRepository
        name: reloader-repo
        namespace: cluster-config
  interval: 5m
  install:
    remediation:
      retries: 5
  upgrade:
    force: true
    cleanupOnFail: true
    remediation:
      retries: 3
      strategy: uninstall
  values:
    fullnameOverride: reloader
    reloader:
      watchGlobally: true
      autoReloadAll: true
      enableMetricsByNamespace: true
      logLevel: debug
      reloadStrategy: annotations
      rbac:
        enabled: false
      ignoreSecrets: false
      ignoreConfigMaps: false
      serviceAccount:
        create: true
        name: reloader-svc-account
Given your values, I don't think either the Role or the ClusterRole would be created, since both of them depend on reloader.rbac.enabled being set to true. Since you are setting that to false, it looks normal to me that those resources are not generated.
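If that is the cause, a minimal values sketch that should let the RBAC templates render, under the assumption that reloader.rbac.enabled gates both role.yaml and clusterrole.yaml as described above:
values:
  reloader:
    rbac:
      enabled: true  # required for role.yaml / clusterrole.yaml to render, per the comment above
    ignoreSecrets: false
    ignoreConfigMaps: false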