
Drift mode should detect extra properties

Open darkweaver87 opened this issue 1 year ago • 5 comments

Hello,

When drift mode is set to either warn or enabled, it should detect extra map keys and extra list items. For instance, if a HelmRelease installs a Deployment with some environment variables set, drift detection should notice when the Deployment has manually added extra environment variables and, with correction enabled, remove them.

Thanks,

Rémi

darkweaver87 avatar Feb 14 '24 15:02 darkweaver87

As stated here, we compare the objects from the Helm storage for a given release with the versions that exist in the cluster. This is done using a Server-Side Apply (SSA) dry-run, i.e. the kube-apiserver performs the comparison. I believe that what you describe should already be implemented. Can you provide more details on your observation?
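
A minimal, illustrative sketch of how a manual change can go unnoticed by such a comparison: fields added by hand via `kubectl edit` are recorded in `metadata.managedFields` under a different field manager than the one helm-controller applies with, so an SSA dry-run of the stored release manifest may leave them untouched rather than report them as drift. The manager names and field paths below are assumptions for illustration, not output from a real cluster:

```yaml
# Illustrative managedFields on a live Deployment (abridged).
metadata:
  managedFields:
  - manager: helm-controller   # owns what the stored Helm release manifest declares
    operation: Apply
    fieldsV1:
      f:spec:
        f:template:
          f:spec:
            f:containers: {}
  - manager: kubectl-edit      # owns the manually added fields
    operation: Update
    fieldsV1:
      f:spec:
        f:template:
          f:spec:
            f:hostAliases: {}
```

Under Server-Side Apply, granular maps and `listType: map` lists are merged per key, so entries owned by another manager can be preserved by a dry-run apply instead of being flagged.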

souleb avatar Apr 17 '24 13:04 souleb

(using Flux 2.2.3)

I just modified one of my Deployments, which is installed via a HelmRelease with drift detection enabled, and added a DNS override:

spec:
  template:
    spec:
      hostAliases:
      - hostnames:
        - somehostname
        ip: 127.0.0.1

and this is not detected by Flux.

hoerup avatar Jun 17 '24 10:06 hoerup

Hi, we see the same issue. The controller only detects changes to values that already exist in the Deployment; extra env variables are not detected as drift.

flux check
► checking prerequisites
✔ Kubernetes 1.32.0 >=1.30.0-0
► checking version in cluster
✔ distribution: flux-v2.5.1
✔ bootstrapped: true
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v1.2.0
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v1.5.1
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v1.5.0
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v1.5.0
► checking crds
✔ alerts.notification.toolkit.fluxcd.io/v1beta3
✔ buckets.source.toolkit.fluxcd.io/v1
✔ gitrepositories.source.toolkit.fluxcd.io/v1
✔ helmcharts.source.toolkit.fluxcd.io/v1
✔ helmreleases.helm.toolkit.fluxcd.io/v2
✔ helmrepositories.source.toolkit.fluxcd.io/v1
✔ kustomizations.kustomize.toolkit.fluxcd.io/v1
✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
✔ providers.notification.toolkit.fluxcd.io/v1beta3
✔ receivers.notification.toolkit.fluxcd.io/v1
✔ all checks passed

For testing I use this example repo https://github.com/fluxcd/flux2-kustomize-helm-example and play with the staging cluster.

Steps to reproduce: add the following spec section here https://github.com/fluxcd/flux2-kustomize-helm-example/blob/main/apps/staging/podinfo-values.yaml#L6

. . .
spec:
  driftDetection:
    mode: enabled
. . . 

then force reconciliation

flux reconcile kustomization infra-configs infra-controllers apps -n flux-system --with-source

edit the env vars manually

kubectl edit deployment podinfo -n podinfo

Add a few new env vars, e.g. (shown as `kubectl describe` renders them):

     Environment:
      PODINFO_UI_COLOR:  #34577c
      MANUAL:            extra       # new extra
      MANUAL_MORE:       extra_more  # new extra

run

flux reconcile kustomization infra-configs infra-controllers apps -n flux-system --with-source
flux reconcile hr podinfo -n podinfo --with-source

check the Deployment afterwards

 kubectl describe deployment podinfo -n podinfo | grep "MANUAL"
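
Putting the pieces together, a hedged sketch of what a complete HelmRelease with drift detection enabled could look like (the chart and source names are illustrative; in the example repo the driftDetection block is added via the values patch above):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
  namespace: podinfo
spec:
  interval: 10m
  driftDetection:
    mode: enabled   # "warn" would only log detected drift instead of correcting it
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
```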

ViacheslavKudinov avatar May 02 '25 07:05 ViacheslavKudinov

Hi, I guess I am running into the same issue at the moment.

I added some env variables and unfortunately included a typo in one of the variable names. After fixing the typo, I now have both the misspelled and the corrected env variable in my HelmRelease.

Is there any good workaround for this? Or a real solution?

eloo-abi avatar Aug 20 '25 08:08 eloo-abi

I think I am also hitting this.

When adding a new label to .spec.selector in a Service, it is removed by drift detection:

 spec:
   selector:
     app.kubernetes.io/instance: garage-garage
     app.kubernetes.io/name: garage
+    route: "true"

(the route: "true" field in the selector is removed)

But when adding the same label to a StatefulSet (.spec.template.metadata.labels), it is not removed:

 spec:
   template:
     metadata:
       annotations:
         checksum/config: c2623ef845788ef6a58f830da17dbb71b60f8e5010906f71898fdd6618a7de49
       creationTimestamp: null
       labels:
         app.kubernetes.io/instance: garage-garage
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/name: garage
         app.kubernetes.io/version: v2.1.0
         helm.sh/chart: garage-0.9.1
+        route: "true"

Both fields (Service .spec.selector and StatefulSet .spec.template.metadata.labels) are present in the Helm manifest (checked with `helm get manifest -n flux-system garage-garage`).
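
If it helps narrow this down: in the Kubernetes OpenAPI schema, a Service's spec.selector is marked `x-kubernetes-map-type: atomic`, while pod-template labels form a granular map. Under SSA an atomic map is owned and replaced as a whole, which could explain why the extra selector key is reverted while the extra template label survives. An abridged, illustrative sketch of the schema difference (not the full schema):

```yaml
# Abridged schema markers (illustrative, not the complete OpenAPI document).
io.k8s.api.core.v1.ServiceSpec:
  properties:
    selector:
      type: object
      additionalProperties:
        type: string
      x-kubernetes-map-type: atomic   # SSA replaces the whole map at once
# Pod-template metadata.labels carries no atomic marker:
# it is a granular map, so keys added by other managers are merged and kept.
```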

DerRockWolf avatar Oct 06 '25 18:10 DerRockWolf