
reject the chart install/upgrade in presence of different chart in the same namespace with the same release name

Open kabakaev opened this issue 3 years ago • 5 comments

TL/DR: helm silently deletes all resources of another chart deployed with the same release name in the same namespace, including PVCs! Helm should either reject the installation if it conflicts with an existing release of another chart, or include the chart name as a prefix/suffix in the release secret name to prevent the conflict in the first place.

Luckily, I had a full backup, but some other guy lost data because of this bug.

Background: I wanted to install a nextcloud chart and let the syncthing chart take care of replicating the data to a backup location. The Helm release name is often used as a prefix for Kubernetes resource names, so I wanted to deploy both charts with the same release name, nc, to keep the pod names short.

The syncthing chart created a deployment and a PVC:

# helm upgrade --install --namespace=nc --create-namespace nc k8s-at-home/syncthing -f nc.yaml
Release "nc" has been upgraded. Happy Helming!
# helm -n nc ls
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
nc      nc              9               2022-11-04 01:28:59.824973961 +0100 CET deployed        syncthing-3.5.2 1.18.2     
# kubectl -n nc get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nc-syncthing-data   Bound    pvc-83d66de5-102e-4b02-bf78-bee1b4f3d2ac   400Gi      RWX            rook-cephfs    53s
# kubectl -n nc get po
NAME                            READY   STATUS    RESTARTS   AGE
nc-syncthing-7b8c697f74-qxfq4   1/1     Running   0          60s

Everything runs fine. Note that helm ls shows CHART=syncthing-3.5.2.

Then I installed the nextcloud chart in the same namespace, which deleted the deployment and the PVC of the syncthing chart:

# helm upgrade  --post-renderer ./kustomize/kustomize.sh --install --namespace=nc --create-namespace nc nextcloud/nextcloud -f values.yaml
Release "nc" has been upgraded. Happy Helming!
# kubectl  -n nc get pvc
No resources found in nc namespace.
# kubectl  -n nc get po
NAME                            READY   STATUS    RESTARTS   AGE
nc-nextcloud-596bd84984-9kcf8   0/2     Pending   0          15s
# helm -n nc ls
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
nc      nc              10              2022-11-04 01:30:53.743987719 +0100 CET deployed        nextcloud-3.2.0 24.0.5 

The nc-nextcloud pod is Pending because it is waiting for the now-deleted PVC :)

Note that helm ls shows the chart name changed to nextcloud-3.2.0.

I would expect Helm to reject the install/upgrade operation if the chart name changes.
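Such a rejection could be approximated today with a small wrapper around helm upgrade. This is only a sketch of the check, not Helm behaviour: chart_base and the hard-coded chart strings are hypothetical, and in a real script the deployed chart would be read from `helm -n <ns> ls -o json` with jq.

```shell
#!/bin/sh
# Hypothetical pre-upgrade guard: refuse the upgrade when the release
# already exists but belongs to a different chart.

# Strip the trailing "-<version>" from Helm's CHART column,
# e.g. "syncthing-3.5.2" -> "syncthing".
chart_base() {
  printf '%s\n' "$1" | sed 's/-[0-9][0-9A-Za-z.+]*$//'
}

# In a real script this would come from:
#   helm -n "$ns" ls -o json | jq -r '.[] | select(.name == "nc") | .chart'
deployed_chart="syncthing-3.5.2"
wanted_chart="nextcloud"

if [ "$(chart_base "$deployed_chart")" != "$wanted_chart" ]; then
  echo "refusing: release 'nc' is owned by chart '$(chart_base "$deployed_chart")', not '$wanted_chart'"
fi
```

Only if the check passes would the wrapper go on to run the actual helm upgrade --install command.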

Alternatively, each Chart.yaml has a name: field, which could be used as a suffix for the Helm release secret. For example, installing the two charts described above would then have the following effect:

# helm upgrade --install --namespace=nc --create-namespace nc k8s-at-home/syncthing -f nc.yaml
# helm upgrade  --post-renderer ./kustomize/kustomize.sh --install --namespace=nc --create-namespace nc nextcloud/nextcloud -f values.yaml
# helm -n nc ls
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
nc-syncthing      nc              1               2022-11-04 01:28:59.824973961 +0100 CET deployed        syncthing-3.5.2 1.18.2     
nc-nextcloud      nc              1               2022-11-04 01:30:53.743987719 +0100 CET deployed        nextcloud-3.2.0 24.0.5 
# kubectl get secrets
sh.helm.release.v1.nc.nextcloud.v1    helm.sh/release.v1   1      6m
sh.helm.release.v1.nc.syncthing.v1    helm.sh/release.v1   1      4m
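The proposed scheme can be sketched as a naming function (illustrative only; today Helm names its release secrets sh.helm.release.v1.&lt;release&gt;.v&lt;revision&gt; without any chart component):

```shell
# secret_name <release> <revision> [chart]
# Without a chart argument this mirrors Helm's current secret naming;
# with one, it shows the chart-suffixed scheme proposed above.
secret_name() {
  if [ -n "${3:-}" ]; then
    echo "sh.helm.release.v1.$1.$3.v$2"   # proposed: chart name folded in
  else
    echo "sh.helm.release.v1.$1.v$2"      # current behaviour
  fi
}

secret_name nc 1             # -> sh.helm.release.v1.nc.v1
secret_name nc 1 syncthing   # -> sh.helm.release.v1.nc.syncthing.v1
```

With the chart name in the storage key, the nextcloud install would create its own secret instead of overwriting the syncthing release history.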

Output of helm version:

version.BuildInfo{Version:"v3.10.1", GitCommit:"9f88ccb6aee40b9a0535fcc7efea6055e1ef72c9", GitTreeState:"clean", GoVersion:"go1.18.7"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:41:01Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.6+k3s1", GitCommit:"a8e0c66d1a90a2bbc4ffa975129ca35756cc7c14", GitTreeState:"clean", BuildDate:"2022-09-28T16:52:07Z", GoVersion:"go1.18.6", Compiler:"gc", Platform:"linux/amd64"}

Platform: k3s

kabakaev avatar Nov 04 '22 01:11 kabakaev

@kabakaev Thanks for your issue, but could you tell me why you install releases with the same release name when the charts are not the same? Why not use a new release name?

yxxhero avatar Nov 04 '22 04:11 yxxhero

I tried to do it out of convenience.

Most public charts use the release name as a prefix for resource names. Although I can (and did) choose different release names for the charts, the resulting resource names are ugly.
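For context, this prefixing comes from the fullname helper that helm create scaffolds into most charts' _helpers.tpl (shown here roughly as generated; individual charts may vary):

```
{{- define "mychart.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
```

Note the contains branch: when the release name already contains the chart name, the chart name is not appended again, which is why a release named after the chart avoids the doubled-up names shown below.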

Which of the following pod names would you prefer?

Variant 1: release name conflicts must be avoided, as it is now:

# kubectl get pods
dev-postgresql-postgresql-0
dev-backend-backend-abc-123
dev-webapp-webapp-xyz-789
prod-postgresql-postgresql-0
prod-backend-backend-123-abc
prod-webapp-webapp-789-xyz

Variant 2: different charts may have the same release name:

# kubectl get pods
dev-postgresql-0
dev-backend-abc-123
dev-webapp-xyz-789
prod-postgresql-0
prod-backend-123-abc
prod-webapp-789-xyz

You may ask, why don't I deploy everything as a single chart? That is surely the right way to do it, except when the charts are maintained by different teams, as in my initial use case. I could probably write my own umbrella chart and pull in the actual charts as dependencies; that is a good workaround for me. I'm not sure how many Helm users ever go beyond running helm install on a bunch of public charts.
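The umbrella-chart workaround amounts to a minimal Chart.yaml listing both charts as dependencies (a sketch; the versions and repository URLs here are illustrative and should be checked against the actual repositories):

```yaml
apiVersion: v2
name: nc
version: 0.1.0
dependencies:
  - name: syncthing
    version: 3.5.2
    repository: https://k8s-at-home.com/charts/
  - name: nextcloud
    version: 3.2.0
    repository: https://nextcloud.github.io/helm/
```

After helm dependency update, a single helm upgrade --install nc . deploys both charts under one release, so the release name conflict cannot arise.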

But the point of this issue is not about the right workaround for the release name conflict.

Instead, I would like Helm to either warn me about the release name conflict or to avoid such conflicts in the first place.

If two people managed to delete their data, then it's not a support question, but a design issue.

kabakaev avatar Nov 04 '22 15:11 kabakaev

@kabakaev I got you. You should use helm install instead of helm upgrade --install; it refuses to reuse an existing release name:

[root@devops nginx]# helm install test .
NAME: test
LAST DEPLOYED: Sat Nov  5 08:23:54 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=nginx,app.kubernetes.io/instance=test" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
[root@devops nginx]# helm install test .
Error: INSTALLATION FAILED: cannot re-use a name that is still in use

yxxhero avatar Nov 05 '22 00:11 yxxhero

I'd install the two charts in different namespaces. If that's not possible, you might be able to work around it by using kustomize as a post-renderer.

joejulian avatar Nov 08 '22 22:11 joejulian

Many charts (though certainly not all) allow you to override the release name with a value, so you could do something like helm upgrade --install ... nc-syncthing k8s-at-home/syncthing --set nameOverride=nc (that's just an example, I don't know if that specific chart supports such an override).

Edit: That said, in the dev/prod example I'd almost certainly put those in separate namespaces as @joejulian suggested; my suggestion was more targeted at two applications which directly interact in such a way that it is useful to have them be in the same namespace for other reasons.

philomory avatar Nov 13 '22 20:11 philomory

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

github-actions[bot] avatar Feb 12 '23 00:02 github-actions[bot]

Seems like these answers didn't spark any additional questions, so I'm going to go ahead and close this.

joejulian avatar Feb 12 '23 18:02 joejulian