StackStorm-ha chart doesn't remove old initContainers for packs
SUMMARY
We have a deployment in Kubernetes (GKE) using stackstorm-ha, with some minor custom modifications.
Each time we deploy the Helm chart with a new version of our custom pack container, Helm adds a new initContainer for it but does not remove the old one.
# Container specs for the ST2 "packs" containers
st2:
  packs:
    images:
      - repository: us.gcr.io/our-gcp-project-ops/our-gcp-repo
        name: st2packs
        tag: a_docker_tag
        pullPolicy: IfNotPresent
      - repository: us.gcr.io/our-gcp-project-ops/our-gcp-repo
        name: our_custom_st2pack
        tag: a_docker_tag
        pullPolicy: IfNotPresent
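One way to check whether the duplication comes from the rendered templates or from the merge onto the live object is to render the chart locally; the release and chart names below are assumptions, adjust to your setup:

  # Render the chart from the values above and show the pack initContainers it produces.
  helm template stackstorm stackstorm/stackstorm-ha -f values.yaml \
    | grep -A 2 'st2-custom-pack'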
kubectl describe deployment stackstorm-st2actionrunner
....
  st2-custom-pack-663379d8242b99b798beba93f3825a593dca1bdd:
    Image:      us.gcr.io/our-gcp-project-ops/our-gcp-repo/st2packs:a_docker_tag
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -ec
      /bin/cp -aR /opt/stackstorm/packs/. /opt/stackstorm/packs-shared &&
      /bin/cp -aR /opt/stackstorm/virtualenvs/. /opt/stackstorm/virtualenvs-shared
    Requests:
      cpu:                250m
      ephemeral-storage:  3Gi
      memory:             1Gi
    Environment:          <none>
    Mounts:
      /opt/stackstorm/packs-shared from st2-packs-vol (rw)
      /opt/stackstorm/virtualenvs-shared from st2-virtualenvs-vol (rw)
  st2-custom-pack-20d5d3a41a615ed15c29a205b9527c160344c38a:
    Image:      us.gcr.io/our-gcp-project-ops/our-gcp-repo/st2packs:another_docker_tag
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -ec
      /bin/cp -aR /opt/stackstorm/packs/. /opt/stackstorm/packs-shared &&
      /bin/cp -aR /opt/stackstorm/virtualenvs/. /opt/stackstorm/virtualenvs-shared
    Requests:
      cpu:                250m
      ephemeral-storage:  3Gi
      memory:             1Gi
    Environment:          <none>
    Mounts:
      /opt/stackstorm/packs-shared from st2-packs-vol (rw)
      /opt/stackstorm/virtualenvs-shared from st2-virtualenvs-vol (rw)
...
Each run of the Helm chart with an updated custom pack adds another init container instead of replacing the previous one.
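A quick way to confirm the duplication without scanning the full describe output is to list just the initContainer names on the live deployment (the deployment name here assumes the default release naming):

  # Print the names of all initContainers on the live deployment.
  kubectl get deployment stackstorm-st2actionrunner \
    -o jsonpath='{.spec.template.spec.initContainers[*].name}'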
STACKSTORM VERSION
st2 3.5.0, on Python 3.6.9
OS, environment, install method
Kubernetes on GCP, with a slightly modified stackstorm-ha Helm chart. The modifications are mostly JWT login support for GCP IAP.
Steps to reproduce the problem
- Deploy stackstorm-ha with a custom pack
- Observe the single initContainer for the custom pack
- Upgrade stackstorm-ha with the custom pack under a different docker tag and with changed contents (see the command sketch after this list)
- Observe an additional initContainer in the deployment
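A minimal command sketch of those steps, assuming a release named stackstorm and a values.yaml like the one above:

  # Initial install with the custom pack image.
  helm install stackstorm stackstorm/stackstorm-ha -f values.yaml

  # Bump st2.packs.images[*].tag in values.yaml, then upgrade.
  helm upgrade stackstorm stackstorm/stackstorm-ha -f values.yaml

  # Both the old and the new st2-custom-pack-* initContainers now show up.
  kubectl describe deployment stackstorm-st2actionrunner | grep st2-custom-pack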
Expected Results
A single initContainer for the new version replacing the old version of the custom pack
Actual Results
Duplicate initContainer blocks in the deployment
Thanks!
Workaround: delete the deployments for the affected pods (not just the pods themselves) and re-run the Helm chart. This should not destroy any state stored in MongoDB, Redis, or RabbitMQ.
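A rough sketch of that workaround, assuming a release named stackstorm (deployment and release names are assumptions; repeat the delete for every affected deployment):

  # Delete a deployment carrying the stale initContainers.
  kubectl delete deployment stackstorm-st2actionrunner

  # Re-create it cleanly by re-running the chart.
  helm upgrade stackstorm stackstorm/stackstorm-ha -f values.yaml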
This belongs to the https://github.com/StackStorm/stackstorm-ha project and repository; transferred there from stackstorm/st2.
Still happens.
We found that managing the deployment with ArgoCD worked around this issue.
Hope this helps