Deploying the very same Pod twice throws an error
What steps did you take:

edeoliveira:~/Documents/projects/carvel-research $ k create ns app4 && kapp deploy -a app3 -n app --into-ns app4 -f busybox-pod.yaml
namespace/app4 created
Target cluster 'https://kubernetes.docker.internal:6443' (nodes: docker-desktop)
Changes
Namespace  Name  Kind  Conds.  Age  Op      Op st.  Wait to    Rs  Ri
app3       bb    Pod   4/4 t   2d   delete  -       delete     ok  -
app4       bb    Pod   -       -    create  -       reconcile  -   -

Op:      1 create, 1 delete, 0 update, 0 noop
Wait to: 1 reconcile, 1 delete, 0 noop
Continue? [yN]: y
10:03:34AM: ---- applying 2 changes [0/2 done] ----
10:03:34AM: delete pod/bb (v1) namespace: app3
10:03:35AM: create pod/bb (v1) namespace: app4
10:03:35AM: ---- waiting on 2 changes [0/2 done] ----
10:03:35AM: ongoing: delete pod/bb (v1) namespace: app3
10:03:35AM: ongoing: reconcile pod/bb (v1) namespace: app4
10:03:35AM:  ^ Pending: ContainerCreating
10:03:38AM: ok: reconcile pod/bb (v1) namespace: app4
10:03:38AM: ---- waiting on 1 changes [1/2 done] ----
10:04:11AM: ok: delete pod/bb (v1) namespace: app3
10:04:11AM: ---- applying complete [2/2 done] ----
10:04:11AM: ---- waiting complete [2/2 done] ----
Succeeded
edeoliveira:~/Documents/projects/carvel-research $ k create ns app4 && kapp deploy -a app3 -n app --into-ns app4 -f busybox-pod.yaml
Error from server (AlreadyExists): namespaces "app4" already exists
edeoliveira:~/Documents/projects/carvel-research $ kapp deploy -a app3 -n app --into-ns app4 -f busybox-pod.yaml
Target cluster 'https://kubernetes.docker.internal:6443' (nodes: docker-desktop)
Changes
Namespace  Name  Kind  Conds.  Age  Op      Op st.  Wait to    Rs  Ri
app4       bb    Pod   -       51s  update  -       reconcile  ok  -

Op:      0 create, 0 delete, 1 update, 0 noop
Wait to: 1 reconcile, 0 delete, 0 noop
Continue? [yN]: y
10:04:27AM: ---- applying 1 changes [0/1 done] ----
10:04:27AM: update pod/bb (v1) namespace: app4
kapp: Error: Applying update pod/bb (v1) namespace: app4:
Updating resource pod/bb (v1) namespace: app4:
Pod "bb" is invalid: spec:
Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)
core.PodSpec{
- Volumes: nil,
+ Volumes: []core.Volume{
+ {
+ Name: "default-token-gkbbh",
+ VolumeSource: core.VolumeSource{
+ Secret: &core.SecretVolumeSource{SecretName: "default-token-gkbbh", DefaultMode: &420},
+ },
+ },
+ },
InitContainers: nil,
Containers: []core.Container{
{
... // 7 identical fields
Env: nil,
Resources: core.ResourceRequirements{},
- VolumeMounts: nil,
+ VolumeMounts: []core.VolumeMount{
+ {
+ Name: "default-token-gkbbh",
+ ReadOnly: true,
+ MountPath: "/var/run/secrets/kubernetes.io/serviceaccount",
+ },
+ },
VolumeDevices: nil,
LivenessProbe: nil,
... // 10 identical fields
},
},
EphemeralContainers: nil,
RestartPolicy: "Always",
... // 2 identical fields
DNSPolicy: "ClusterFirst",
NodeSelector: nil,
- ServiceAccountName: "",
+ ServiceAccountName: "default",
AutomountServiceAccountToken: nil,
NodeName: "docker-desktop",
... // 18 identical fields
}
(reason: Invalid)
What happened: The same Pod YAML deployed twice resulted in an invalid field changes error. Nothing was changed!
What did you expect:
A message saying No changes found in the new configuration
Anything else you would like to add:
Environment:

- kapp version (use kapp --version): kapp version 0.36.0
- OS (e.g. from /etc/os-release): macOS 11.2.3
- Kubernetes version (use kubectl version):
  Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-21T20:23:45Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"darwin/amd64"}
  Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
  Docker version 20.10.2, build 2291f61 (embedded k8s cluster)
Hi @eduardoroliveira, I believe the Forbidden: pod updates may not change fields... error you see occurs because the status of a Kubernetes Pod is immutable. kapp has a built-in rule to prefer user-specified values over cluster values for status when calculating the change set, and since status is provided in the manifest, kapp will try to apply that change. When you first deploy the Pod, the cluster populates the status field to indicate that the Pod is running; when the Pod is redeployed, kapp tries to overwrite that back to status: {}, and since the field is immutable, the update errors.
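For illustration, after the first deploy, kubectl get pod bb -n app4 -o yaml would show the cluster-populated status subtree, roughly along these lines (abridged, with illustrative values):

status:
  phase: Running
  conditions:
  - type: Ready
    status: "True"
  hostIP: 192.168.65.3
  podIP: 10.1.0.42

It is this populated subtree that a redeploy of the manifest's status: {} would have to wipe out.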
One way to address this is to remove the status field from your Pod definition.
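For instance, a quick way to strip it without hand-editing, assuming you have mikefarah's yq v4 available (not a kapp tool, just one convenient option; the output filename is arbitrary):

yq eval 'del(.status)' busybox-pod.yaml > busybox-pod-clean.yaml

then deploy busybox-pod-clean.yaml instead.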
Another way is to include a kapp Config along with the other inputs to change kapp's behavior to prefer cluster values for the status field over what is in the manifest.
For example, the manifest.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: bb
  name: bb
spec:
  containers:
  - args:
    - sleep
    - "3600"
    image: busybox
    name: bb
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
and kapp-config.yml
apiVersion: kapp.k14s.io/v1alpha1
kind: Config
rebaseRules:
- path: [status]
  type: copy
  sources: [existing, new]
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: v1, kind: Pod}
Run kapp deploy -a app3 -n app --into-ns app4 -f busybox-pod.yaml -f kapp-config.yml (hint: use -c to view the changes), and kapp should recognize that no changes were made.
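Assuming the rebase rule takes effect, the change summary should then come back with nothing to apply, roughly like the earlier output but with the create/delete/update counts at zero (the exact counts and rendering may vary by kapp version):

Op:      0 create, 0 delete, 0 update, 0 noop
Wait to: 0 reconcile, 0 delete, 0 noop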
Hope that helps! If this doesn't solve your issue, could you provide the manifests you passed to kapp and the output of kapp deploy -a app3 -n app --into-ns app4 -f busybox-pod.yaml -f kapp-config.yml -c so we can debug further?
@eduardoroliveira are you still encountering this issue? If not, can we close this out?