[BUG][DEPLOY] default serviceaccount not being updated during install
**Describe the bug**
When running the start.sh script, the following error occurs:

```
Error from server (AlreadyExists): error when creating "prereqs/": serviceaccounts "default" already exists
```
**To Reproduce**
Steps to reproduce the behavior:
- Clone the deploy repo
- Follow the instructions to run `start.sh`
- Witness the error!
**Expected behavior**
`kubectl apply` should simply update the already-existing default service account with the imagePullSecret.
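In other words, after a successful apply the default service account should end up with both pull secrets, along these lines (output illustrative; the multiclusterhub-operator-pull-secret name is taken from the discussion below):

```
$ oc get serviceaccount default -n open-cluster-management -o yaml
...
imagePullSecrets:
- name: default-dockercfg-r4kf9
- name: multiclusterhub-operator-pull-secret
...
```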
**Desktop (please complete the following information):**

```
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v0.0.0-master+$Format:%h$", GitCommit:"$Format:%H$", GitTreeState:"", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.2", GitCommit:"aa10b5b", GitTreeState:"clean", BuildDate:"2020-03-16T18:11:23Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
```
**Additional context**
This is present in my locally cloned start.sh:
https://github.com/open-cluster-management/deploy/blob/master/start.sh#L160
```
[root@bastion deploy]# oc get sa default -oyaml
apiVersion: v1
imagePullSecrets:
- name: default-dockercfg-r4kf9
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-04-07T15:33:46Z"
  name: default
  namespace: open-cluster-management
  resourceVersion: "258550"
  selfLink: /api/v1/namespaces/open-cluster-management/serviceaccounts/default
  uid: 067cdba1-e067-4904-8d4b-47ff1c375eae
secrets:
- name: default-token-672pk
- name: default-dockercfg-r4kf9
```
@stencell what version of kubectl are you using? From the issue description I can see you included the output of `kubectl version`; however, there is no Major or Minor version specified in the Client Version.
I've pushed an "experimental" branch (https://github.com/open-cluster-management/deploy/tree/issue-54) to try to address the issue using the `patchesStrategicMerge` option in the kustomization.yaml file.
If this works, you will probably still see an error or warning about the default service account already existing; however, it should still be patched, and the output of the following command should show multiclusterhub-operator-pull-secret in the imagePullSecrets key:
```sh
oc get serviceaccount default -n open-cluster-management -o yaml
```
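For reference, a minimal sketch of what the patchesStrategicMerge approach looks like (file names here are hypothetical; see the issue-54 branch for the actual layout):

```yaml
# kustomization.yaml -- illustrative; the branch's real file may differ
resources:
- namespace.yaml
- service-account.yaml   # declares the default SA so kustomize can patch it
patchesStrategicMerge:
- patch-default-sa.yaml
```

```yaml
# patch-default-sa.yaml -- imagePullSecrets merges by name, so the
# existing default-dockercfg-* entry should be preserved
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: open-cluster-management
imagePullSecrets:
- name: multiclusterhub-operator-pull-secret
```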
Ah, so the kubectl that was available on this node is the one that ships from Red Hat along with oc in https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.3.8/openshift-client-linux-4.3.8.tar.gz. The kubectl there, I think, is just a hard link to oc, so not the "real" kubectl.
It happened to me in my environment:

```
$ oc version
Client Version: 4.4.0-0.nightly-2020-04-27-013217
Server Version: 4.4.0-0.nightly-2020-04-27-013217
Kubernetes Version: v1.17.1
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.1", GitCommit:"b9b84e0", GitTreeState:"clean", BuildDate:"2020-04-26T20:16:35Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
```
I guess that's because OpenShift creates a Project object when you create a namespace, as well as some service accounts by default (default, builder, deployer). So kustomize creates the namespace and then the service account with the proper imagePullSecret, but the latter is then overwritten by OpenShift.
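For example, a fresh OpenShift namespace already contains the auto-created service accounts (output illustrative):

```
$ oc get serviceaccounts -n open-cluster-management
NAME       SECRETS   AGE
builder    2         1m
default    2         1m
deployer   2         1m
```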
I've not tested it, but maybe you can create the namespace first, wait for the service account to be ready (I've been looking into whether you can 'wait' for a service account, but I'm afraid it cannot be done, as it has no conditions; see https://github.com/kubernetes/kubernetes/issues/83094), and then apply the rest of the kustomizations, along the lines of the sketch below.
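A rough, untested sketch of that ordering, assuming the open-cluster-management namespace from above (polling instead of `kubectl wait`, since the service account has no conditions to wait on):

```sh
# Create the namespace first, then poll until OpenShift has created the
# default service account, and only then apply the rest of the prereqs.
kubectl create namespace open-cluster-management
until kubectl get serviceaccount default -n open-cluster-management >/dev/null 2>&1; do
  sleep 1
done
kubectl apply -k prereqs/
```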
My 2 cents.
A workaround is to first run `kubectl apply --openapi-patch=true -k prereqs/`, then run the start.sh script afterwards.
This always happened in my environment, and I had been suffering from it for days.
My workaround was to run it twice by updating ./start.sh:

```sh
# Why twice? Because there might be a potential race while patching the pull secret
kubectl apply -k prereqs/
sleep 2
kubectl apply -k prereqs/
```
Note: adding `--openapi-patch=true` didn't help.