OperatorHub Catalog ARM64 Support
Bug Report
What did you do?
1. Installed OLM on my Raspberry Pi k3s cluster (ARM64). I had to change the catalog image from quay.io/operatorhubio/catalog:latest to quay.io/operatorhubio/catalog:lts: with the latest tag the pod produced no logs at all (it simply wasn't running), but after switching to the lts tag the gRPC server started up and everything looked healthy. (A sketch of one way to make that change follows the Subscription manifest below.)
2. Installed my first operator:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-argocd-operator
  namespace: operators
spec:
  channel: alpha
  name: argocd-operator
  source: operatorhubio-catalog
  sourceNamespace: olm
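For reference, a rough sketch of how the catalog image can be swapped to the lts tag (the CatalogSource name and namespace here match a stock OLM install; adjust if yours differ):

$ kubectl -n olm patch catalogsource operatorhubio-catalog \
    --type merge \
    -p '{"spec":{"image":"quay.io/operatorhubio/catalog:lts"}}'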
What did you expect to see?
That the operator framework would perform its magic and install Argo CD on the cluster.
What did you see instead? Under which circumstances?
$ kubectl -n operators describe sub my-argocd-operator
...
Status:
  Catalog Health:
    Catalog Source Ref:
      API Version:       operators.coreos.com/v1alpha1
      Kind:              CatalogSource
      Name:              operatorhubio-catalog
      Namespace:         olm
      Resource Version:  622853
      UID:               aeef1f77-c29d-415c-a1bc-a726372b8ae9
    Healthy:       true
    Last Updated:  2022-07-28T11:09:02Z
  Conditions:
    Last Transition Time:  2022-07-28T11:09:02Z
    Message:               all available catalogsources are healthy
    Reason:                AllCatalogSourcesHealthy
    Status:                False
    Type:                  CatalogSourcesUnhealthy
    Last Transition Time:  2022-07-28T11:10:35Z
    Message:               bundle unpacking failed. Reason: BackoffLimitExceeded, and Message: Job has reached the specified backoff limit
    Reason:                InstallCheckFailed
    Status:                True
    Type:                  InstallPlanFailed
  Current CSV:              argocd-operator.v0.2.1
  Install Plan Generation:  1
  Install Plan Ref:
    API Version:       operators.coreos.com/v1alpha1
    Kind:              InstallPlan
    Name:              install-ztjh5
    Namespace:         operators
    Resource Version:  625442
    UID:               bbb2ad75-cc7e-41b7-a59a-b368ecf65ac2
  Installplan:
    API Version:  operators.coreos.com/v1alpha1
    Kind:         InstallPlan
    Name:         install-ztjh5
    Uuid:         bbb2ad75-cc7e-41b7-a59a-b368ecf65ac2
  Last Updated:  2022-07-28T11:10:35Z
  State:         UpgradePending
Events:          <none>
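The InstallPlan referenced in the status can be inspected directly as well, e.g.:

$ kubectl -n operators get installplan install-ztjh5 -o yaml

The more telling artifact, though, is the bundle unpack job in the olm namespace: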
$ kubectl -n olm get jobs
NAME                                                              COMPLETIONS   DURATION   AGE
a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa   0/1           40m        40m
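The pods behind this job carry a matching job-name label, so anyone reproducing this can pull per-container logs with something like the following (the pod name is a placeholder):

$ kubectl -n olm get pods -l job-name=a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
$ kubectl -n olm logs <pod-name> -c pull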
$ kubectl -n olm get job a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa -o yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2022-07-28T11:09:04Z"
  generation: 1
  labels:
    controller-uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
    job-name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
  name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
  namespace: olm
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: false
    controller: false
    kind: ConfigMap
    name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
    uid: b4865d98-0576-46e9-ae18-3f7f6b9abb5d
  resourceVersion: "625775"
  uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
spec:
  activeDeadlineSeconds: 600
  backoffLimit: 3
  completionMode: NonIndexed
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      controller-uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
  suspend: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
        job-name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
      name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
    spec:
      containers:
      - command:
        - opm
        - alpha
        - bundle
        - extract
        - -m
        - /bundle/
        - -n
        - olm
        - -c
        - a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
        - -z
        env:
        - name: CONTAINER_IMAGE
          value: quay.io/operatorhubio/argocd-operator:v0.2.1
        image: quay.io/operator-framework/upstream-opm-builder:latest
        imagePullPolicy: Always
        name: extract
        resources:
          requests:
            cpu: 10m
            memory: 50Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /bundle
          name: bundle
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - /bin/cp
        - -Rv
        - /bin/cpb
        - /util/cpb
        image: quay.io/operator-framework/olm:v0.21.2
        imagePullPolicy: IfNotPresent
        name: util
        resources:
          requests:
            cpu: 10m
            memory: 50Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /util
          name: util
      - command:
        - /util/cpb
        - /bundle
        image: quay.io/operatorhubio/argocd-operator:v0.2.1
        imagePullPolicy: Always
        name: pull
        resources:
          requests:
            cpu: 10m
            memory: 50Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /bundle
          name: bundle
        - mountPath: /util
          name: util
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: bundle
      - emptyDir: {}
        name: util
status:
  conditions:
  - lastProbeTime: "2022-07-28T11:10:33Z"
    lastTransitionTime: "2022-07-28T11:10:33Z"
    message: Job has reached the specified backoff limit
    reason: BackoffLimitExceeded
    status: "True"
    type: Failed
  failed: 4
  ready: 0
  startTime: "2022-07-28T11:09:04Z"
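Given that the catalog image itself needed an ARM64-specific tag, my suspicion is that one of the images this job runs (quay.io/operator-framework/upstream-opm-builder:latest, quay.io/operator-framework/olm:v0.21.2, or the bundle image quay.io/operatorhubio/argocd-operator:v0.2.1) ships no linux/arm64 variant. One way to check from a workstation, assuming a docker CLI with manifest support (skopeo inspect --raw would also work):

$ docker manifest inspect quay.io/operator-framework/upstream-opm-builder:latest | grep architecture
$ docker manifest inspect quay.io/operator-framework/olm:v0.21.2 | grep architecture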
Environment
- operator-lifecycle-manager version:
$ grep image base/olm.yaml
image: quay.io/operator-framework/olm:v0.21.2
imagePullPolicy: IfNotPresent
- --util-image
image: quay.io/operator-framework/olm:v0.21.2
imagePullPolicy: IfNotPresent
image: quay.io/operator-framework/olm:v0.21.2
imagePullPolicy: Always
image: quay.io/operatorhubio/catalog:lts
- Kubernetes version information:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:38:26Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2+k0s", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-07-11T06:55:47Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/arm64"}
- Kubernetes cluster kind:
k3s (v1.24.3+k3s1)
Additional context
I already looked at https://github.com/operator-framework/operator-lifecycle-manager/issues/1138, but it didn't provide a fix for my specific problem.