APIManager Installation - rake aborted! Writable paths check failed
Environment:
- OpenShift 4.8.10 on RHV 4.4.3
- 3scale Operator - 0.7.0
Cluster Nodes Info:

Issue Description:
After successfully installing the 3scale operator from the OperatorHub, creating an APIManager instance fails: the deploy pre-hook pod fails repeatedly, preventing the APIManager instance from deploying completely.
The APIManager configuration used:
```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: example-apimanager
  namespace: demo-integration-001
spec:
  resourceRequirementsEnabled: true
  wildcardDomain: apps.okd.thekeunster.local
```
SC, PV, PVCs:
oc get sc,pvc,pv
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/ovirt-csi-sc (default) csi.ovirt.org Delete Immediate false 4d7h
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/backend-redis-storage Bound pvc-d921d1d3-53f9-42ca-959a-92f743b9532b 1Gi RWO ovirt-csi-sc 11s
persistentvolumeclaim/mysql-storage Bound pvc-224db5d2-bc05-48b7-95c9-7b62ffb74a20 1Gi RWO ovirt-csi-sc 11s
persistentvolumeclaim/system-redis-storage Bound pvc-d31fed60-31ef-44b9-b767-e8e5f0faa3d4 1Gi RWO ovirt-csi-sc 11s
persistentvolumeclaim/system-storage Bound pvc-55794ed7-e1da-4351-87c9-3b9deaf60395 100Mi RWX ovirt-csi-sc 10s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-224db5d2-bc05-48b7-95c9-7b62ffb74a20 1Gi RWO Delete Bound demo-integration-001/mysql-storage ovirt-csi-sc 10s
persistentvolume/pvc-41041b9e-1778-4527-bfb4-3e2610e43b3d 100Gi RWO Delete Bound dog-detector/data-0-object-detection-kafka-0 ovirt-csi-sc 3d2h
persistentvolume/pvc-55794ed7-e1da-4351-87c9-3b9deaf60395 100Mi RWX Delete Bound demo-integration-001/system-storage ovirt-csi-sc 9s
persistentvolume/pvc-d31fed60-31ef-44b9-b767-e8e5f0faa3d4 1Gi RWO Delete Bound demo-integration-001/system-redis-storage ovirt-csi-sc 10s
persistentvolume/pvc-d921d1d3-53f9-42ca-959a-92f743b9532b 1Gi RWO Delete Bound demo-integration-001/backend-redis-storage ovirt-csi-sc 10s
persistentvolume/pvc-e202d9d2-81db-41b5-b77f-1cc806d6fecb 100Gi RWO Delete Bound openshift-image-registry/image-registry-storage ovirt-csi-sc 4d7h
persistentvolume/pvc-f6b85320-82e1-4640-81ab-99aee7a05235 100Gi RWO Delete Bound dog-detector/data-object-detection-zookeeper-0 ovirt-csi-sc 3d2h
Pods that failed or are stuck in Init:

The system-app-1-hook-pre pod logs the following message:
I, [2021-09-13T18:02:08.080740 #1] INFO -- : ActiveMerchant MODE set to 'production'
W, [2021-09-13T18:02:08.135038 #1] WARN -- [Bugsnag]: No valid API key has been set, notifications will not be sent
I, [2021-09-13T18:02:08.419811 #1] INFO -- : [Core] Using http://backend-listener:3000/internal/ as URL
OpenIdAuthentication.store is nil. Using in-memory store.
Creating scope :admins. Overwriting existing method User.admins.
Creating scope :by_name. Overwriting existing method Cinstance.by_name.
[core] non-native log levels verbose, notice, critical emulated using UNKNOWN severity
Backend Internal API version 3.1.0 status: ok
Connected to mysql2://root@system-mysql/system
Connected to redis://system-redis:6379/1
rake aborted!
Writable paths check failed:
- /opt/system/public/system
/opt/system/lib/tasks/openshift.rake:26:in `block (2 levels) in <top (required)>'
/opt/system/vendor/bundle/ruby/2.5.0/gems/bugsnag-6.11.1/lib/bugsnag/integrations/rake.rb:18:in `execute_with_bugsnag'
/opt/system/lib/tasks/openshift.rake:4:in `block (2 levels) in <top (required)>'
/opt/system/vendor/bundle/ruby/2.5.0/gems/bugsnag-6.11.1/lib/bugsnag/integrations/rake.rb:18:in `execute_with_bugsnag'
/opt/system/vendor/bundle/ruby/2.5.0/gems/rake-13.0.1/exe/rake:27:in `<top (required)>'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/lib/bundler/cli/exec.rb:74:in `load'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/lib/bundler/cli/exec.rb:74:in `kernel_load'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/lib/bundler/cli/exec.rb:28:in `run'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/lib/bundler/cli.rb:463:in `exec'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/lib/bundler/vendor/thor/lib/thor/invocation.rb:126:in `invoke_command'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/lib/bundler/vendor/thor/lib/thor.rb:387:in `dispatch'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/lib/bundler/cli.rb:27:in `dispatch'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/lib/bundler/vendor/thor/lib/thor/base.rb:466:in `start'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/lib/bundler/cli.rb:18:in `start'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/exe/bundle:30:in `block in <top (required)>'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/lib/bundler/friendly_errors.rb:124:in `with_friendly_errors'
/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/bundler-1.17.3/exe/bundle:22:in `<top (required)>'
/opt/rh/rh-ruby25/root/usr/local/bin/bundle:23:in `load'
/opt/rh/rh-ruby25/root/usr/local/bin/bundle:23:in `<main>'
Tasks: TOP => openshift:check_writable
(See full trace by running task with --trace)
+1 Exact same issue on OCP 4.5 / OKD 4.5
We had this issue. Checking permissions on the mount path /opt/system/public/system, we found that the group was wrong and did not have write permissions. Deleting the PV bound to the system-storage PVC and letting a new one be provisioned solved it: the mount path now has write permissions and its group is 1000.
We had some issues with the storage service in the days before this, so we think it was related to that.
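For anyone hitting the same symptom, a rough sketch of how the check and recreation above can be done. This is an assumption based on the comment, not an official procedure: the namespace and PVC name come from this thread's output, the image and throwaway pod are illustrative, and since the system-app pods never start, the PVC is mounted in a temporary pod instead:

```shell
# Mount the system-storage PVC in a throwaway pod and inspect ownership
# of the volume root (adjust namespace/image for your cluster)
oc -n demo-integration-001 run pvc-check --rm -it --restart=Never \
  --image=registry.access.redhat.com/ubi8/ubi-minimal \
  --overrides='{"spec":{"volumes":[{"name":"v","persistentVolumeClaim":{"claimName":"system-storage"}}],"containers":[{"name":"pvc-check","image":"registry.access.redhat.com/ubi8/ubi-minimal","command":["ls","-ld","/mnt"],"volumeMounts":[{"name":"v","mountPath":"/mnt"}]}]}}'

# If the group or permissions look wrong, delete the PVC so the
# provisioner binds a fresh PV (reclaim policy Delete removes the
# old volume). Recreate the PVC afterwards if the operator does not.
oc -n demo-integration-001 delete pvc system-storage
```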
+1 Exact same issue on OCP 4.10
I am afraid we cannot do much to help you with this. This issue is tightly coupled to the RWX storage provided by the OCP deployment. In our testing environment, RWX works fine with the DeploymentConfigs created by the operator.
The only thing I can think of is to experiment with the pod's security context to configure the volume permission and ownership change policy until you find a configuration that works for your specific environment. The pod's security context can be customized in the system-app DeploymentConfig, in the pod template spec (.spec.template.spec.securityContext). The operator allows you to set a custom value for the security context directly in the DC (it will not revert changes you add there).
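As an illustration of that suggestion (not a verified fix), a minimal sketch of a pod-level security context patch for the system-app DeploymentConfig. The fsGroup value 1000 is taken from the earlier comment about the working mount's group; fsGroupChangePolicy exists as a PodSecurityContext field on recent Kubernetes versions, and whether these values help depends entirely on your storage backend:

```yaml
# Illustrative patch to the system-app DC pod template
spec:
  template:
    spec:
      securityContext:
        fsGroup: 1000                        # group reported to own the working mount path
        fsGroupChangePolicy: OnRootMismatch  # only change ownership when the volume root mismatches
```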
If you succeed, let us know so we can add it to the documentation for future reference.
Closing as no response since May 2022