Error while creating an OpenShift plan on VMware
I am trying to create an OpenShift plan on VMware:
$ kcli -d create plan -f vmware-plan.yaml
Using sweaty-mellen as name of the plan
cluster:
- vmware-test:
client: vmware-qpk58
ctlplanes: 3
domain: openshift-vmware.local
kubetype: openshift
tag: 4.18
type: cluster
version: stable
workers: 6
Deploying Cluster entries...
Deploying Cluster vmware-test...
Couldnt find topfolder /SDDC-Datacenter/Workloads
Couldn't connect to client vmware-qpk58. Leaving...
The client definition is as follows:
vmware-qpk58:
type: vsphere
host: ************
user: "********"
password: ?secret
datacenter: SDDC-Datacenter
cluster: "Cluster-1"
pool: "workload_share_dwPsq"
basefolder: /SDDC-Datacenter/Workloads/sandbox-qpk58
import_network: segment-sandbox-qpk58
I set the path above following the suggestion in https://github.com/karmab/kcli/issues/611, which seems to involve a similar environment. However, when querying this VMware environment with govc, the path is different:
$ govc ls /SDDC-Datacenter/vm/Workloads
/SDDC-Datacenter/vm/Workloads/sandbox-qpk58
But even after setting the basefolder to /SDDC-Datacenter/vm/Workloads/sandbox-qpk58, the error is the same:
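The two paths differ only in the implicit "vm" root folder that govc displays in inventory paths (/<datacenter>/vm/<folder>), which some tools omit. A small sketch (an assumption for illustration, not kcli's actual code) of normalizing between the two forms:

```python
def strip_vm_root(path: str, datacenter: str) -> str:
    """Drop the implicit 'vm' root folder from a govc-style inventory path.

    govc shows /<datacenter>/vm/<folder>; a tool expecting the folder path
    without that segment would need /<datacenter>/<folder> instead.
    """
    prefix = f"/{datacenter}/vm/"
    if path.startswith(prefix):
        return f"/{datacenter}/" + path[len(prefix):]
    return path

# The path govc reports above, with the 'vm' segment removed:
print(strip_vm_root("/SDDC-Datacenter/vm/Workloads/sandbox-qpk58", "SDDC-Datacenter"))
```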
$ kcli -d create plan -f vmware-plan.yaml
Using jovial-ablonde as name of the plan
cluster:
- vmware-test:
client: vmware-qpk58
ctlplanes: 3
domain: openshift-vmware.local
kubetype: openshift
tag: 4.18
type: cluster
version: stable
workers: 6
Deploying Cluster entries...
Deploying Cluster vmware-test...
Couldnt find topfolder /SDDC-Datacenter/vm/Workloads
Couldn't connect to client vmware-qpk58. Leaving...
Does something like listing VMs work without specifying the topfolder? Also, are you sure your user has the proper rights to access the target folder?
Not specifying the basefolder was a very good idea. Without it, kcli listed all the VMs residing in the folder I had previously specified as the basefolder. After commenting it out, there's another problem:
$ kcli -d create plan -f vmware-plan.yaml
Using silly-superduper as name of the plan
cluster:
- vmware-test:
client: vmware-qpk58
ctlplanes: 3
domain: openshift-vmware.local
kubetype: openshift
tag: 4.18
type: cluster
version: stable
workers: 6
Deploying Cluster entries...
Deploying Cluster vmware-test...
Deploying on client vmware-qpk58
Deploying cluster vmware-test
Using stable version
Network default not found
Issue getting network default
I do not understand why kcli tries to pick up the "default" network when "import_network" is specified in the client configuration, or maybe I am missing something.
It uses the default network because that is the default value when deploying a kube cluster. import_network, which you set in your conf, is only used when importing an image.
Anyway, just set network to the same value in your plan.
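Concretely, the suggestion amounts to adding a network key to the cluster entry in the plan file, pointing at the same segment as import_network in the client config (names taken from this thread):

```yaml
# plan excerpt: set network explicitly so kcli does not fall back to "default"
cluster:
  - vmware-test:
      network: segment-sandbox-qpk58
```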
Setting the network in the plan moves things forward, but now there's another error:
$ kcli -d create plan -f vmware-plan.yaml
Using summer-freezer as name of the plan
cluster:
- qpk58:
api_ip: 192.168.33.201
client: vmware-qpk58
ctlplanes: 3
domain: dynamic.redhatworkshops.io
ingress_ip: 192.168.33.202
kubetype: openshift
network: segment-sandbox-qpk58
pull_secret: pull-secret.json
tag: 4.18
type: cluster
version: stable
workers: 6
Deploying Cluster entries...
Deploying Cluster qpk58...
Deploying on client vmware-qpk58
Deploying cluster qpk58
Using stable version
Using existing openshift-install found in your PATH
Using installer version 4.18.4
Traceback (most recent call last):
File "/usr/bin/kcli", line 33, in <module>
sys.exit(load_entry_point('kcli==99.0', 'console_scripts', 'kcli')())
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/usr/lib/python3.13/site-packages/kvirt/cli.py", line 5268, in cli
args.func(args)
~~~~~~~~~^^^^^^
File "/usr/lib/python3.13/site-packages/kvirt/cli.py", line 2069, in create_plan
result = config.plan(plan, ansible=ansible, url=url, path=path, container=container, inputfile=inputfile,
overrides=overrides, threaded=threaded)
File "/usr/lib/python3.13/site-packages/kvirt/config.py", line 1929, in plan
result = currentconfig.create_kube(plan, kubetype, overrides=kube_overrides)
File "/usr/lib/python3.13/site-packages/kvirt/config.py", line 2648, in create_kube
result = self.create_kube_openshift(cluster, overrides)
File "/usr/lib/python3.13/site-packages/kvirt/config.py", line 2706, in create_kube_openshift
return openshift.create(self, plandir, cluster, overrides, dnsconfig=dnsconfig)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.13/site-packages/kvirt/cluster/openshift/__init__.py", line 998, in create
images = [v for v in k.volumes() if image in v]
~~~~~~~~~^^
File "/usr/lib/python3.13/site-packages/kvirt/providers/vsphere/__init__.py", line 998, in volumes
prefix = '' if self.restricted else f'{dev.backing.datastore.name}/'
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.13/site-packages/pyVmomi/VmomiSupport.py", line 700, in __call__
return self.f(*args, **kwargs)
~~~~~~^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.13/site-packages/pyVmomi/VmomiSupport.py", line 520, in _InvokeAccessor
return self._stub.InvokeAccessor(self, info)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/usr/lib/python3.13/site-packages/pyVmomi/StubAdapterAccessorImpl.py", line 47, in InvokeAccessor
raise objectContent.missingSet[0].fault
pyVmomi.VmomiSupport.vim.fault.NoPermission: (vim.fault.NoPermission) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = '',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) [],
object = 'vim.Datastore:datastore-1011',
privilegeId = 'System.View'
}
I have no problems creating files on this datastore using govc with the same credentials as in the client configuration.
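The fault names a missing System.View privilege on the datastore object itself (vim.Datastore:datastore-1011), which is a different check than being able to create files on it. Assuming the datastore is the one referenced as the pool in the client config above (path and name are guesses for illustration), one way to inspect its permissions with govc might be:

```shell
# Hypothetical check: list permissions on the datastore object
# to see whether the user's role grants System.View there
govc permissions.ls /SDDC-Datacenter/datastore/workload_share_dwPsq
```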
I need to reproduce this.