dir not created on node
Thought I'd try out storageos as it appears to fill a nice gap for me running k8s on a KVM cluster. I followed the dynamic provisioning guide
and got the following:
kubectl describe pod test-storageos-redis-sc-pvc
...
Normal Scheduled 18m default-scheduler Successfully assigned test-storageos-redis-sc-pvc to node2
Normal SuccessfulMountVolume 18m kubelet, node2 MountVolume.SetUp succeeded for volume "default-token-bsrwb"
Warning FailedMount 3m (x5 over 18m) kubelet, node2 MountVolume.SetUp failed for volume "pvc-6f60a607-94c8-11e8-bbee-001c42db58b8" : stat /var/lib/storageos/volumes/2bd45088-9e21-3809-ee95-790ee44f4ff0: no such file or directory
Warning FailedMount 1m (x11 over 18m) kubelet, node2 MountVolume.SetUp failed for volume "pvc-6f60a607-94c8-11e8-bbee-001c42db58b8" : no such volume
Warning FailedMount 41s (x8 over 16m) kubelet, node2 Unable to mount volumes for pod "test-storageos-redis-sc-pvc_default(b90e39a4-94c8-11e8-bbee-001c42db58b8)": timeout expired waiting for volumes to attach/mount for pod "default"/"test-storageos-redis-sc-pvc". list of unattached/unmounted volumes=[redis-data]
kubectl get storageclass
NAME PROVISIONER AGE
fast kubernetes.io/storageos 31m
kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
fast0001   Bound    pvc-6f60a607-94c8-11e8-bbee-001c42db58b8   8Gi        RWO            fast           27m
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pvc-6f60a607-94c8-11e8-bbee-001c42db58b8   8Gi        RWO            Delete           Bound    default/fast0001   fast                    26m
On the host, the volume is not created, consistent with the kubelet logs:
ls -l /var/lib/storageos/volumes/
total 0
Hi @barrymac, that is the result of running K8S without enabling MountPropagation.
Check out https://docs.storageos.com/docs/install/kubernetes/index point 3 of the prerequisites.
To summarise:
- Append the flag `--feature-gates=MountPropagation=true` to the kube-apiserver manifest, usually found under /etc/kubernetes/manifests on the master node.
- Add the flag `KUBELET_EXTRA_ARGS=--feature-gates=MountPropagation=true` to the kubelet service config on every one of your nodes. For systemd, this is usually located under /etc/systemd/system/.
After that, re-apply the manifest on the master with kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.manifest, and restart the kubelet systemd service on each of your machines.
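In case it helps, here is a sketch of those steps for a kubeadm-style setup. The drop-in filename (10-mount-propagation.conf) and paths are assumptions; adjust them for your distro and how your kubelet service is laid out:

```shell
# --- On the master node ---
# Edit /etc/kubernetes/manifests/kube-apiserver.manifest and append
# --feature-gates=MountPropagation=true to the kube-apiserver command,
# then re-apply so the static pod picks up the new flag.
kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.manifest

# --- On every node ---
# Pass the feature gate to the kubelet via a systemd drop-in
# (filename is an arbitrary choice), then reload and restart.
mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-mount-propagation.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--feature-gates=MountPropagation=true"
EOF
systemctl daemon-reload
systemctl restart kubelet
```

Once the kubelets restart with the gate enabled, the provisioned volume should appear under /var/lib/storageos/volumes/ and the pod mount should succeed.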