Unable to attach or mount NFS volumes
/kind bug
**1. What kops version are you running?** The command `kops version`, will display this information.

```
Version 1.23.0 (git-a067cd7742a497a5c512762b9880664d865289f1)
```
**2. What Kubernetes version are you running?** `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.

```
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:52:18Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
```
**3. What cloud provider are you using?** AWS

**4. What commands did you run? What is the simplest way to reproduce this issue?**

Git clone the nfs example from here: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs

Add this annotation to the PVC (under `metadata`):

```yaml
annotations:
  volume.beta.kubernetes.io/storage-class: ""
```

Then follow the instructions. I tried using both the IP address of the `nfs-server` service in `nfs-pv.yaml` and the fully qualified name.

**5. What happened after the commands executed?**

The PV and PVC are successfully created and bound. However, the `nfs-server` deployment is stuck in `ContainerCreating` status with the following message (after running `kubectl describe` on the pod):
```
Normal   Scheduled    4m4s  default-scheduler                       Successfully assigned nfs/nfs-server-97b848d-smqjb to ip-172-20-51-225.ec2.internal
Warning  FailedMount  2m1s  kubelet, ip-172-20-51-225.ec2.internal  Unable to attach or mount volumes: unmounted volumes=[mypvc], unattached volumes=[mypvc kube-api-access-gkvpt]: timed out waiting for the condition
Warning  FailedMount  113s  kubelet, ip-172-20-51-225.ec2.internal  MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nfsvers=4.2 100.67.141.247:/ /var/lib/kubelet/pods/ab18e914-ad41-4ca8-8557-5b1d9e679afe/volumes/kubernetes.io~nfs/nfs
Output: mount.nfs: Connection timed out
```
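Since the failure is a plain `mount.nfs: Connection timed out`, a quick way to narrow it down is to probe TCP reachability of the NFS port from inside the cluster (a debugging sketch, not part of the original report; the service IP is taken from the mount error above, and 2049 is assumed as the standard NFSv4 port):

```
# Throwaway busybox pod: probe port 2049 on the nfs-server service IP.
# "open" means the server is reachable and the problem lies elsewhere;
# a timeout points at networking (e.g. security groups or a NetworkPolicy).
kubectl run nfs-check --rm -it --restart=Never --image=busybox -- \
  nc -zv -w 5 100.67.141.247 2049
```

It is also worth confirming that the `nfs-server` pod is `Running` and that the service actually has endpoints (`kubectl get endpoints nfs-server`); an empty endpoints list would produce the same connection timeout.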
**6. What did you expect to happen?** The server pod to start successfully.
**7. Please provide your cluster manifest.** Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

Using cluster from kubectl context: dev.k8s.local
```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2022-03-26T17:29:02Z"
  name: dev.k8s.local
spec:
  api:
    loadBalancer:
      class: Classic
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://ankur-dev-k8s-state-store/dev.k8s.local
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-east-1a
      name: a
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-east-1a
      name: a
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
    useServiceAccountExternalPermissions: true
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: 1.23.5
  masterPublicName: api.dev.k8s.local
  networkCIDR: 172.20.0.0/16
  networking:
    calico: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  serviceAccountIssuerDiscovery:
    discoveryStore: s3://kops-oidc-auth/dev.k8s.local
    enableAWSOIDCProvider: true
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - cidr: 172.20.32.0/19
    name: us-east-1a
    type: Public
    zone: us-east-1a
  topology:
    dns:
      type: Public
    masters: public
    nodes: public
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-03-26T17:42:36Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: dev.k8s.local
  name: general-worker-nodes-ig
spec:
  cloudLabels:
    k8s.io/cluster-autoscaler/dev.k8s.local: "1"
    k8s.io/cluster-autoscaler/enabled: "1"
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220308
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  mixedInstancesPolicy:
    instances:
    - t2.medium
    - t3.medium
    - t3a.medium
    onDemandAboveBase: 0
    onDemandBase: 0
    spotAllocationStrategy: capacity-optimized
  nodeLabels:
    kops.k8s.io/instancegroup: general-worker-nodes-ig
  role: Node
  rootVolumeEncryption: false
  rootVolumeSize: 64
  subnets:
  - us-east-1a
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-03-26T17:29:02Z"
  labels:
    kops.k8s.io/cluster: dev.k8s.local
  name: master-us-east-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220308
  instanceMetadata:
    httpPutResponseHopLimit: 3
    httpTokens: required
  machineType: t2.small
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1a
  role: Master
  subnets:
  - us-east-1a
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-03-26T17:51:46Z"
  generation: 44
  labels:
    kops.k8s.io/cluster: dev.k8s.local
  name: raynodes-ig
spec:
  cloudLabels:
    k8s.io/cluster-autoscaler/dev.k8s.local: "1"
    k8s.io/cluster-autoscaler/enabled: "1"
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220308
  machineType: t2.xlarge
  maxSize: 0
  minSize: 0
  mixedInstancesPolicy:
    instances:
    - t2.xlarge
    onDemandAboveBase: 0
    onDemandBase: 0
    spotAllocationStrategy: capacity-optimized
  nodeLabels:
    kops.k8s.io/instancegroup: raynodes-ig
  role: Node
  rootVolumeEncryption: false
  rootVolumeSize: 64
  subnets:
  - us-east-1a
```
**8. Please run the commands with most verbose logging by adding the `-v 10` flag.** Paste the logs into this report, or in a gist and provide the gist link here.
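For reference, the verbose run requested here would look something like this (a sketch; the cluster name comes from the manifest above, and `validate cluster` is just one example of a subcommand to run verbosely):

```
kops validate cluster --name dev.k8s.local -v 10
```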
**9. Anything else we need to know?**
It is very unclear to me where you think kOps is doing something incorrectly. The problem seems to be with an application you installed yourself, not with any core Kubernetes components or other aspects of the cluster managed by kOps.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.

In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues and PRs according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.