Mounting a PersistentVolumeClaim as a volume within a Pod's container spec doesn't seem to work.
What happened (please include outputs or screenshots):
A pod is created from the manifest below, but the volume meant to target a persistent_volume_claim is instead created as EmptyDir, and container volume_mounts meant to target this volume are skipped entirely.
Specifically, when I run kubectl describe pod/test-pod, its container has no mount associated with the target name, and I see the below for the volume that should be a PersistentVolumeClaim:
Volumes:
  vulcan-cache:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
What you expected to happen: I expect the pod to be created with
Volumes:
  vulcan-cache:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  vulcan-cache-claim
    ReadOnly:   false
and its container to have
Mounts:
  /running from vulcan-cache (rw)
I was able to successfully produce this by spinning the pod up from an equivalent yml file and using kubectl directly.
How to reproduce it (as minimally and precisely as possible):
- Set up a persistent volume and persistent volume claim. Create yml files containing the below...
Persistent volume config: (Adjust 'path' as needed to something that exists on your k8s node)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vulcan-cache
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  hostPath:
    path: "/home"
    type: Directory
Persistent Volume Claim config:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vulcan-cache-claim
  namespace: default
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  volumeName: vulcan-cache
Then run kubectl create -f path/to/each/config.yml for both files.
- Use the kubernetes Python client to try to spin up a pod from this manifest (assuming you don't need or want a description of how to establish kub_cli here).
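For reference, kub_cli here is just a CoreV1Api client; a minimal sketch of one way to establish it, assuming a local kubeconfig is available:

from kubernetes import client, config

# Load credentials and cluster info from the local kubeconfig (e.g. ~/.kube/config);
# config.load_incluster_config() would be used instead when running inside a cluster.
config.load_kube_config()

kub_cli = client.CoreV1Api()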
pod_manifest = {
    'apiVersion': 'v1',
    'kind': 'Pod',
    'metadata': {
        'name': "test-pod",
        'namespace': 'default',
    },
    'spec': {
        "volumes": [{
            "name": "vulcan-cache",
            "persistent_volume_claim": {"claim_name": "vulcan-cache-claim"},
        }],
        'containers': [{
            'name': 'test-container',
            'image': 'nginx',
            'image_pull_policy': 'IfNotPresent',
            "args": ["ls", "/running"],
            "volume_mounts": [{
                "name": "vulcan-cache",
                "mount_path": "/running",
            }],
        }],
    },
}
kub_cli.create_namespaced_pod(body=pod_manifest, namespace='default')
Anything else we need to know?:
I am fairly new to working with Kubernetes, but I believe this is the intended way to mount persistent storage into pod containers: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
Environment:
- Kubernetes version (kubectl version):
  Client Version: v1.29.3
  Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  Server Version: v1.28.3
- OS (e.g., MacOS 10.13.6): Linux, Pop!_OS 22.04
- Python version (python --version): 3.10.12 and 3.8.17 (MRE tested directly only with 3.10.12)
- Python client version (pip list | grep kubernetes): 29.0.0
/assign
I will try to reproduce the issue and find the root cause.
After debugging, it looks like the function sanitize_for_serialization can't serialize the pod_manifest properly in this case.
For example, the field persistent_volume_claim isn't mapped to persistentVolumeClaim, so persistent_volume_claim isn't recognized by Kubernetes and the volume type defaults to emptyDir.
I will investigate further to figure it out.
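For illustration, a rough sketch of the difference as I currently understand it (not yet confirmed as the root cause): sanitize_for_serialization recurses into plain dicts without renaming their keys, whereas generated model objects are serialized through their attribute_map, which is where the snake_case to camelCase mapping happens.

from kubernetes import client

api_client = client.ApiClient()

# A raw dict is passed through with its keys unchanged, so the snake_case key
# never becomes persistentVolumeClaim; the API server drops the unknown field
# and the volume presumably defaults to an emptyDir, as observed above.
print(api_client.sanitize_for_serialization(
    {"name": "vulcan-cache",
     "persistent_volume_claim": {"claim_name": "vulcan-cache-claim"}}))
# {'name': 'vulcan-cache', 'persistent_volume_claim': {'claim_name': 'vulcan-cache-claim'}}

# A generated model object goes through its attribute_map and comes out in camelCase.
print(api_client.sanitize_for_serialization(
    client.V1Volume(
        name="vulcan-cache",
        persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
            claim_name="vulcan-cache-claim"))))
# {'name': 'vulcan-cache', 'persistentVolumeClaim': {'claimName': 'vulcan-cache-claim'}}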
Alternatively, I think there is another way that you can try, by utilizing the V1Volume and V1PersistentVolumeClaim ... to compose the pod you wanted, similar to this example
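A minimal sketch of that typed-object approach, using the names from this issue (untested; kub_cli is the same CoreV1Api client as in the report above):

from kubernetes import client

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="test-pod", namespace="default"),
    spec=client.V1PodSpec(
        volumes=[client.V1Volume(
            name="vulcan-cache",
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                claim_name="vulcan-cache-claim"))],
        containers=[client.V1Container(
            name="test-container",
            image="nginx",
            image_pull_policy="IfNotPresent",
            args=["ls", "/running"],
            volume_mounts=[client.V1VolumeMount(
                name="vulcan-cache",
                mount_path="/running")])]))

# snake_case keyword arguments are fine here, because the model objects are
# converted to camelCase JSON during serialization.
kub_cli.create_namespaced_pod(body=pod, namespace="default")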
@dtm2451, you need to modify the pod_manifest: all the snake_case fields must be converted to camelCase.
e.g. persistent_volume_claim => persistentVolumeClaim
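For example, a sketch of the corrected raw-dict manifest (only the key casing changes from your original; kub_cli is your client as before):

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "test-pod", "namespace": "default"},
    "spec": {
        "volumes": [{
            "name": "vulcan-cache",
            # camelCase, exactly as it would appear in a kubectl YAML manifest
            "persistentVolumeClaim": {"claimName": "vulcan-cache-claim"},
        }],
        "containers": [{
            "name": "test-container",
            "image": "nginx",
            "imagePullPolicy": "IfNotPresent",
            "args": ["ls", "/running"],
            "volumeMounts": [{
                "name": "vulcan-cache",
                "mountPath": "/running",
            }],
        }],
    },
}

kub_cli.create_namespaced_pod(body=pod_manifest, namespace="default")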
Or you can choose the alternative I mentioned in my former comment.
Alternatively, I think there is another way that you can try, by utilizing the V1Volume and V1PersistentVolumeClaim ... to compose the pod you wanted, similar to this example
Oh oh! Thank you so much for your time investigating.
Sounds like this is user error on my side then, but adding a warning would be nice! I didn't catch that I'd left this portion of my manifest in snake_case rather than camelCase, as snake_case is clearly the way all key names work in the python client! Some warning when elements are skipped due to such a conversion failure would be VERY nice!
Wait actually, I responded too quickly there.
My understanding of the python client is that fields are designed around snake_case conversions of what one would normally provide in camelCase directly to kubectl. That is what I built towards here -- exactly the path you point towards in:
Alternatively, I think there is another way that you can try, by utilizing the V1Volume and V1PersistentVolumeClaim ... to compose the pod you wanted, similar to this example
So it does still seem like a bug in the client to me if the snake_case version of persistent_volume_claim is incorrect here!
FWIW, contrary to my understanding of the documentation (though perhaps my understanding is what's wrong?), when I swap to camelCase (not the seemingly intended snake_case) for the entirety of my pod_manifest, I can produce the pod I want from kub_cli.create_namespaced_pod(body=pod_manifest, namespace='default').
For example, in what I understand to be documentation of how to define a V1Volume for the python client, the field "persistent_volume_claim" (not "persistentVolumeClaim") is typed as V1PersistentVolumeClaimVolumeSource, and following that link we also find "claim_name" and "read_only" fields (not "claimName" and "readOnly").
@dtm2451, I couldn't agree with you more that the fields are designed around snake_case. For this case, the difference is the type of the request body: if the body is pure JSON (e.g. hard-coded), the python client works the same way kubectl does, and for this kind of case it's not reasonable to modify the JSON via the python client. If the body is a Kubernetes resource object instantiated by the client's functions, then it does make sense for snake_case to be supported. I hope my understanding answers your question!
I'm not sure I quite follow the logic behind
it's not reasonable to modify the JSON via the python client
fully. Specifically, the case here is a Python dict, which is of course similar to JSON yet fully Python-native. I suppose I'm simply curious for more detail on why it becomes unreasonable for the client to parse and modify it. Is there a specific function I should be passing the pod_manifest dict through before handing it to create_namespaced_pod, perhaps?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.