emptyDir volume in `podman kube` not shared among containers in pod
Issue Description
When running a Kubernetes pod and volume-mounting a tmpfs-backed (`medium: 'Memory'`) emptyDir volume into multiple containers, each container is presented with a different tmpfs. With `medium: ''`, i.e. disk-backed (on btrfs, in my case), it works as expected.
Steps to reproduce the issue
Run `podman kube play podman-emptydir-bug.yml` with the following YAML:
```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: podman-emptydir-bug
spec:
  initContainers:
    - name: create-test-file
      image: docker.io/library/alpine:3.21.0
      command: ['sh', '-xc', 'mount | grep /app; echo hello >/app/x-test-file; ls -al /app']
      volumeMounts:
        - name: podman-emptydir-bug-temp
          mountPath: /app
  containers:
    - name: use-test-file
      image: docker.io/library/alpine:3.21.0
      command: ['sh', '-xc', 'mount | grep /app; ls -al /app; cat /app/x-test-file; sleep 60']
      volumeMounts:
        - name: podman-emptydir-bug-temp
          mountPath: /app
  volumes:
    - name: podman-emptydir-bug-temp
      # FIXME: create-test-file container and use-test-file container see different tmpfs
      emptyDir: { medium: 'Memory' }
      # NOTE: filesystem-backed emptyDir does work as expected
      #emptyDir: { medium: '' }
```
This creates a volume named `podman-emptydir-bug-temp`; the init container `create-test-file` writes a file into it, which should then be visible in the main `use-test-file` container.
Clean up with `podman kube down podman-emptydir-bug.yml`.
Describe the results you received
With `emptyDir: { medium: 'Memory' }`, I see the following in the journal:
Jan 03 20:54:16 archlinux podman[4156451]: 2025-01-03 20:54:16.131689935 +0100 CET m=+0.203410974 container init 83253e16cd5668cac3e2e7ea239af7385574f782ca435d9a76376ba623c0d09b (image=localhost/podman-pause:5.3.1-1732225906, name=56135e9accd8-infra, pod_id=56135e9accd8dd8109d2f51f83d300faaf105e98e47bcc11fccd37d97e9c4a65, io.buildah.version=1.38.0)
Jan 03 20:54:16 archlinux podman[4156451]: 2025-01-03 20:54:16.133495745 +0100 CET m=+0.205216784 container start 83253e16cd5668cac3e2e7ea239af7385574f782ca435d9a76376ba623c0d09b (image=localhost/podman-pause:5.3.1-1732225906, name=56135e9accd8-infra, pod_id=56135e9accd8dd8109d2f51f83d300faaf105e98e47bcc11fccd37d97e9c4a65, io.buildah.version=1.38.0)
Jan 03 20:54:16 archlinux systemd[1225]: Started libpod-conmon-2020ba04292554b1a5dce102011e18dbe35f5739813d4a4f9d2a80a17f1cf59f.scope.
Jan 03 20:54:16 archlinux systemd[1225]: Started libcrun container.
Jan 03 20:54:16 archlinux podman[4156451]: 2025-01-03 20:54:16.152648111 +0100 CET m=+0.224369150 container init 2020ba04292554b1a5dce102011e18dbe35f5739813d4a4f9d2a80a17f1cf59f (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-create-test-file, pod_id=56135e9accd8dd8109d2f51f83d300faaf105e98e47bcc11fccd37d97e9c4a65)
Jan 03 20:54:16 archlinux podman[4156451]: 2025-01-03 20:54:16.154214746 +0100 CET m=+0.225935785 container start 2020ba04292554b1a5dce102011e18dbe35f5739813d4a4f9d2a80a17f1cf59f (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-create-test-file, pod_id=56135e9accd8dd8109d2f51f83d300faaf105e98e47bcc11fccd37d97e9c4a65)
Jan 03 20:54:16 archlinux podman-emptydir-bug-create-test-file[4156521]: + mount
Jan 03 20:54:16 archlinux podman-emptydir-bug-create-test-file[4156521]: + grep /app
Jan 03 20:54:16 archlinux podman-emptydir-bug-create-test-file[4156521]: tmpfs on /app type tmpfs (rw,nosuid,nodev,relatime,uid=1000,gid=1000,inode64)
Jan 03 20:54:16 archlinux podman-emptydir-bug-create-test-file[4156521]: + echo hello
Jan 03 20:54:16 archlinux podman-emptydir-bug-create-test-file[4156521]: + ls -al /app
Jan 03 20:54:16 archlinux podman-emptydir-bug-create-test-file[4156521]: total 4
Jan 03 20:54:16 archlinux podman-emptydir-bug-create-test-file[4156521]: drwxrwxrwt 2 root root 60 Jan 3 19:54 .
Jan 03 20:54:16 archlinux podman-emptydir-bug-create-test-file[4156521]: dr-xr-xr-x 1 root root 120 Jan 3 19:54 ..
Jan 03 20:54:16 archlinux podman-emptydir-bug-create-test-file[4156521]: -rw-r--r-- 1 root root 6 Jan 3 19:54 x-test-file
Jan 03 20:54:16 archlinux podman[4156527]: 2025-01-03 20:54:16.171975048 +0100 CET m=+0.010385887 container died 2020ba04292554b1a5dce102011e18dbe35f5739813d4a4f9d2a80a17f1cf59f (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-create-test-file)
Jan 03 20:54:16 archlinux podman[4156527]: 2025-01-03 20:54:16.186457744 +0100 CET m=+0.024868573 container cleanup 2020ba04292554b1a5dce102011e18dbe35f5739813d4a4f9d2a80a17f1cf59f (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-create-test-file, pod_id=56135e9accd8dd8109d2f51f83d300faaf105e98e47bcc11fccd37d97e9c4a65)
Jan 03 20:54:16 archlinux podman[4156451]: 2025-01-03 20:54:16.212080179 +0100 CET m=+0.283801218 container remove 2020ba04292554b1a5dce102011e18dbe35f5739813d4a4f9d2a80a17f1cf59f (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-create-test-file, pod_id=56135e9accd8dd8109d2f51f83d300faaf105e98e47bcc11fccd37d97e9c4a65)
Jan 03 20:54:16 archlinux systemd[1225]: Started libpod-conmon-f0ee1b2171297f7d9324f2e66d5755500ca44cc45fe8405461639752437e36ea.scope.
Jan 03 20:54:16 archlinux systemd[1225]: Started libcrun container.
Jan 03 20:54:16 archlinux podman[4156451]: 2025-01-03 20:54:16.266620857 +0100 CET m=+0.338341896 container init f0ee1b2171297f7d9324f2e66d5755500ca44cc45fe8405461639752437e36ea (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-use-test-file, pod_id=56135e9accd8dd8109d2f51f83d300faaf105e98e47bcc11fccd37d97e9c4a65)
Jan 03 20:54:16 archlinux podman[4156451]: 2025-01-03 20:54:16.268472193 +0100 CET m=+0.340193232 container start f0ee1b2171297f7d9324f2e66d5755500ca44cc45fe8405461639752437e36ea (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-use-test-file, pod_id=56135e9accd8dd8109d2f51f83d300faaf105e98e47bcc11fccd37d97e9c4a65)
Jan 03 20:54:16 archlinux podman-emptydir-bug-use-test-file[4156548]: + mount
Jan 03 20:54:16 archlinux podman-emptydir-bug-use-test-file[4156548]: + grep /app
Jan 03 20:54:16 archlinux podman-emptydir-bug-use-test-file[4156548]: tmpfs on /app type tmpfs (rw,nosuid,nodev,relatime,uid=1000,gid=1000,inode64)
Jan 03 20:54:16 archlinux podman-emptydir-bug-use-test-file[4156548]: + ls -al /app
Jan 03 20:54:16 archlinux podman-emptydir-bug-use-test-file[4156548]: total 0
Jan 03 20:54:16 archlinux podman-emptydir-bug-use-test-file[4156548]: drwxrwxrwt 2 root root 40 Jan 3 19:54 .
Jan 03 20:54:16 archlinux podman-emptydir-bug-use-test-file[4156548]: dr-xr-xr-x 1 root root 120 Jan 3 19:54 ..
Jan 03 20:54:16 archlinux podman-emptydir-bug-use-test-file[4156548]: + cat /app/x-test-file
Jan 03 20:54:16 archlinux podman[4156451]: 2025-01-03 20:54:16.271281479 +0100 CET m=+0.343002508 pod start 56135e9accd8dd8109d2f51f83d300faaf105e98e47bcc11fccd37d97e9c4a65 (image=, name=podman-emptydir-bug)
Jan 03 20:54:16 archlinux podman-emptydir-bug-use-test-file[4156548]: cat: can't open '/app/x-test-file': No such file or directory
Jan 03 20:54:16 archlinux podman-emptydir-bug-use-test-file[4156548]: + sleep 60
Describe the results you expected
Expected output, with `emptyDir: { medium: '' }`:
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.13607482 +0100 CET m=+0.077134571 container create bb627fad0239303d85d42bb8e262d4a51e8bacff6d3772ff2a11ec31be577838 (image=localhost/podman-pause:5.3.1-1732225906, name=2d5d6ed99bf3-infra, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e, io.buildah.version=1.38.0)
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.138839992 +0100 CET m=+0.079899753 pod create 2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e (image=, name=podman-emptydir-bug)
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.157532745 +0100 CET m=+0.098592496 volume create podman-emptydir-bug-temp
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.160466087 +0100 CET m=+0.101525838 container create 2755165d3f341b741a7f13259e55935bf4727535d8ac520ec6d37ee15d3f623c (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-create-test-file, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e)
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.140010425 +0100 CET m=+0.081070196 image pull 4048db5d36726e313ab8f7ffccf2362a34cba69e4cdd49119713483a68641fce docker.io/library/alpine:3.21.0
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.161582226 +0100 CET m=+0.102641987 image pull 4048db5d36726e313ab8f7ffccf2362a34cba69e4cdd49119713483a68641fce docker.io/library/alpine:3.21.0
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.177721019 +0100 CET m=+0.118780770 container create 463e469dbd52cd8a87c7d23de7340ce3f774e34be991196c6734e3bafe9aff4e (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-use-test-file, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e)
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.178085321 +0100 CET m=+0.119145092 container restart bb627fad0239303d85d42bb8e262d4a51e8bacff6d3772ff2a11ec31be577838 (image=localhost/podman-pause:5.3.1-1732225906, name=2d5d6ed99bf3-infra, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e, io.buildah.version=1.38.0)
Jan 03 20:58:44 archlinux systemd[1225]: Started rootless-netns-83dfbc55.scope.
Jan 03 20:58:44 archlinux kernel: podman1: port 1(veth0) entered blocking state
Jan 03 20:58:44 archlinux kernel: podman1: port 1(veth0) entered disabled state
Jan 03 20:58:44 archlinux kernel: veth0: entered allmulticast mode
Jan 03 20:58:44 archlinux kernel: veth0: entered promiscuous mode
Jan 03 20:58:44 archlinux kernel: podman1: port 1(veth0) entered blocking state
Jan 03 20:58:44 archlinux kernel: podman1: port 1(veth0) entered forwarding state
Jan 03 20:58:44 archlinux systemd[1225]: Started [systemd-run] /usr/lib/podman/aardvark-dns --config /run/user/1000/containers/networks/aardvark-dns -p 53 run.
Jan 03 20:58:44 archlinux systemd[1225]: Started libpod-conmon-bb627fad0239303d85d42bb8e262d4a51e8bacff6d3772ff2a11ec31be577838.scope.
Jan 03 20:58:44 archlinux systemd[1225]: Started libcrun container.
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.210477332 +0100 CET m=+0.151537093 container init bb627fad0239303d85d42bb8e262d4a51e8bacff6d3772ff2a11ec31be577838 (image=localhost/podman-pause:5.3.1-1732225906, name=2d5d6ed99bf3-infra, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e, io.buildah.version=1.38.0)
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.211915193 +0100 CET m=+0.152974954 container start bb627fad0239303d85d42bb8e262d4a51e8bacff6d3772ff2a11ec31be577838 (image=localhost/podman-pause:5.3.1-1732225906, name=2d5d6ed99bf3-infra, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e, io.buildah.version=1.38.0)
Jan 03 20:58:44 archlinux systemd[1225]: Started libpod-conmon-2755165d3f341b741a7f13259e55935bf4727535d8ac520ec6d37ee15d3f623c.scope.
Jan 03 20:58:44 archlinux systemd[1225]: Started libcrun container.
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.23278965 +0100 CET m=+0.173849401 container init 2755165d3f341b741a7f13259e55935bf4727535d8ac520ec6d37ee15d3f623c (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-create-test-file, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e)
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.234262618 +0100 CET m=+0.175322379 container start 2755165d3f341b741a7f13259e55935bf4727535d8ac520ec6d37ee15d3f623c (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-create-test-file, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e)
Jan 03 20:58:44 archlinux podman-emptydir-bug-create-test-file[4166609]: + mount
Jan 03 20:58:44 archlinux podman-emptydir-bug-create-test-file[4166609]: + grep /app
Jan 03 20:58:44 archlinux podman-emptydir-bug-create-test-file[4166609]: /dev/mapper/archlinux_home on /app type btrfs (rw,nosuid,nodev,relatime,ssd,space_cache=v2,subvolid=5,subvol=/)
Jan 03 20:58:44 archlinux podman-emptydir-bug-create-test-file[4166609]: + echo hello
Jan 03 20:58:44 archlinux podman-emptydir-bug-create-test-file[4166609]: + ls -al /app
Jan 03 20:58:44 archlinux podman-emptydir-bug-create-test-file[4166609]: total 4
Jan 03 20:58:44 archlinux podman-emptydir-bug-create-test-file[4166609]: drwxr-xr-x 1 root root 22 Jan 3 19:58 .
Jan 03 20:58:44 archlinux podman-emptydir-bug-create-test-file[4166609]: dr-xr-xr-x 1 root root 120 Jan 3 19:58 ..
Jan 03 20:58:44 archlinux podman-emptydir-bug-create-test-file[4166609]: -rw-r--r-- 1 root root 6 Jan 3 19:58 x-test-file
Jan 03 20:58:44 archlinux podman[4166615]: 2025-01-03 20:58:44.251931605 +0100 CET m=+0.010516534 container died 2755165d3f341b741a7f13259e55935bf4727535d8ac520ec6d37ee15d3f623c (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-create-test-file)
Jan 03 20:58:44 archlinux podman[4166615]: 2025-01-03 20:58:44.268677892 +0100 CET m=+0.027262821 container cleanup 2755165d3f341b741a7f13259e55935bf4727535d8ac520ec6d37ee15d3f623c (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-create-test-file, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e)
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.293833299 +0100 CET m=+0.234893060 container remove 2755165d3f341b741a7f13259e55935bf4727535d8ac520ec6d37ee15d3f623c (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-create-test-file, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e)
Jan 03 20:58:44 archlinux systemd[1225]: Started libpod-conmon-463e469dbd52cd8a87c7d23de7340ce3f774e34be991196c6734e3bafe9aff4e.scope.
Jan 03 20:58:44 archlinux systemd[1225]: Started libcrun container.
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.36544473 +0100 CET m=+0.306504491 container init 463e469dbd52cd8a87c7d23de7340ce3f774e34be991196c6734e3bafe9aff4e (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-use-test-file, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e)
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.367158635 +0100 CET m=+0.308218386 container start 463e469dbd52cd8a87c7d23de7340ce3f774e34be991196c6734e3bafe9aff4e (image=docker.io/library/alpine:3.21.0, name=podman-emptydir-bug-use-test-file, pod_id=2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e)
Jan 03 20:58:44 archlinux podman-emptydir-bug-use-test-file[4166640]: + mount
Jan 03 20:58:44 archlinux podman-emptydir-bug-use-test-file[4166640]: + grep /app
Jan 03 20:58:44 archlinux podman-emptydir-bug-use-test-file[4166640]: /dev/mapper/archlinux_home on /app type btrfs (rw,nosuid,nodev,relatime,ssd,space_cache=v2,subvolid=5,subvol=/)
Jan 03 20:58:44 archlinux podman-emptydir-bug-use-test-file[4166640]: + ls -al /app
Jan 03 20:58:44 archlinux podman-emptydir-bug-use-test-file[4166640]: total 4
Jan 03 20:58:44 archlinux podman-emptydir-bug-use-test-file[4166640]: drwxr-xr-x 1 root root 22 Jan 3 19:58 .
Jan 03 20:58:44 archlinux podman-emptydir-bug-use-test-file[4166640]: dr-xr-xr-x 1 root root 120 Jan 3 19:58 ..
Jan 03 20:58:44 archlinux podman-emptydir-bug-use-test-file[4166640]: -rw-r--r-- 1 root root 6 Jan 3 19:58 x-test-file
Jan 03 20:58:44 archlinux podman[4166542]: 2025-01-03 20:58:44.369991235 +0100 CET m=+0.311050986 pod start 2d5d6ed99bf346ff565a6eb77d3001c73ef8f87f2b0cb1ea17b54bd424e5c22e (image=, name=podman-emptydir-bug)
Jan 03 20:58:44 archlinux podman-emptydir-bug-use-test-file[4166640]: + cat /app/x-test-file
Jan 03 20:58:44 archlinux podman-emptydir-bug-use-test-file[4166640]: hello
Jan 03 20:58:44 archlinux podman-emptydir-bug-use-test-file[4166640]: + sleep 60
podman info output
```yaml
host:
  arch: amd64
  buildahVersion: 1.38.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-1:2.1.12-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: e8896631295ccb0bfdda4284f1751be19b483264'
  cpuUtilization:
    idlePercent: 96.13
    systemPercent: 1.39
    userPercent: 2.48
  cpus: 16
  databaseBackend: boltdb
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  freeLocks: 1969
  hostname: archlinux
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.12.6-arch1-1
  linkmode: dynamic
  logDriver: journald
  memFree: 6445858816
  memTotal: 67148660736
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.13.1-1
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark-1.13.1-1
    path: /usr/lib/podman/netavark
    version: netavark 1.13.1
  ociRuntime:
    name: crun
    package: crun-1.19.1-2
    path: /usr/bin/crun
    version: |-
      crun version 1.19.1
      commit: 3e32a70c93f5aa5fea69b50256cca7fd4aa23c80
      rundir: /run/user/0/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-2024_11_27.c0fbc7e-1
    version: |
      pasta 2024_11_27.c0fbc7e
      Copyright Red Hat
      GNU General Public License, version 2 or later
      <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-1
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 32902987776
  swapTotal: 34357637120
  uptime: 194h 5m 8.00s (Approximately 8.08 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 40
    paused: 0
    running: 1
    stopped: 39
  graphDriverName: btrfs
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 274875809792
  graphRootUsed: 155973042176
  graphStatus:
    Build Version: Btrfs v6.11
    Library Version: "104"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 50
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 5.3.1
  Built: 1732225906
  BuiltTime: Thu Nov 21 22:51:46 2024
  GitCommit: 4cbdfde5d862dcdbe450c0f1d76ad75360f67a3c
  GoVersion: go1.23.3
  Os: linux
  OsArch: linux/amd64
  Version: 5.3.1
```
Podman in a container
No
Privileged Or Rootless
Privileged
Upstream Latest Release
Yes
Additional environment details
Additional information
I selected 'Privileged' above, but running privileged/rootful vs. rootless makes no difference.
FWIW, the Kubernetes volume API is reasonably clear about how this is expected to work:
For a Pod that defines an emptyDir volume, the volume is created when the Pod is assigned to a node. As the name says, the emptyDir volume is initially empty. All containers in the Pod can read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.
See https://kubernetes.io/docs/concepts/storage/volumes/#emptydir.
I am also running into this issue. Would be a great QoL fix!
Commit d3281cf887495348a1d208c5a5d0d8650af84a23 added functional support for `medium: 'Memory'`, but I only see it creating a `spec.Mount` in `./pkg/specgen/generate/kube/kube.go:ToSpecGen()`.
Conjecture: perhaps these need to reference `Volumes`, like the other cases in this switch, so that multiple containers can reference the same volume?
After looking at this, I think all of Podman's tmpfs usage at this point is a direct mount of a new tmpfs into a location, which yields a unique tmpfs for every mount.
And indeed, that is the only thing you can ask Podman to do with a tmpfs directly; it just does not match what Kubernetes lets you do with a memory-backed emptyDir.
So it seems that, just for `podman kube play`, we will need to create a named volume with an additional lifecycle: it is prepared with a tmpfs mount first, and that tmpfs is unmounted when the volume is removed.
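In the meantime, one possible workaround (an untested sketch; the volume name `shared-tmpfs` and the size/uid options are illustrative) might be to pre-create a tmpfs-backed named volume with `podman volume create`, and reference it from the pod as a `persistentVolumeClaim` instead of an `emptyDir`, since `podman kube play` resolves a claim name to a named volume:

```yaml
# Pre-create the tmpfs-backed named volume before playing the pod:
#   podman volume create --opt type=tmpfs --opt device=tmpfs \
#     --opt o=size=64m,uid=1000 shared-tmpfs
# Then, in the pod spec, replace the emptyDir volume with:
volumes:
  - name: podman-emptydir-bug-temp
    persistentVolumeClaim:
      claimName: shared-tmpfs
```

Because the named volume is a single tmpfs mounted into every container that references it, the init container's file should survive into the main container; the trade-off is that the volume's lifecycle is no longer tied to the pod and must be removed manually.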
Hello!
After fighting with my Kube config for a week trying to understand why my tmpfs directories weren't being shared I found this issue. Is there any chance this might get looked at? My use case is secret sharing between containers, where disk-backed directories are not an option.
Tried using `--volumes-from` as a workaround, but it shows the same behavior.
It seems like @iluminae had an idea on how this could be tackled.
Thanks!
cc: @umohnani8 @rhatdan @andremarianiello