Podman not working
What happened?
I have a project with a devcontainer. It uses a docker-compose.yml and a Dockerfile for the service.
I tried to start this devcontainer using Podman, but every time I get an error when it tries to fetch the image config remotely. This seems wrong, because the image does not even exist remotely; Podman just built it locally. This problem does not occur when using Docker instead of Podman.
I get the following error:
```
devcontainer up: start container: inspect image: get image config remotely: retrieve image default-ti-3b7a1-web: GET https://index.docker.io/v2/library/default-ti-3b7a1-web/manifests/latest: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/default-ti-3b7a1-web Type:repository]]
```
What did you expect to happen instead?
I expect Podman to be a drop-in replacement and work just like Docker does.
How can we reproduce the bug? (as minimally and precisely as possible)
My `devcontainer.json`:

```json
{
  "name": "Web app",
  "dockerComposeFile": ["docker-compose.yml"],
  "service": "web",
  "workspaceFolder": "/home/node/workspace/"
}
```
My `docker-compose.yml`:

```yaml
services:
  web:
    build: .
```

My `Dockerfile`:

```dockerfile
FROM node:22.14.0-alpine3.21
RUN apk add sudo git
RUN npm i -g [email protected]
ENTRYPOINT [ "tail", "-f", "/dev/null" ]
```
Local Environment:
- DevPod Version: v0.7.0-alpha.30
- Operating System: linux
- ARCH of the OS: AMD64
v0.6.15 has the same problem.
Related to #1665.
Switching to docker-compose works fine. It seems devpod treats podman-compose as a compatible implementation of docker-compose, and instead of falling back it hits the strange behavior that triggers these errors.
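For anyone debugging this, a quick way to check which compose implementations are on `PATH` (a diagnostic sketch, not something DevPod itself runs):

```shell
# Print where each compose binary resolves, if anywhere. If only
# podman-compose is found, DevPod appears to pick it up as if it
# were docker-compose, which triggers the failure described above.
for cmd in docker-compose podman-compose; do
  command -v "$cmd" >/dev/null 2>&1 \
    && echo "$cmd: $(command -v "$cmd")" \
    || echo "$cmd: not found"
done
```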
I'm seeing a similar issue using podman:

```
12:26:45 info [3/3] COMMIT defaultrid0d2a_devcontainer
12:26:45 info --> 1a8e08535bce
12:26:45 info Successfully tagged localhost/defaultrid0d2a_devcontainer:latest
12:26:46 info 1a8e08535bce4cd5db1bfa7805062bc9725eb196f51eb06dd4dd39f83ae5161a
12:26:46 info exit code: 0
12:26:47 info GET https://index.docker.io/v2/library/defaultrid0d2a-devcontainer/manifests/latest: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/defaultrid0d2a-devcontainer Type:repository]]
12:26:47 info retrieve image defaultrid0d2a-devcontainer
12:26:47 info github.com/loft-sh/devpod/pkg/image.GetImage
12:26:47 info     /home/runner/work/devpod/devpod/pkg/image/image.go:34
12:26:47 info github.com/loft-sh/devpod/pkg/image.GetImageConfig
12:26:47 info     /home/runner/work/devpod/devpod/pkg/image/image.go:79
12:26:47 info github.com/loft-sh/devpod/pkg/docker.(*DockerHelper).InspectImage
12:26:47 info     /home/runner/work/devpod/devpod/pkg/docker/helper.go:200
```
Here the built image is tagged slightly differently from the one DevPod then tries to run ("_" vs "-").
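The mismatch can be sketched like this (illustrative only; the `image_tag` helper and its sanitization rule are my assumptions, not DevPod's actual code):

```python
import re

def image_tag(project: str, service: str, separator: str) -> str:
    """Build a compose-style local image name from project and service.

    Hypothetical sketch: compose-style tools lowercase the project name,
    strip disallowed characters, then join project and service with a
    separator. podman-compose joins with "_", while the lookup in the
    logs above expects "-".
    """
    sanitized = re.sub(r"[^a-z0-9_-]", "", project.lower())
    return f"{sanitized}{separator}{service}"

built = image_tag("defaultrid0d2a", "devcontainer", "_")      # what got tagged
looked_up = image_tag("defaultrid0d2a", "devcontainer", "-")  # what gets inspected
print(built)                 # defaultrid0d2a_devcontainer
print(looked_up)             # defaultrid0d2a-devcontainer
print(built == looked_up)    # False: local lookup misses, so a registry pull is attempted
```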
If I run `podman tag localhost/defaultrid0d2a_devcontainer:latest localhost/defaultrid0d2a-devcontainer:latest`, the image is found and the container spins up correctly.
Funny enough, the tags are kept in sync afterwards, if the log is correct:
```
[3/3] COMMIT defaultrid0d2a_devcontainer
12:31:48 info --> 1a8e08535bce
12:31:48 info Successfully tagged localhost/defaultrid0d2a_devcontainer:latest
12:31:48 info Successfully tagged localhost/defaultrid0d2a-devcontainer:latest
12:31:48 info 1a8e08535bce4cd5db1bfa7805062bc9725eb196f51eb06dd4dd39f83ae5161a
12:31:48 info exit code: 0
```
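The manual retag can be scripted. This sketch only derives and prints the command; the image name is taken from the logs above, so adjust it for your workspace, and drop the `echo` to actually run it (requires podman):

```shell
# Derive the hyphenated tag DevPod looks up from the underscore-separated
# tag that podman-compose actually produced, then print the retag command.
src="localhost/defaultrid0d2a_devcontainer:latest"
dst="${src//_/-}"
echo podman tag "$src" "$dst"
```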
This issue is now three months old. How is using podman for devcontainers coming along?
I managed to get flatpak'd devpod (0.7.0-130) + podman + multicontainer devcontainer setups working on Bluefin. Hoping this might be useful to anybody reading.
Disclaimer: This is from the perspective of how Bluefin does things, so some things may or may not require tweaking based on your OS situation.
The things I needed to do were:
- Install `docker-compose` (NOT `podman-compose`). On Bluefin, this is done via `brew`.
- Enable the podman socket: `systemctl --user enable --now podman.socket`
- Expose environment variables to systemd:
  - `~/.config/environment.d/brew.path.conf` should contain `PATH=/var/home/linuxbrew/.linuxbrew/bin:$PATH`. This is required for Bluefin users in order to expose brew-installed `docker-compose` to the devpod flatpak (as well as brew-cask-installed `vscodium`). You may or may not need to do something similar.
  - `~/.config/environment.d/docker.host.conf` should contain `DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock`.
  - Reboot.
- Add the `xdg-run/podman` permission to the devpod flatpak.
- Launch devpod, add a docker provider, and modify the Docker Path to `podman` and the Docker Host to `unix:///run/user/1000/podman/podman.sock` (or whatever your UID is; unfortunately, it appears that this path must be a static string).
- Your `docker-compose.yml` service should look something like:
```yaml
services:
  mysql:
    image: mysql:8.0
    restart: unless-stopped
    volumes:
      - mysql-data:/var/lib/mysql
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: true
      MYSQL_DATABASE: test
  blahblahapp:
    image: mcr.microsoft.com/devcontainers/base:debian
    command: sleep infinity
    userns_mode: keep-id
    volumes:
      - ../..:/workspaces:Z
    network_mode: service:mysql
volumes:
  mysql-data:
```
- `devcontainer.json` should look something like:
```json
{
  "name": "Blah App",
  "dockerComposeFile": "docker-compose.yml",
  "service": "blahblahapp",
  "shutdownAction": "none",
  "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
  "features": {},
  "customizations": {}
}
```
I don't know what best-practices are, but:
- It appears that setting `DOCKER_HOST` at the systemd environment level is required because when devpod tries to build with docker-compose, docker-compose tries to connect to `DOCKER_HOST`, which otherwise defaults to the standard docker socket path, which of course doesn't exist.
- Setting Docker Path in the docker provider settings seems to be required because when you try to stop the workspace, devpod connects to the docker socket defined by this property rather than by the `DOCKER_HOST` env var.
- As you can tell, there's some inconsistency regarding when and where devpod uses which variables and settings for the docker socket. (Note: flatpak env vars don't seem to affect any of this, nor does the devpod "Experimental Additional Environment Variables" setting.)
- `command: sleep infinity` is required to keep the container running. Not sure if there's a more proper way to do this.
- `userns_mode: keep-id` is required on the container(s) mounting the workspace to prevent screwing up the UID/GID on the actual host directories/files.
- Append workspace mounts with `:Z` or `:z` where appropriate. Documentation here.
- If you're building a devcontainer from scratch from a non-devcontainer image, it's best to create a non-root user `vscode` with UID/GID `1000:1000`. IIRC, if you stick with `root`, you'll end up with messed-up UID/GIDs on mounted directories.
  - https://github.com/devcontainers/features/tree/main/src/common-utils
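To sanity-check the socket wiring from the steps above, a small script may help (a sketch assuming this setup's paths; `/_ping` is the standard Docker-compatible API endpoint, which podman's socket should also answer with `OK`):

```shell
# Derive the rootless podman socket path for the current user and export
# DOCKER_HOST the same way the environment.d file above does.
sock="/run/user/$(id -u)/podman/podman.sock"
export DOCKER_HOST="unix://$sock"
echo "DOCKER_HOST=$DOCKER_HOST"
# Only ping if the socket actually exists (i.e. podman.socket is enabled).
if [ -S "$sock" ]; then
  curl --silent --unix-socket "$sock" http://d/_ping && echo
fi
```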
edit: I haven't gotten around to testing port forwarding, but it shooould be trivial?