Running from inside a container (container-in-container)
Is your feature request related to a problem? Please describe.
I like to run everything from inside containers and not pollute my local system with anything; a simple prune then cleans up everything. I'd like to set up the Supabase CLI in a container as well for this reason. Currently the way to get there is not container-in-container friendly, at least with podman.
Describe the solution you'd like
Ideally I'd like to just install the Supabase CLI in the container, call `supabase start`, and have it work.
Context
My setup: a JetBrains IDE remote session into a container created via devcontainer.json and all that.
The first thing to do is Supabase-irrelevant: I need to bind /run/user/*id*/podman/podman.sock to /var/run/docker.sock so the CLI sees what looks like a running Docker daemon.
Sadly this is not enough; along with uid/gid remappings etc., you need to do some chgrp/chmod work to make permissions happy if you are not running the container as root.
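For concreteness, the socket bind and the permission fixup can be sketched in devcontainer.json like this; the socket path, the uid 1000, and the group handling are assumptions from my setup, not anything the CLI prescribes:

```jsonc
{
  // forward the rootless podman socket to where docker clients expect it
  "mounts": [
    "source=/run/user/1000/podman/podman.sock,target=/var/run/docker.sock,type=bind"
  ],
  // loosen the socket for a non-root container user (debug-friendly, not hardened)
  "postStartCommand": "sudo chgrp \"$(id -gn)\" /var/run/docker.sock && sudo chmod g+rw /var/run/docker.sock"
}
```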
In this state, running `supabase start` gets you the first error:

```
failed to connect to postgres: failed to connect to `host=127.0.0.1 user=postgres database=postgres`: dial error (dial tcp 127.0.0.1:54322: connect: connection refused)
```
This is to be expected, since from inside a container we should be using either host.docker.internal or host.containers.internal, but there is no way to specify a different hostname. I went through the code and found that `Docker.DaemonHost()` in my case returns unix:///var/run/docker.sock, so off we go to the default 127.0.0.1. (Not even localhost, which can be problematic for IPv6 by the way, and which could also be hacked around with an /etc/hosts change mapping localhost to host.docker.internal/host.containers.internal.)
So my first request would be to allow a configurable hostname. When I built my own binary that returns host.containers.internal, all is well (as expected). You can achieve the same result by having

```json
"runArgs": [
  "--network=host"
],
```

in your devcontainer.json, but that's not really recommended.
So I'd like to open a bit of discussion on whether there is any problem with allowing the user to change the hostname.
If not, I'd like to propose a new config option for config.toml.
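For the sake of discussion, such an option could look like the following; the key name is made up for illustration and does not exist today:

```toml
# supabase/config.toml (hypothetical sketch)
# Hostname the CLI dials to reach the exposed container ports.
# Defaults to 127.0.0.1; container-in-container setups could point it at
# host.containers.internal (podman) or host.docker.internal (docker).
hostname = "host.containers.internal"
```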
That's about connecting things, but sadly, even with the hostname changed, `supabase start` still won't work because the vector container cannot be created:
```
/home/myuser/supabase_cli/internal/utils/docker.go:314 (0x1251890)
/home/myuser/supabase_cli/internal/start/start.go:299 (0x1546d97)
/home/myuser/supabase_cli/internal/start/start.go:62 (0x1544b45)
/home/myuser/supabase_cli/cmd/start.go:50 (0x1561310)
/home/myuser/go/pkg/mod/github.com/spf13/[email protected]/command.go:1015 (0x61dbea)
/home/myuser/go/pkg/mod/github.com/spf13/[email protected]/command.go:1148 (0x61e52f)
/home/myuser/go/pkg/mod/github.com/spf13/[email protected]/command.go:1071 (0x1568de9)
/home/myuser/supabase_cli/cmd/root.go:140 (0x1568dde)
/home/myuser/supabase_cli/main.go:11 (0x156c30f)
/usr/local/go/src/internal/runtime/atomic/types.go:194 (0x44384b)
/usr/local/go/src/runtime/asm_amd64.s:1700 (0x47ea21)
failed to create docker container: Error response from daemon: container create: statfs /var/run/docker.sock: permission denied
```
Running without vector via `supabase start -x vector` gets around this, but I haven't been able to figure out why only this container is problematic when all the others are created just fine. I tried chmod 666 on the socket so any user can read/write, in case this one was somehow being created with odd permissions, but that didn't help.
Also, I think adding containerName to this error message would be really helpful (that's how I figured out which container was causing this).
So to sum up:
- a configurable hostname setting for the rest of us;
- printing the container name when creation fails;
- no idea what is going on with vector.
I don't know if the issue happens on other OSes, or with Docker at all (Linux Mint + podman here).
I'm all in for making a PR for this, but wanted to ask around first and maybe get some pointers, since the GetHostname function lives in a different file (still in the utils module) and my Go skill amounts to a few lines of code here and there, so I'm not that invested yet.
Even with all of this set up, though, the sole reason I went through it was to test edge functions, which turns out not to be really possible. First, edge-runtime works with a mounted volume that does not exist on the host: if you are in, e.g., /home/vscode/project inside the container and run `supabase functions serve`, it will try to mount /home/vscode/project/supabase/functions into the edge-runtime container, but that path does not exist on the host. One workaround is to create it, and then everyone is happy. But even with this hack, kong seems unable to connect to the edge-runtime container, failing with this error message:
```
HTTP/1.1 502 Bad Gateway
Date: Fri, 21 Nov 2025 19:41:04 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 75
Access-Control-Allow-Origin: *
X-Kong-Upstream-Latency: 3071
X-Kong-Proxy-Latency: 15396
Via: kong/2.8.1

{
  "message":"An invalid response was received from the upstream server"
}
```
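The pre-create workaround mentioned above, as a sketch; it assumes the host needs the same absolute project path that exists inside the devcontainer:

```shell
# Pre-create, on the host, the directory `supabase functions serve` will try
# to bind-mount into the edge-runtime container. PROJECT_DIR is assumed to
# mirror the in-container project path (e.g. /home/vscode/project).
PROJECT_DIR="${PROJECT_DIR:-$PWD}"
mkdir -p "$PROJECT_DIR/supabase/functions"
echo "created $PROJECT_DIR/supabase/functions"
# then, inside the container: supabase functions serve
```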
I created a temporary container and added it to the same docker network (e.g. supabase_network_*project*), and based on some files I figured out that kong connects via container name. When I tried curl against http://supabase_edge_runtime_supa:8081/func_name it worked, so there is some visibility issue between kong and the container, and I'm out of ideas.
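For anyone wanting to reproduce that probe, the idea was roughly the following; the network and container names follow the CLI's supabase_* convention for a project named "supa" and are assumptions about your project name:

```shell
# Join a throwaway container to the project's network and hit edge-runtime
# directly, bypassing kong. Skips gracefully if podman isn't available.
command -v podman >/dev/null 2>&1 || { echo "podman not installed, skipping"; exit 0; }
podman run --rm --network supabase_network_supa \
  docker.io/curlimages/curl -s http://supabase_edge_runtime_supa:8081/func_name
```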
This run was with --network=host; with my custom binary even this did not work, the error being something like "unable to create workers", BOOT_ERROR, etc.
So obviously there is a bit more to it than a simple hostname change, and I have to go back to having the repo locally, mounting the volume into the container, and running supabase locally as well. Even if I somehow solved the kong bad-gateway error, duplicating the paths, or forcing the container workdir to something that keeps the host happy, is too much; and since there is no way to mount one container's volume into another container, it all comes crashing down. Ultimately the solution would be to copy the files into the edge-runtime container instead of mounting them, but that would only work with a one-shot policy, I guess.
I also thought about the vector issue: the error it was giving was permission denied for the docker.sock socket. Since I'm not using docker, and podman desktop has some issues with this part of docker compatibility, something somewhere during creation is probably not getting the mount point from devcontainer.json, and since that socket does not exist system-wide, it just fails.
I guess that's it. I'll leave this open for a bit, but will close it soon if there isn't any discussion.
OK, so: the bad-gateway issue happens only with podman and ONLY when calling `supabase functions serve`. If I test right after `supabase start` it works, but calling serve screws something up.