Docker for Mac 4.1.1 accessing the proxy server from within a container hangs
With the localhost hack in place (dnsmasq resolving .docker hosts to 172.17.0.1, and lo0 aliased to 172.17.0.1), accessing the proxy server from within a container is no longer possible on DfM 4.1.1. The HTTP request just hangs.
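(For context, the localhost hack as I understand it, assuming dory's usual defaults: alias the address onto the Mac's loopback interface with `sudo ifconfig lo0 alias 172.17.0.1`, and let dory's `resolv` option write a macOS resolver file so `.docker` lookups go to the local dnsmasq:)

```
# /etc/resolver/docker (assumed to be written by dory when resolv is enabled)
nameserver 127.0.0.1
port 53
```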
I have disabled V2 compose as that was wreaking other kinds of havoc (thank you, Docker!).
This is about the proxy server. I have tested a few scenarios with 4.0.1 and 4.1.1; here are the results:
- dory is up, 4.0.1, `dory attach proxy` output is monitored by me
  - wget http://127.0.0.1 yields local server's webpage - correct, monitor shows no access
  - wget http://otherservice.docker yields other service's webpage - correct, monitor shows access
  - wget http://172.17.0.1 yields other service's webpage - correct as it was the first service, monitor shows access
  - accessing http://otherservice.docker from desktop browser yields other service's webpage - correct, monitor shows access
- dory is down, 4.0.1
  - wget http://127.0.0.1 yields local server's webpage - correct
  - wget http://otherservice.docker cannot resolve host name - correct
  - wget http://172.17.0.1 connection refused - correct
  - accessing http://otherservice.docker from desktop browser cannot resolve host name - correct
- dory is up, 4.1.1, `dory attach proxy` output is monitored by me
  - wget http://127.0.0.1 yields local server's webpage - correct, monitor shows no access
  - wget http://otherservice.docker hangs - incorrect, monitor shows no access
  - wget http://172.17.0.1 hangs - incorrect, monitor shows no access
  - accessing http://otherservice.docker from desktop browser yields other service's webpage - correct, monitor shows access
- dory is down, 4.1.1
  - wget http://127.0.0.1 yields local server's webpage - correct
  - wget http://otherservice.docker cannot resolve host name - correct
  - wget http://172.17.0.1 connection refused - correct
  - accessing http://otherservice.docker from desktop browser cannot resolve host name - correct
At first I suspected some kind of network filtering to be the issue. However, the 4.1.1 dory-down trials suggest there is no magic filtering going on, at least on the surface. I don't know if dory's logging level can somehow be raised to see whether the request even makes it to the container.
Configuration file:

```yaml
---
dory:
  dnsmasq:
    enabled: true
    domains:
      - domain: docker
        address: 172.17.0.1
    container_name: dory_dnsmasq
    port: 53
    kill_others: ask
    service_start_delay: 5
  nginx_proxy:
    enabled: true
    https_enabled: true
    cors_enabled: true
  resolv:
    enabled: true
    nameserver: 127.0.0.1
    port: 53
```
We have been experiencing the same issues. We have tried to fix it by building our own proxy and dnsmasq images, but with no luck. It's hard for me to tell whether the issue lies with the proxy image or the dnsmasq image. So far, our efforts have focused on the proxy.
The dnsmasq is fine; its only job is to capture any .docker resolutions and reply with the preset IP address. What I'm pretty certain of is that the request gets out of the container and arrives at the proxy as intended, but something goes wrong there. Either the proxy has a problem identifying the target container and the request stalls at the pre-proxy phase, or the request goes out but, due to some new networking rules, the target container blocks the proxy's ingress.
I've tried to follow the chain of Dockerfiles that dory's proxy is built on, but I gave up after three projects.
There might be merit in inspecting this: https://github.com/nginx-proxy/nginx-proxy . There is a section about the proxy only connecting to bridge networks unless specified otherwise. Maybe 4.1.1 changed something in this regard?
Our issue with this ended up being a seemingly breaking change in how docker-compose handles container-to-container name resolution. I had multiple groups of containers from different compose files running on the same network, but using the same internal names to reference other containers. I had to switch to unique names when referencing containers from their compose files.
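For illustration, a minimal sketch of the colliding setup (all names hypothetical): two stacks join the same external network and both declare a service called `db`, so `db` resolves ambiguously from either stack; renaming the services per stack (and updating every reference) is what fixed it for us.

```yaml
# stack-a/docker-compose.yml (hypothetical names)
services:
  db:                  # stack-b declares a "db" on shared_net too -> ambiguous
    image: postgres
networks:
  default:
    name: shared_net
    external: true
# fix: rename to something unique per stack (e.g. db_a / db_b)
# and update every reference to the old name accordingly
```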
Just to add what I've discovered here, on Docker Desktop 4.6:
With a configuration identical to the one shown above, and monitoring port 80 inside the proxy container with tcpdump, I get the following results:
Host machine (macOS) curling/browser GETing service.docker:
- service.docker resolves to loopback IP (172.17.0.1)
- tcpdump in the proxy container shows inbound/outbound packets
- request returns response from target service
Standalone docker container that is part of the default docker network curling service.docker:
- service.docker resolves to loopback IP (172.17.0.1)
- tcpdump in the proxy container shows inbound/outbound packets
- request returns response from target service
Docker container that is part of a docker compose stack + network curling service.docker:
- service.docker resolves to loopback IP (172.17.0.1)
- tcpdump in the proxy container shows no activity
- request hangs until it times out
To me, this appears to show that:
- The issue is isolated to containers attempting to connect from docker compose stacks
- The issue isn't with DNS or the proxy itself, but with inbound traffic to the proxy container (it looks like the traffic never reaches nginx at all)
The solution to our blues is... to not use a private IP address from the range that Docker Compose might claim for its own network addresses. Compose network separation got tighter in Compose V2, and that caused the issue. Choose an address from the 192.168.x.x or 10.x.x.x range for your localhost alias address, and dory should work fine again.
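A quick way to sanity-check a candidate alias address is to test it against 172.16.0.0/12, the pool Docker primarily allocates its default bridge (172.17.0.0/16) and Compose V2 network subnets from. A minimal sketch in POSIX shell, assuming dotted-quad input (the helper name is my own):

```shell
# Return success if the address lies in 172.16.0.0/12, the pool Docker
# primarily allocates bridge and Compose network subnets from by default.
in_docker_pool() {
  ip=$1
  first=${ip%%.*}       # first octet
  rest=${ip#*.}
  second=${rest%%.*}    # second octet
  # 172.16.0.0/12 spans 172.16.x.x through 172.31.x.x
  [ "$first" = "172" ] && [ "$second" -ge 16 ] && [ "$second" -le 31 ]
}

in_docker_pool 172.17.0.1     && echo "172.17.0.1 collides with Docker's pool"
in_docker_pool 10.254.254.254 || echo "10.254.254.254 is safe to alias"
```

With a safe address chosen (10.254.254.254 here is just an example), update both the lo0 alias and the `address:` field in the dory configuration to match.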