Bug in routing HTTPS traffic when running multiple copies of the same Docker image
We are using your solution, for example, on one server with 25 different domains. Copies of the same Docker image are used across different domains for technical reasons, as follows:
Under https-portal:

  battery.x.y -> x-datafeed-battery:80 #production

and then

  x-datafeed-battery:
    container_name: monitoring_battery
    image: registry.bla.bla:5001/image_x:latest
The other copies of this Docker image are NOT routed over HTTPS. They are run as part of the same compose file to get them online and they run cron jobs, but they are not referenced in https-portal.
  x-datafeed:
    container_name: monitoring_datafeed
    image: registry.bla.bla:5001/image_x:latest
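For completeness, the https-portal side of that mapping sits in an environment variable, roughly like this (a sketch following the steveltn/https-portal README; the image tag and port lines are assumptions, not copied from our file):

  https-portal:
    image: steveltn/https-portal:1
    ports:
      - '80:80'
      - '443:443'
    environment:
      # one "domain -> upstream" entry per routed service; the cron-only copies are not listed here
      DOMAINS: 'battery.x.y -> x-datafeed-battery:80'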
For some reason, traffic that is routed to battery.x.y shows up in the logs and activity of x-datafeed and other containers like it.
Where would we begin to debug this? Is it somehow relevant that the same docker image is used multiple times?
I believe that within one compose project only ONE container can listen on a given host port, and HTTPS-PORTAL certainly needs to listen on 80 and 443. I would have a look at which ports each container is listening on.
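For example, something along these lines should show that (plain Docker and iproute2 commands on the host, nothing specific to https-portal):

  # host side: which process owns 80/443
  sudo ss -tulpn | grep -E ':(80|443) '
  # container side: which container publishes which host ports
  docker ps --format 'table {{.Names}}\t{{.Ports}}'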
This bug now comes up ever more frequently, even towards containers that don't share the same image. For example, frontend web service x is redeployed and its external URL is then routed to y, some backend service.
I'll try to see if I can get more debug info out
I would look at which container is actually listening on port 80 and port 443 on your host machine.
Below are the processes using ports 80/443 on the machine: it's Docker (docker-proxy)
root@debian11-dockercompose:~# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1036114/docker-prox
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3076385/sshd: /usr/
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1036093/docker-prox
tcp 0 0 0.0.0.0:30020 0.0.0.0:* LISTEN 1026028/docker-prox
...
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 1026138/docker-prox
tcp6 0 0 :::443 :::* LISTEN 1036145/docker-prox
tcp6 0 0 :::22 :::* LISTEN 3076385/sshd: /usr/
tcp6 0 0 :::80 :::* LISTEN 1036099/docker-prox
tcp6 0 0 :::2377 :::* LISTEN 665/dockerd
tcp6 0 0 :::30020 :::* LISTEN 1026035/docker-prox
tcp6 0 0 :::30021 :::* LISTEN 1026054/docker-prox
tcp6 0 0 :::6379 :::* LISTEN 1036078/docker-prox
tcp6 0 0 :::40200 :::* LISTEN 1029657/docker-prox
...
tcp6 0 0 :::40055 :::* LISTEN 1027358/docker-prox
tcp6 0 0 :::8080 :::* LISTEN 1026150/docker-prox
tcp6 0 0 :::7946 :::* LISTEN 665/dockerd
udp 0 0 0.0.0.0:4789 0.0.0.0:* -
udp6 0 0 :::7946 :::* 665/dockerd
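And mapping those docker-proxy entries back to containers (sketch, standard docker ps filters):

  # only https-portal should show up for the published 80/443
  docker ps --filter publish=80 --format '{{.Names}}\t{{.Ports}}'
  docker ps --filter publish=443 --format '{{.Names}}\t{{.Ports}}'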
There are 47 containers running, most of them unique, and they are being served certificates by https-portal.
When running docker compose up -d, the containers come online and are provided with a certificate by https-portal. This routing bug does not occur when going from 0 to 47 containers.
In practice we update services and their associated containers regularly, so some services will be replaced. In that case the commands are:
docker login -u xxxx -p ${{secrets.SSH_PASS}} ${{env.REGISTRY}}
docker pull ${{env.REGISTRY}}/${{env.CONTAINER_NAME}}
docker compose up -d --remove-orphans --build
Doing this, the updated container is loaded, steveltn/https-portal realises it needs to reload, and life goes on. Often this process works just fine, but after a few dozen rounds of replacing containers one by one, we noticed that in some cases the reverse proxy now forwards the traffic for DNS name x to the container belonging to DNS name y.
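When it happens again, one piece of debug info we can grab (a sketch; monitoring_battery is just the container from the example above) is the container's address on the compose network before and after a replacement, since a recreated container can come back with a different IP:

  # compose-network IP(s) of the replaced container, before and after docker compose up -d
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' monitoring_battery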
Only https-portal is listening on 80 and 443; it will not even start if another container had those ports open. It also correctly renews certificates via the Let's Encrypt process over 80 and 443.
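To see where the proxy actually sends a given domain, we can also dump the nginx config and name resolution from inside the https-portal container (a sketch; nginx -T prints the full loaded config, and getent is assumed to be present in the image):

  # what nginx inside https-portal has configured for the affected domain
  docker compose exec https-portal nginx -T | grep -A 10 'battery.x.y'
  # what the upstream name resolves to from inside the proxy container
  docker compose exec https-portal getent hosts x-datafeed-battery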