ECS: Allow toggling AssignPublicIp
/!\ Docker Compose V2 has moved to github.com/docker/compose; this repository is for the "Cloud Integrations". Issues related to Docker Compose itself should be reported there.
Description
Redo of #2135. All tasks are assigned a public IP. Combined with #1783, this creates a bit of a security gap. Assume the following:
```yaml
services:
  caddy:
    image: caddy
    ...
    ports:
      - 80
      - 443
    networks:
      - backend
  sensitive_backend:
    image: python
    ...
    networks:
      - backend
networks:
  backend: {}
```
While caddy is open to the public (publishing ports causes an LB to be attached), we don't want sensitive_backend to be exposed.
However, they're both assigned a public IP and joined to the same security group.
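For reference, the ECS integration converts each service to an `AWS::ECS::Service` CloudFormation resource, and every one of them gets a network configuration roughly along these lines (a sketch — the resource name, subnet ID, and security-group reference below are illustrative, not copied from actual generated output):

```yaml
SensitivebackendService:
  Type: AWS::ECS::Service
  Properties:
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: ENABLED    # applied to every service, with no way to opt out
        Subnets:
          - subnet-aaaa1111        # illustrative subnet ID
        SecurityGroups:
          - !Ref BackendNetwork    # shared group for the "backend" network
```

Since sensitive_backend sits in the same shared security group and also gets `AssignPublicIp: ENABLED`, it is reachable from the internet even though it publishes no ports in the compose file.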
This effectively allows public access to the container. The steps that IMO should be taken are:
- Make public IPs opt-in (this)
- Redo the security group assignment (#1783)
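The opt-in could be as small as a per-service field. The key name below is purely hypothetical — it does not exist today and is only meant to illustrate the shape of the change:

```yaml
services:
  caddy:
    image: caddy
    x-aws-assign_public_ip: true   # hypothetical opt-in key, not a real flag
    ports:
      - 80
      - 443
  sensitive_backend:
    image: python                  # no key: would default to no public IP
```

Defaulting to no public IP matches the expected behavior below: only services explicitly asking for one would get it.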
Steps to reproduce the issue: See compose file above
Describe the results you received: All services are assigned a public IP address
Describe the results you expected: Only services asking for a public IP (if any) should be assigned one
Additional information you deem important (e.g. issue happens only occasionally): I'm not sure any service should have a public IP considering access should be done via LBs, but it's cheap to allow an optin.
Output of `docker-compose --version`:
(paste your output here)
Output of `docker version`:
Docker version 20.10.22, build 3a2c30b63a
Output of `docker context show`:
You can also run `docker context inspect context-name` to give us more details, but don't forget to remove sensitive content.
```json
[
  {
    "Name": "nitz-ecs",
    "Metadata": {
      "Type": "ecs"
    },
    "Endpoints": {
      "docker": {
        "SkipTLSVerify": false
      },
      "ecs": {
        "Profile": "nitz"
      }
    },
    "TLSMaterial": {},
    "Storage": {
      "MetadataPath": "/home/nitz/.docker/contexts/STUFF",
      "TLSPath": "/home/nitz/.docker/contexts/tls/STUFF"
    }
  }
]
```
Output of `docker info`:
```text
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  compose: Docker Compose (Docker Inc., 2.14.2)

Server:
 Containers: 3
  Running: 0
  Paused: 0
  Stopped: 3
 Images: 149
 Server Version: 20.10.22
 Storage Driver: btrfs
  Build Version: Btrfs v6.0.2
  Library Version: 102
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 9ba4b250366a5ddde94bb7c9d1def331423aa323.m
 runc version:
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 6.1.4-arch1-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 31.07GiB
 Name: pluto
 ID: PQMU:DGSD:ZOWK:BP5Q:JH5Y:35ZT:3OV4:SRAR:IFNQ:MAKE:FLCG:WSHF
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
```
Additional environment details (AWS ECS, Azure ACI, local, etc.): AWS ECS
This is a massive security flaw. It makes the Cloud Integration on AWS unusable for production in common scenarios: any architecture with some services that are not public-facing, and it's hard to imagine one without them.
Thanks for validating my findings on this @BackSlasher .
To anyone looking at this issue in the repo: this tool set seems to have been abandoned by Docker as of 2023. This is only one of many deal-breaking issues raised in the last six months with no reply from the maintainers. (You will see someone post about his own tool that does the same thing, which he suggests as an alternative, but the maintainers have been radio silent for some time.) I wish I had known this was going to happen when I picked this tool in mid-2022.
Ha @henry-hc, you beat me by a few minutes. Is it annoying to suggest an alternative? In any case, here is how you'd configure it with that other tool.
Good luck,
And yes, it's a shame there's no follow-up on any of this most of the time. Hence why I continued my dev work :shrug:
I think #2215 would solve this rather nicely. Couldn't get a review, though :(