Publishing of ports does not work on container restart. (`msg="conflict with ID 7"`)
Description
When a container is restarted, its published port(s) are no longer forwarded to the host.
Steps to reproduce the issue
- In a terminal, run `nerdctl run -it -p 3003:80 nginxdemos/hello`.
- In another terminal tab, run `curl http://localhost:3003/`. This returns some HTML.
- Stop the running container by pressing CTRL+C.
- Start the container again by running `nerdctl run -it -p 3003:80 nginxdemos/hello`. This results in an error:

      FATA[0000] failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: time="2023-01-17T22:47:38Z" level=fatal msg="conflict with ID 7"
      Failed to write to log, write /home/siravara.linux/.local/share/nerdctl/1935db59/containers/default/7e9818b7fad3a676ce9fcca0e9da5aca0bb5addc46f677443778973521112ce0/oci-hook.createRuntime.log: file already closed: unknown

- Restart the container again. This time there are no errors.
- Run `curl http://localhost:3003/` again. This time the request does not reach the container. The error message is `curl: (56) Recv failure: Connection reset by peer`. (The full sequence is consolidated below.)
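The same repro, consolidated as a shell sequence. This is a sketch of the manual steps above; the Ctrl+C step cannot be scripted, so it is noted in comments.

```shell
# Terminal 1: run nginx in the foreground with port 3003 published
nerdctl run -it -p 3003:80 nginxdemos/hello

# Terminal 2: the published port works on the first run
curl http://localhost:3003/            # returns HTML

# Terminal 1: press Ctrl+C to stop the container, then start it again.
# In rootless mode this fails with: msg="conflict with ID ..."
nerdctl run -it -p 3003:80 nginxdemos/hello

# Terminal 1: run it once more; this time it starts without errors
nerdctl run -it -p 3003:80 nginxdemos/hello

# Terminal 2: the published port is no longer reachable
curl http://localhost:3003/            # curl: (56) Recv failure: Connection reset by peer
```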
Describe the results you received and expected
nerdctl shouldn't return an error on step 4 and the request should be correctly handled in step 5.
What version of nerdctl are you using?
nerdctl version 1.1.0
Are you using a variant of nerdctl? (e.g., Rancher Desktop)
Others
Host information
Client:
 Namespace: default
 Debug Mode: false

Server:
 Server Version: v1.6.12
 Storage Driver: overlayfs
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Log: fluentd journald json-file syslog
  Storage: native overlayfs fuse-overlayfs stargz
 Security Options:
  apparmor
  seccomp
   Profile: default
  cgroupns
  rootless
 Kernel Version: 5.19.0-26-generic
 Operating System: Ubuntu 22.10
 OSType: linux
 Architecture: aarch64
 CPUs: 4
 Total Memory: 3.813GiB
 Name: lima-default
 ID: 6eeecdb0-abfa-4d0b-b8e2-d67487f7ee6a

WARNING: AppArmor profile "nerdctl-default" is not loaded. Use 'sudo nerdctl apparmor load' if you prefer to use AppArmor with rootless mode. This warning is negligible if you do not intend to use AppArmor.
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
This was reported by a finch user. https://github.com/runfinch/finch/issues/164
> Stop the running container by pressing CTRL+C.

The container has to be removed, not just stopped.
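A sketch of what that looks like in practice (the container ID placeholder below is whatever `nerdctl ps -a` reports; `--rm` is an alternative that cleans the container up automatically on exit):

```shell
# Remove the stopped container that still holds the port mapping,
# then start a new one on the same port
nerdctl ps -a
nerdctl rm <container-id>
nerdctl run -it -p 3003:80 nginxdemos/hello

# Alternatively, auto-remove the container when it exits
nerdctl run --rm -it -p 3003:80 nginxdemos/hello
```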
@AkihiroSuda do you think the new running container is conflicting with an existing container ID?
In rootful mode, I don't get the `FATA[0000] failed to create shim task: ... msg="conflict with ID 7"` / `oci-hook.createRuntime.log: file already closed: unknown` error quoted above. However, I still can't connect to the service after running it again.
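"Rootful mode" here means running the same commands via sudo against the system-wide containerd; a minimal sketch of that check, assuming a default rootful install:

```shell
# Rootful repro: same sequence as above, but via sudo
sudo nerdctl run -it -p 3003:80 nginxdemos/hello
# Press Ctrl+C, then run the same command again: no "conflict with ID" error here,
# but the published port is still unreachable afterwards
sudo nerdctl run -it -p 3003:80 nginxdemos/hello
curl http://localhost:3003/        # connection still fails
```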
This is definitely a bug, and it still exists!
Networking in nerdctl is still not stable, and bridged networking still breaks when a container is stopped.
When a container is started for the very first time, any exposed/published ports are reachable. However, after the container is stopped using Ctrl+C, networking breaks and the container can no longer be accessed.
The first attempt to run the container again results in the following error output:
    FATA[0000] failed to create shim task: OCI runtime create failed: runc create failed:
    unable to start container process: error during container init: error running hook #0:
    error running hook: exit status 1, stdout: , stderr: time="2023-10-16T22:17:57+02:00"
    level=fatal msg="failed to expose ports in rootless mode: conflict with ID 21"
    Failed to write to log, write /home/user/.local/share/nerdctl/1935db59/containers/default/6348ffef686bf6c7f964fa6c74be52243cb99a2f270d48f2e032accf976e4237/oci-hook.createRuntime.log: file already closed: unknown
Subsequent attempts sometimes fail too, but the container does eventually start. Even then, it is still not accessible through the published ports.
The only way I can get things back on track is to prune the system.
This looks like a bug in the network management, where port allocations are not released once they are no longer in use.
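A sketch of the prune workaround mentioned above, assuming `nerdctl system prune` behaves like its Docker counterpart and removes stopped containers and unused networks:

```shell
# Blunt workaround: clear out stopped containers (and their stale port state),
# then start again on the same published port
nerdctl system prune
nerdctl run -d -p 3003:80 nginxdemos/hello
curl http://localhost:3003/
```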
+1 on this. Stopping and starting the same container shouldn't require a VM stop and start to be able to publish on the same port again.
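For reference, the VM-level restart being objected to looks like this (assuming the Lima VM named `default` from the host info above; Finch users would run `finch vm stop` and `finch vm start` instead):

```shell
# Heavy-handed recovery: restart the whole VM that hosts containerd
limactl stop default
limactl start default
```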