docker volume rm --force does not let you remove a local volume from a stopped container
- [x] This is a bug report
- [ ] This is a feature request
- [x] I searched existing issues before opening this one
Expected behavior
`docker volume rm --force` lets you remove local volumes that are in use by stopped containers
Actual behavior
`docker volume rm --force` does not let you remove local volumes that are in use by stopped containers
Steps to reproduce the behavior
- Run a container that uses a local volume
- Stop the container
- Attempt to remove the volume with `docker volume rm --force`
- Observe that you can not remove the volume, even though the container is stopped and you passed `--force` on the command line.
Output of `docker version`:

```
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov 7 00:49:01 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov 7 00:16:44 2018
  OS/Arch:          linux/amd64
  Experimental:     false
```
Output of `docker info`:

```
Containers: 8
 Running: 0
 Paused: 0
 Stopped: 8
Images: 198
Server Version: 18.09.0
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: c4446665cb9c30056f4998ed953e6d4ff22c7c39
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: fec3683
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.15.0-39-generic
Operating System: Ubuntu 18.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 7.516GiB
Name: tabor-xps13
ID: RLNX:PMFX:WV3J:W2FZ:ISLM:4XB2:BTNF:7UHX:ZTDT:BOLR:LA4S:LRRX
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

WARNING: No swap limit support
```
I'm running on Ubuntu 18.04.
This is working as expected. We cannot remove volumes which are referenced.
The force flag is for the case where the volume driver fails for whatever reason: it lets the user tell Docker to go ahead and delete the volume from Docker's own records anyway.
Perhaps that is true, but neither the command-line help nor the man page says so. The command-line help says that `--force` will "Force the removal of one or more volumes".
I changed a top-level volume definition but kept the same name. Docker told me that I should remove the volume with that name. I ran the command with `-f`, understanding what I was doing, but couldn't delete it.
What worked for me was just pruning my environment:
```
$ docker container prune
$ docker volume prune
$ docker network prune
```
Does anyone know how to de-reference volumes? My approach is destructive but I didn't care. Others may want something less nuclear.
The error message you receive in these situations looks like:

```
Error response from daemon: unable to remove volume: remove mydata: volume is in use - [1cbcfa3d47a32db7b0075e113216f7146a436a4da22a97dc2f7b60c68de95c3d]
```
What is that ID? How can it be used to de-reference the volume?
I think many have been bit by this type of thing: https://serverfault.com/q/892656/409848
Volumes can only be referenced by containers. The ID is a container ID.
The only way to de-reference the volume is to remove the container.
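Since the bracketed part of the daemon error is a container ID, it can be pulled out programmatically and fed to `docker rm -fv`. A minimal sketch (the `referencing_containers` helper and the regex are my own, based only on the message format quoted above, not on any Docker API):

```python
import re

def referencing_containers(error_message: str) -> list[str]:
    """Extract container IDs from a 'volume is in use' daemon error.

    Assumes the format shown in this thread:
    ... volume is in use - [<id>, <id>, ...]
    """
    match = re.search(r"volume is in use - \[([^\]]+)\]", error_message)
    if not match:
        return []
    return [cid.strip() for cid in match.group(1).split(",")]

msg = ("Error response from daemon: unable to remove volume: remove mydata: "
       "volume is in use - "
       "[1cbcfa3d47a32db7b0075e113216f7146a436a4da22a97dc2f7b60c68de95c3d]")
print(referencing_containers(msg))
```

Each extracted ID can then be inspected with `docker inspect <id>` or removed with `docker rm -fv <id>`.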
The `--force` option sets the `cfg.PurgeOnError` option, which is used here: https://github.com/moby/moby/blob/2df693e533e904f432c59279c07b2b8cbeece4f0/volume/service/service.go#L148-L162

```go
v, err := s.vs.Get(ctx, name)
if err != nil {
	if IsNotExist(err) && cfg.PurgeOnError {
		return nil
	}
	return err
}

err = s.vs.Remove(ctx, v, rmOpts...)
if IsNotExist(err) {
	err = nil
} else if IsInUse(err) {
	err = errdefs.Conflict(err)
} else if IsNotExist(err) && cfg.PurgeOnError {
	err = nil
}
```
Perhaps the flag description (and the API description, https://docs.docker.com/engine/api/v1.40/#operation/VolumeDelete) should be updated to describe that it's used to not produce an error if the volume doesn't exist (or no longer exists).
I guess the option was added for situations where a race condition causes problems: the volume is in the process of being removed by the volume driver, and is no longer present at the moment the actual "delete" is attempted.
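That matches the code above: `PurgeOnError` only swallows the not-exist error, never the in-use one. Here is a toy Python model of that decision (the `remove_volume` helper and error names are mine; this is not the moby implementation):

```python
class NotExistError(Exception):
    """Volume does not exist."""

class InUseError(Exception):
    """Volume is still referenced by one or more containers."""

def remove_volume(volumes: dict, name: str, force: bool = False) -> None:
    """Toy model of the daemon's removal logic.

    `volumes` maps volume name -> set of referencing container IDs.
    `force` (PurgeOnError) only suppresses the "no such volume" error;
    a volume that is still referenced is refused either way.
    """
    refs = volumes.get(name)
    if refs is None:
        if force:
            return  # force: a missing volume is treated as already removed
        raise NotExistError(f"no such volume: {name}")
    if refs:
        raise InUseError(f"remove {name}: volume is in use - {sorted(refs)}")
    del volumes[name]
```

Running it shows why `-f` "doesn't work" here: forcing removal of an in-use volume still raises, while forcing removal of a missing volume silently succeeds.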
So a volume can be in use even by a stopped container. Right? In which case you would need to do:
```
$ docker container stop <container-id>
$ docker container rm <container-id>
```

Followed by:

```
$ docker volume rm <volume-id>
```
For some reason in my rush to do what I needed to do, I did not make the connection that a stopped container still references a volume. I think the above combination would have also worked for me. It seems glaringly obvious in hindsight.
> So a volume can be in use even by a stopped container. Right?
Correct: the reason for marking those volumes as "in use" is that:

- a stopped container isn't "gone", and could be started (again)
- someone could `docker create` a container (using a volume), and `docker start` it separately
- the above also applies to `docker run` (which is a combination of `docker create` followed by `docker start`): marking the volume as "in use" prevents race conditions where the volume could be removed between those steps

In short: containers should generally not contain "state" and should be considered ephemeral (you can `docker pull` the image and start a new container), but volumes contain data that should be preserved, so we try to prevent accidental removal of volumes in the situations listed above.
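The "in use" rule can be sketched as a toy model (the `Container` type and `volume_references` helper are hypothetical, not Docker internals): a volume counts as referenced by any container that mounts it, whether or not that container is running.

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    id: str
    running: bool = False
    volumes: set = field(default_factory=set)

def volume_references(volume: str, containers: list) -> list:
    """IDs of every container that references the volume.

    Note that the `running` flag is deliberately ignored: a stopped
    (or created-but-not-started) container still pins the volume,
    because it could be started again.
    """
    return [c.id for c in containers if volume in c.volumes]
```

This is why stopping a container is not enough to free its volumes; only removing the container drops the reference.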
> In which case you would need to do:
Yes; if you want to destroy/remove a container, including any "anonymous" volumes that may be attached, you can use `docker rm -fv <container-id>`: `-f` forces killing the container if it's still running (without waiting for it to shut down cleanly), and `-v` removes the anonymous volumes that are attached.
After that, `docker volume rm <volume-name>` allows you to remove named volumes.
I have the same problem, I have deleted all the containers, but I cannot remove the volume.
And I do not have the containers to try `docker rm -fv <container-id>`.
```
w@w:~$ sudo docker container ls
CONTAINER ID   IMAGE          COMMAND                  CREATED       STATUS         PORTS                                                NAMES
0875ca29dc35   ark74/nc_fts   "/tini -- /usr/local…"   2 weeks ago   Up 5 minutes   127.0.0.1:9200->9200/tcp, 127.0.0.1:9300->9300/tcp   fts_esror
w@w:~$ sudo docker volume ls
DRIVER   VOLUME NAME
local    esdata
local    snipe-vol
local    snipesql-vol
w@w:~$ sudo docker container ls
CONTAINER ID   IMAGE          COMMAND                  CREATED       STATUS         PORTS                                                NAMES
0875ca29dc35   ark74/nc_fts   "/tini -- /usr/local…"   2 weeks ago   Up 6 minutes   127.0.0.1:9200->9200/tcp, 127.0.0.1:9300->9300/tcp   fts_esror
w@w:~$ sudo docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
20ceef544a8b   bridge   bridge   local
799b799a3ffa   host     host     local
58047c88bc54   none     null     local
w@w:~$ sudo docker volume rm -f snipe-vol snipesql-vol
Error response from daemon: remove snipe-vol: volume is in use - [d5ef36f089738154e6e611b802e570b9416a9bc939ce7c0f4801e6a57ddf05f1]
Error response from daemon: remove snipesql-vol: volume is in use - [c51481ca6ee8f0240281b5f139689a7e99fc9d44d406fd3f30a76a84eacfa623]
```
Solved it by running `docker ps -a -q`, comparing that list with the containers `sudo docker container ls` shows as actually running, and removing the stopped ones (the ones I thought I had already deleted) using `docker rm -fv <container-id>`.
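That comparison step is just a set difference (the `stale_containers` helper below is hypothetical, not part of the Docker CLI): IDs that `docker ps -a -q` reports but `docker container ls` does not show are the stopped containers still pinning the volumes.

```python
def stale_containers(all_ids, running_ids):
    """Container IDs listed by `docker ps -a -q` (all containers)
    but not shown by `docker container ls` (running only) --
    i.e. stopped containers, the candidates for `docker rm -fv`."""
    running = set(running_ids)
    return [cid for cid in all_ids if cid not in running]
```

Feeding it the IDs from the transcript above would leave only the two stopped containers named in the "volume is in use" errors.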
> `docker ps -a -q`, then compare the list with the ones that I actually still have (`sudo docker container ls`) and delete the ones that I have already deleted using `docker rm -fv`
This is very unintuitive but I can confirm that it works.
> Solved it using `docker ps -a -q`, then compare the list with the ones that I actually still have (`sudo docker container ls`) and delete the ones that I have already deleted using `docker rm -fv <container-id>`
It worked for me. Thank you! Appreciate it.