
max depth exceeded

Open katsar0v opened this issue 7 years ago • 9 comments

  • [x] This is a bug report
  • [ ] I searched existing issues before opening this one
  • [x] This is a feature request

Expected behavior

Build image

Actual behavior

Error max depth exceeded

Steps to reproduce the behavior

I am building an image that inherits from another image, which in turn inherits from a third image. I get this error, which is not documented at all. Is there any documentation or further information about it?
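For reference, a rough sketch that should reproduce the error by building a chain of images, each FROM the previous one (image names here are made up):

```bash
# Build a chain of images, each adding one layer on top of the previous one,
# until the build fails with "max depth exceeded".
docker pull alpine
docker tag alpine layer-test:0
for i in $(seq 1 130); do
  printf 'FROM layer-test:%d\nRUN touch /marker-%d\n' "$((i - 1))" "$i" \
    | docker build -t "layer-test:$i" - || break
done
```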

Output of docker version:

Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:25:03 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:23:27 2018
  OS/Arch:          linux/amd64
  Experimental:     false

Output of docker info:

Containers: 7
 Running: 5
 Paused: 0
 Stopped: 2
Images: 1291
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.13.0-46-generic
Operating System: Ubuntu 17.10
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.688GiB
Name: katsarov-ThinkPad-T470s
ID: R5S6:67UI:QUZK:YBZS:VXLQ:YEFK:SEGU:7EAO:WTUZ:63R5:V3UX:NEJT
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false



katsar0v avatar Aug 23 '18 10:08 katsar0v

The overlay2 storage driver supports at most 125 layers, so I suspect that's what you're running into (the error message is defined here: https://github.com/moby/moby/blob/8e610b2b55bfd1bfa9436ab110d311f5e8a74dcb/layer/layer.go#L53).
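To check how many filesystem layers an image already has (and how close it is to that limit), something like this should work (`my-image` is a placeholder):

```console
$ docker image inspect --format '{{len .RootFS.Layers}}' my-image
```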

First of all, I would recommend optimizing your images / Dockerfile, as mounting this many layers can affect performance: the container will take longer to start (all layers have to be mounted), and modifying files inside the container may be less performant.
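As a generic illustration of that kind of optimization (not taken from the reporter's Dockerfile), several RUN steps can usually be collapsed into one, so the filesystem is only snapshotted once:

```dockerfile
FROM ubuntu:22.04

# Instead of one RUN per command (one layer each):
#   RUN apt-get update
#   RUN apt-get install -y curl
#   RUN rm -rf /var/lib/apt/lists/*
# chain them in a single RUN, which produces a single layer:
RUN apt-get update \
 && apt-get install -y curl \
 && rm -rf /var/lib/apt/lists/*
```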

We should document this though, and (if possible) make the error message more clear.

/ cc @dmcgowan @ahh-docker

thaJeztah avatar Nov 14 '18 23:11 thaJeztah

Yes, it would be good if the error were documented and the reason explained. I could just as well use a single script that installs and sets up everything in 10 steps instead of 125 layers, but layers have caching and other benefits, so they are more useful. I find this 125-layer limitation stupid, maybe because I don't understand it and it is not documented.

katsar0v avatar Nov 16 '18 18:11 katsar0v

@thaJeztah can you please file a documentation issue for this in docker.github.io? Thanks.

ahh-docker avatar Nov 16 '18 19:11 ahh-docker

> I would recommend optimizing your images / Dockerfile, as mounting this many layers can affect performance

@thaJeztah I was not aware of that drawback. Is it the same if we split the content across several Dockerfiles that inherit via the FROM directive? Or is it the total amount of layers that counts?

boussou avatar Dec 26 '19 21:12 boussou

It's the total amount of layers. Note that not every step in a Dockerfile creates a new layer; only steps that modify the filesystem (such as RUN) introduce one. Depending on your use case, you can also reduce the number of layers in the final image with multi-stage builds, e.g. by copying artefacts from intermediate stages into the final stage.
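A minimal multi-stage sketch (base images and build steps are placeholders); only the layers of the final stage, plus whatever is copied into it, end up in the resulting image:

```dockerfile
# Build stage: all of its layers are discarded from the final image
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Final stage: starts from a fresh base, so only these few layers count
FROM debian:bookworm-slim
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["app"]
```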

If I'm not mistaken, the current limit of 125 layers is due to the kernel's ARG_MAX, which limits the number and length of arguments that can be passed when mounting the layers. That limit can be raised in a custom kernel, but it's not something we could rely on, as it would make those images non-interoperable on systems without the custom configuration.
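For reference, the value on a given host can be checked with getconf; it varies with the kernel and stack-size configuration:

```console
$ getconf ARG_MAX   # maximum combined size of argv + environment, in bytes
```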

thaJeztah avatar Dec 27 '19 09:12 thaJeztah

Thanks. I have to read about multi-stage builds.

My question was more about the performance impact at runtime. Do you have references for that? If so, I would probably rewrite my Dockerfile to put everything into a bash script and execute it in a single RUN command.
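Roughly, what I have in mind (`setup.sh` being a placeholder for such a script):

```dockerfile
FROM ubuntu:22.04

# One COPY layer for the script, one RUN layer for everything it installs
COPY setup.sh /tmp/setup.sh
RUN bash /tmp/setup.sh
```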

boussou avatar Dec 27 '19 11:12 boussou

If that's the case regarding the performance impact, I now really think this is a bad design. Instead of creating a layer on each RUN command, RUN should be treated like a plain bash command, and the user should decide when to "snapshot" a layer, e.g. with a SNAPSHOT or LAYER instruction ;-)

boussou avatar Dec 27 '19 14:12 boussou

1

wyaopeng avatar Jun 07 '22 11:06 wyaopeng

> It's the total amount of layers. Note that not every step in a Dockerfile creates a new layer; only steps that modify the filesystem (such as RUN) introduce one. Depending on your use case, you can also reduce the number of layers in the final image with multi-stage builds, e.g. by copying artefacts from intermediate stages into the final stage.
>
> If I'm not mistaken, the current limit of 125 layers is due to the kernel's ARG_MAX, which limits the number and length of arguments that can be passed when mounting the layers. That limit can be raised in a custom kernel, but it's not something we could rely on, as it would make those images non-interoperable on systems without the custom configuration.

I feel this arbitrary restriction, based on kernel limits, should be re-evaluated, as incompatible kernels will become less and less prevalent. But more importantly: why must the tooling be bound to this ARG_MAX limit in the first place? It seems to me there should be a solution. Is the tooling really reliant on passing a full history of hashes through execv or the like? And if it is, that should surely be avoidable!

I mean, from a naive perspective, a list of arguments should almost always be replaceable by a temp file or even a named pipe, with the arguments pushed through separated by zero bytes.
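Purely as an illustration of that idea, nothing to do with how the daemon actually works: a list too long for a single argv can be written NUL-separated to a file and consumed from there.

```bash
# Producer: write 100000 entries, NUL-separated, to a file instead of an argv
seq 1 100000 | tr '\n' '\0' > /tmp/long-list

# Consumer: read the list back from the file in safely sized batches (GNU xargs)
xargs -0 -a /tmp/long-list -n 5000 echo > /dev/null
```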

Admittedly, I don't know the first thing about Docker internals.

simlei avatar Nov 12 '24 20:11 simlei