container cap not honored
- [ ] docker-plugin version you use: 1.2.2
- [ ] jenkins version you use: 2.263.3
- [ ] docker engine version you use: 19.03.12
- [ ] details of the docker container(s) involved and details of how the docker-plugin is connecting to them: not relevant.
- [ ] stack trace / logs / any technical details that could help diagnose this issue: all images and explanations are already here.
Bottom line: the container cap value I set in the docker cloud is not being honored by the plugin. I'm creating a simple pipeline job which only starts a dockerNode with some image and then sleeps for 100 seconds. No matter how many docker "clouds" I define, and no matter what value I set in the container cap, it is not honored: all the containers are created on the first docker cloud defined.
I would expect the containers to be spread across the clouds, or at least that no more than the allowed number of containers would be created on the same cloud.
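For reference, a minimal pipeline along those lines might look like this (a sketch; the image name is just a placeholder):

```groovy
// Scripted pipeline: ask the docker-plugin for a throwaway container agent,
// then idle inside it so concurrent builds stack up containers.
dockerNode(image: 'ubuntu:20.04') { // placeholder image
    sh 'sleep 100'
}
```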
More information can be provided upon request.
FYI dockerNode is experimental functionality and, sadly, the author of it hasn't pursued it much in the last few years.
I suspect that it hit a dead end, and the result was the creation of the docker-workflow plugin (aka the docker pipeline plugin), which was designed with pipelines in mind, whereas the docker-plugin pre-dates pipelines by many years.
I've just checked the code and you're half-right: it uses the first DockerCloud it finds ... except it doesn't actually use the DockerCloud at all; it just uses the endpoint defined within the first docker cloud. Any docker containers made by this dockerNode code don't belong to the DockerCloud; they "just happen" to be created within the same docker daemon it's pointing to.
FYI you don't even need to create/define a DockerCloud if you specify the dockerHost and credentialsId within the pipeline - the host and credentials from the first DockerCloud are merely used as a default if the pipeline doesn't specify them.
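For example, something like this should work with no DockerCloud configured at all (a sketch; the daemon URI, credentials ID, and image are placeholders):

```groovy
// No DockerCloud needed: point dockerNode straight at a docker daemon.
dockerNode(
    dockerHost: 'tcp://docker-host.example.com:2376', // placeholder daemon URI
    credentialsId: 'my-docker-credentials',           // placeholder Jenkins credentials ID
    image: 'jenkins/agent:latest'
) {
    sh 'hostname'
}
```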
If you want the docker-plugin to honour the container caps then you need to avoid the experimental dockerNode functionality and stick to the old boring "agent" functionality, where you define templates within each cloud and pipelines don't need to know (or care) that they're being run within a docker container.
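e.g. a pipeline that relies on cloud templates just asks for a label, and the plugin decides where (and whether) to start a container, respecting each cloud's container cap (a sketch; the label is whatever you set on your Docker agent template):

```groovy
// Declarative pipeline: request an agent by label. If that label is served
// by a Docker agent template, the docker-plugin provisions the container
// and enforces the owning cloud's container cap.
pipeline {
    agent { label 'my-docker-template-label' } // placeholder template label
    stages {
        stage('build') {
            steps {
                sh 'sleep 100'
            }
        }
    }
}
```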