Images generated with kaniko are larger than images generated with docker
Hello everybody,
I've recently started using kaniko for my CI/CD pipelines and I've found that docker images generated with kaniko are larger (sometimes noticeably larger) than images generated with raw docker.
I've put together a small Dockerfile similar to some of the stuff we use: Ubuntu with a conda installation and a custom environment.
The docker image was built and pushed like this (where $myregistry is the URL to our local docker registry):
docker build -f Dockerfile-poc -t dockerpoc .
docker tag dockerpoc $myregistry:dockerpoc
docker push $myregistry:dockerpoc
The kaniko image was built and pushed with the following job in a .gitlab-ci.yml file. The --compressed-caching=false flag is needed due to memory constraints:
.kaniko_build_local:
  image:
    name: gcr.io/kaniko-project/executor:v1.9.0-debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "gitlab-ci-token" "${CI_JOB_TOKEN}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile-poc"
      --destination "${CI_REGISTRY_IMAGE}:kanikopoc"
      --compressed-caching=false
In the GitLab web interface I can see that the two images have different sizes:
- dockerpoc: 897.92 MiB
- kanikopoc: 1005.27 MiB
If I download both images and save them to tar files, the size difference persists locally, though it is smaller:
$ ls -l dockerpoc.tar kanikopoc.tar
-rw------- 1 rinze users 2690073600 Sep 23 13:28 dockerpoc.tar
-rw------- 1 rinze users 2746887680 Sep 23 13:28 kanikopoc.tar
For other, considerably bigger images that we're using, the difference is larger still:
$ ls -l internal_project_*
-rw------- 1 rinze users 4809613312 Sep 23 13:56 internal_project_docker.tar
-rw------- 1 rinze users 5289860608 Sep 23 13:57 internal_project_kaniko.tar
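To narrow down where the extra bytes live, it can help to list per-layer sizes inside a `docker save` tarball and compare the two listings. This is a minimal sketch using only the standard library; the tarball layout (a top-level `manifest.json` with a `Layers` list) is the `docker save` format:

```python
import json
import tarfile

def layer_sizes(image_tar_path):
    """Map each layer path in a `docker save` tarball to its size in bytes."""
    with tarfile.open(image_tar_path) as tar:
        manifest = json.load(tar.extractfile("manifest.json"))
        return {layer: tar.getmember(layer).size
                for layer in manifest[0]["Layers"]}

# Usage (on the tarballs saved above), largest layers first:
#   for name, size in sorted(layer_sizes("dockerpoc.tar").items(),
#                            key=lambda kv: -kv[1]):
#       print(f"{size / 2**20:10.1f} MiB  {name}")
```

Running this on both `dockerpoc.tar` and `kanikopoc.tar` should show whether the delta is spread across all layers or concentrated in one.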
I was wondering if this effect is caused by some internal kaniko mechanics. Otherwise, is there any flag I could enable on our side to make the final images smaller?
The files Dockerfile-poc and environment.yaml can be found here: https://gist.github.com/rinze/0d78263d480526c8cb3ab90aa0fc3e6a
Triage Notes for the Maintainers
| Description | Yes/No |
|---|---|
| Please check if this is a new feature you are proposing | |
| Please check if the build works in docker but not in kaniko | |
| Please check if this error is seen when you use --cache flag | |
| Please check if your dockerfile is a multistage dockerfile | |
I've managed to bring the image size down to essentially the same size as the one obtained with plain docker by removing the local .git directory before building the image with kaniko. I found an already-closed issue that looks like exactly what is biting me: https://github.com/GoogleContainerTools/kaniko/issues/466
This fixed it, added right before the first line of the script section in the original pipeline job:
script:
- rm -rf "${CI_PROJECT_DIR}/.git"
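An alternative to deleting the directory outright is excluding it via a `.dockerignore` file at the root of the build context, which kaniko also honors; this keeps the checkout intact for later pipeline steps:

```text
# .dockerignore at the root of the build context
.git
```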
I have the same issue and I am sure that my .git folder is not going to double the image size.
I have been facing the same issue. The image size difference was huge for me. I also don't think it's the .git that is making the image size inflate. My guess is that the image built by kaniko also pushes layers from the builder stage. I reported this issue separately in #2768.
Dockerfile:
FROM <private centos image with java 8> AS builder
WORKDIR /builds
COPY . /src/
RUN ./gradlew clean build
#################### Runner Image ########################
FROM amazoncorretto:8-alpine3.17
WORKDIR /home/app
COPY --from=builder /src/build/libs/app.jar ./app.jar
COPY ./startApp.sh .
USER app-user
EXPOSE 8080
ENV CADANCE_WORKER=true
ENTRYPOINT ["/bin/ash"]
CMD ["startApp.sh"]
Screenshot from the image repository: the first image was built by kaniko, while the latter was built by docker.

I have been facing the same issue too, and I agree with @jaskeerat789: pretty sure it's related to multi-stage builds pushing layers from the builder stage.
In our case, the image reaches 1GB when using a build stage, and less than 500MB if I don't do a multi-stage build (even when starting from the same base image: FROM docker.io/node:20.4.0-bullseye).
We're using GitLab CI and cannot push the artifact because the .tar image is bigger than 1000MB. For now, we're doomed and cannot use the multi-stage best practice.
Yep, same problem here. Any changes?
We're having the same issue. Is there anything we could do about this?
Same issue here.
An image based on openjdk:17-jdk-bullseye generated with docker build is around 850MB; with kaniko it goes up to 2.6GB.
Running into this while distributing a compiled, compressed binary package in a kaniko-in-docker CI environment. The binary is ~100MB.
Details
During a separate compilation stage in GitLab CI, I generate ~5GB of build output. Then a compressed binary archive is built, and all of the uncompressed binary material is removed.
In the next stage, kaniko is used to build an image containing only the package. The Dockerfile copies the package from the build context and sets the entry point. That is it. The total size of a pure docker local build is ~220MB = size(base_image) + size(package).
However, the kaniko image size is 1.49GB in the registry. Perhaps more revealing: when pulling the registry image, a ~5GB layer is downloaded for some reason, which is approximately the size of the entire output of the compilation process.
If I had to guess, Kaniko is somehow including layers outside the scope of the final image.
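One way to check that guess is to open the suspect layer inside the pulled `docker save` tarball and see what it actually contains; each layer is itself a tar archive embedded in the image tarball. A sketch with placeholder names (`kanikopoc.tar` and the layer path would come from the image's manifest.json):

```python
import tarfile

def biggest_files_in_layer(image_tar_path, layer_name, top=10):
    """List the largest regular files inside one layer of a `docker save` tarball."""
    with tarfile.open(image_tar_path) as image:
        # Each layer entry is itself a tar archive embedded in the image tarball.
        with tarfile.open(fileobj=image.extractfile(layer_name)) as layer:
            files = [(m.size, m.name) for m in layer.getmembers() if m.isfile()]
    return sorted(files, reverse=True)[:top]

# Usage (layer path taken from manifest.json inside the saved image):
#   for size, name in biggest_files_in_layer("kanikopoc.tar", "abc123/layer.tar"):
#       print(f"{size / 2**20:10.1f} MiB  {name}")
```

If the oversized layer turns out to contain the intermediate compilation output, that would confirm the files removed before the build are still being snapshotted.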
Why I wound up here
I found I could not successfully perform a multi-stage build in kaniko. In the multi-stage build, I compiled the package in a build image and copied only the compressed binary to the final image. Kaniko hit the OOM killer during the filesystem snapshot with 16GB of memory. This didn't make sense at the time, since I can build the image locally with only 10GB.
So I split it apart into a compile stage, and separate build stage.
Summary
What it looks like from my perspective (with a grain of salt, since I am new to kaniko) is that one or a few related underlying bugs in snapshotting and/or manifest inclusion are causing the large image sizes, both in multi-stage builds and in single-stage builds run in a CI pipeline with prior work in a docker-managed filesystem (i.e. when building with kaniko inside a running docker container).
I have a feeling if I could somehow spawn a fresh file system with only the binary in it during CI, and I could run Kaniko in a native system rather than in docker, Kaniko would build an image that was the expected size.
Same problem here. Any changes?