Docker layers local cache
Is your feature request related to a problem? Please describe.
As mentioned in Slack, the Python Docker image can be quite large (around 1 GB total). Since Docker jobs in InfraBox run Docker-in-Docker, no layer cache is available, so every layer has to be fetched again for each job. The existing image cache solution doesn't solve this: the backend for images served by the InfraBox registry still has to fetch from S3, which doesn't reduce loading times compared to simply downloading the layers (and it adds the cost of pushing the updated image back).
Describe the solution you'd like
Ideally, we would like a way to use a local cache of images (e.g. on the node) to speed up builds. For instance, multiple sequential jobs that use python:3.6 as a base image would not need to download the whole image again. At a minimum, letting sequential jobs reuse a local cache would already improve things a lot, since newly-built images could be handed off quickly instead of waiting for a full download each time. Ideally, though, the solution would be transparent: all jobs on a node would honor the same ImagePullPolicy semantics that Kubernetes uses and share the same image cache to speed up builds.
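One way to approximate the node-local cache described above is a pull-through registry mirror running on each node, which the Docker-in-Docker daemon of every job points at. This is only a sketch using standard Docker features; the `registry-cache` name and the way InfraBox would wire this into jobs are assumptions, not existing InfraBox functionality.

```shell
# Run a node-local pull-through cache using the stock registry image.
# REGISTRY_PROXY_REMOTEURL enables proxy (mirror) mode.
docker run -d --name registry-cache -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Inside each job's Docker-in-Docker container, point the daemon at the
# node-local mirror so layers of e.g. python:3.6 are fetched from the
# upstream registry once per node instead of once per job.
# (registry-cache would need to resolve to the node-local mirror.)
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["http://registry-cache:5000"]
}
EOF
```

With this setup the first job on a node still pays the full download, but subsequent jobs pull the layers from the local mirror over the node's own network.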
Describe alternatives you've considered
The existing image cache actually slowed things down, since our issue wasn't reducing the CPU time of the build but the transfer time of larger images.
Additional context
See the above Slack link for the thread discussing this.