Optimise safety dockerfile
Close #2579. This runs a bit faster on my local machine. Unfortunately, I think the main holdup is the PyTorch/CUDA install (which also slows down the inference worker image build), and there is not much we can do about that.
Yes, I tried hard to optimize it, but besides the PyTorch/CUDA install, I think the LAION-AI/blade2blade installation takes the most time. Copying all the files in this command is also slow:
COPY --chown="${APP_USER}:${APP_USER}" --from=build /build/lib ${APP_LIBS}
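One common way to keep the expensive dependency install out of the rebuild path is layer ordering: install dependencies before copying the source tree, so Docker's layer cache survives code edits. A minimal sketch, assuming a `requirements.txt`-based setup (the base image and file names here are illustrative, not the actual repo layout):

```dockerfile
FROM python:3.10-slim AS build

WORKDIR /build

# 1. Copy only the dependency manifest; this layer is invalidated only
#    when requirements.txt changes, not on every source edit.
COPY requirements.txt .

# 2. The heavy PyTorch/CUDA + blade2blade install lands in its own
#    cached layer, reused across builds as long as the manifest is stable.
RUN pip install --no-cache-dir -r requirements.txt

# 3. Copy the source last, so code changes rebuild only this cheap layer.
COPY . .
```

This does not make the install itself faster, but it means the slow layer is only rebuilt when the dependency list actually changes.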
I hope the blade2blade installation will be faster since yesterday's merge of #2505. Previously pip had to build it from the Git repo, but now it is on PyPI, so we can pip install it directly from there.
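For context, the difference in the Dockerfile would look roughly like this (the Git URL and PyPI package name are assumptions, not verified against the repo):

```dockerfile
# Before #2505: pip cloned and built the package from source on every
# uncached build (repo URL shown is illustrative):
# RUN pip install git+https://github.com/LAION-AI/blade2blade.git

# After #2505: install the published distribution from PyPI, which can
# pull a prebuilt wheel and skip the clone/build step entirely.
RUN pip install blade2blade
```

Installing from PyPI also pins builds to released versions rather than whatever commit happens to be at the repo HEAD, which makes image builds more reproducible.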