localGPT
Docker not building: ModuleNotFoundError: No module named 'utils'
The same problem occurs on both Windows and Ubuntu:
docker build -t localgpt .
fails with the following output:
[+] Building 222.7s (16/17) docker:default
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 1.34kB 0.0s
=> resolve image config for docker.io/docker/dockerfile:1 0.4s
=> [auth] docker/dockerfile:pull token for registry-1.docker.io 0.0s
=> CACHED docker-image://docker.io/docker/dockerfile:1@sha256:ac85f380a63b13dfcefa89046420e1781752bab202122f8f50032edf31be0021 0.0s
=> [internal] load metadata for docker.io/nvidia/cuda:11.7.1-runtime-ubuntu22.04 0.4s
=> [auth] nvidia/cuda:pull token for registry-1.docker.io 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 84B 0.0s
=> [stage-0 1/9] FROM docker.io/nvidia/cuda:11.7.1-runtime-ubuntu22.04@sha256:07804eb7002b8411f3ec1f8b17e707fb6f8fa50572923787c3966a0af1ef4b92 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 1.51MB 0.0s
=> CACHED [stage-0 2/9] RUN apt-get update && apt-get install -y software-properties-common 0.0s
=> CACHED [stage-0 3/9] RUN apt-get install -y g++-11 make python3 python-is-python3 pip 0.0s
=> [stage-0 4/9] COPY ./requirements.txt . 0.1s
=> [stage-0 5/9] RUN --mount=type=cache,target=/root/.cache CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --timeout 100 -r requirements.txt llama-cpp-python==0.1.83 218.5s
=> [stage-0 6/9] COPY SOURCE_DOCUMENTS ./SOURCE_DOCUMENTS 0.2s
=> [stage-0 7/9] COPY ingest.py constants.py ./ 0.2s
=> ERROR [stage-0 8/9] RUN --mount=type=cache,target=/root/.cache python ingest.py --device_type cpu 2.4s
------
> [stage-0 8/9] RUN --mount=type=cache,target=/root/.cache python ingest.py --device_type cpu:
1.963 Traceback (most recent call last):
1.963 File "//ingest.py", line 10, in <module>
1.963 from utils import get_embeddings
1.963 ModuleNotFoundError: No module named 'utils'
------
Dockerfile:18
--------------------
16 | # If this changes in the future you can `docker build --build-arg device_type=cuda . -t localgpt` (+GPU argument to be determined).
17 | ARG device_type=cpu
18 | >>> RUN --mount=type=cache,target=/root/.cache python ingest.py --device_type $device_type
19 | COPY . .
20 | ENV device_type=cuda
--------------------
ERROR: failed to solve: process "/bin/sh -c python ingest.py --device_type $device_type" did not complete successfully: exit code: 1
Related to: Issue #739
You can refer to Docker Build no module named 'utils' #739.
The solution is to modify the first few lines of the Dockerfile to:
FROM nvidia/cuda:11.7.1-runtime-ubuntu22.04
COPY ingest.py constants.py utils.py ./
RUN apt-get update && apt-get install -y software-properties-common && apt-get install ffmpeg libsm6 libxext6 -y
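Equivalently, the minimal change is to add utils.py to the COPY step that already exists in the stock Dockerfile, so that `from utils import get_embeddings` in ingest.py resolves at build time. A sketch based on the stage numbers and commands visible in the build log above (your Dockerfile line numbers may differ):

```dockerfile
# Before (stage 7/9 in the log): utils.py is never copied into the image,
# so "from utils import get_embeddings" fails when ingest.py runs.
#   COPY ingest.py constants.py ./

# After: copy utils.py alongside the other modules before running ingest.py.
COPY ingest.py constants.py utils.py ./
RUN --mount=type=cache,target=/root/.cache python ingest.py --device_type $device_type
```

After editing the Dockerfile, rebuild with the same `docker build -t localgpt .` command; the previously cached layers up to the COPY step will be reused.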