
--profile linux-gpu CANNOT pull-model

meicale opened this issue 2 years ago · 5 comments

I get errors like these:

......
genai-stack-pull-model-1  | Error: Head "http://llm-gpu:11434/": dial tcp  <my ip>:11434: connect: no route to host
genai-stack-pull-model-1 exited with code 1
......
service "pull-model" didn't complete successfully: exit 1

I can run this using --profile linux, but it's just too slow, and I can't use my GPU!

meicale avatar Nov 26 '23 12:11 meicale

Add this to docker-compose.yml, under the pull-model service:

extra_hosts:
      - "host.docker.internal:host-gateway"
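
In context, the pull-model service would then look roughly like this (a sketch only; the placeholder comment stands in for whatever keys are already in the stack's compose file):

pull-model:
  # ... image, command, environment as in the original file ...
  extra_hosts:
    # map host.docker.internal to the Docker host's gateway IP
    - "host.docker.internal:host-gateway"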

DeanXu2357 avatar Nov 27 '23 05:11 DeanXu2357

Are you using OLLAMA_BASE_URL=http://llm-gpu:11434 in your .env file? If yes, is your llm-gpu service starting before the pull-model service?
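
Concretely, the two things to check would be roughly the following (values taken from this thread; treat it as a sketch, not the repo's exact files):

# .env
OLLAMA_BASE_URL=http://llm-gpu:11434

# docker-compose.yml (sketch)
pull-model:
  depends_on:
    - llm-gpu   # orders container start-up only; it does not wait for Ollama to be ready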

matthieuml avatar Nov 27 '23 10:11 matthieuml

Is there any solution to this problem? I tried the suggestions from @DeanXu2357 and @matthieuml, but the problem persists.

zeushomeserver avatar Jan 18 '24 19:01 zeushomeserver

Oddest thing. This was up and running with just those few simple edits to the .env file when I initially set it up in late December. Yesterday, I tried to start it up and got this same error for the pull-model service. I hadn't changed any system or Docker settings or run any other models.

Ultimately, I got it running again by specifying the llm-gpu dependency in the pull-model service and the llm-gpu hostname in docker-compose.yml, and by restarting the Docker service (sudo systemctl restart docker):

pull-model:
  ...
  depends_on:
    - llm-gpu

and

llm-gpu:
  hostname: llm-gpu
  ...

in addition to @DeanXu2357's recommendation to define the extra_hosts. Without it, at least when the llm-gpu hostname is set, the stack would not come up and failed with a network error:

Attaching to genai-stack-api-1, genai-stack-bot-1, genai-stack-database-1, genai-stack-front-end-1, genai-stack-llm-gpu-1, genai-stack-loader-1, genai-stack-pdf_bot-1, genai-stack-pull-model-1
Error response from daemon: network fe6b55fd9b9e0b3795bb3f1c089a3799537be31d55407dc0c3094585fd7ff39c not found

In any case, I find Docker to be sensitive to container stops and starts. Sometimes it's necessary to run docker system prune to clean up issues like the stale cached network ID shown above.
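
Putting it all together, the relevant parts of my docker-compose.yml ended up looking roughly like this (a sketch only; images, commands, and GPU reservations are left as placeholders):

pull-model:
  # ... image, command, environment as in the original file ...
  depends_on:
    - llm-gpu          # start llm-gpu before attempting to pull the model
  extra_hosts:
    - "host.docker.internal:host-gateway"

llm-gpu:
  hostname: llm-gpu    # reachable at http://llm-gpu:11434 from the other services
  # ... image and GPU device reservations as in the original file ...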

usuallycwdillon avatar Jan 23 '24 17:01 usuallycwdillon

The dependency makes sense, but the default hostname should already be llm-gpu, since Compose resolves service names on the shared network. But if it works, it works!

matthieuml avatar Jan 28 '24 16:01 matthieuml