Matthieu Mérigot-Lombard
Is the service `llm` running? Or are you using the `llm-gpu` service?
You can get more information about what went wrong with `docker logs pull-model`. What does it show?
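If you want a quick sketch of how to check both things (which service is up, and why the pull failed), something like this should do, assuming the default service and container names from the stack's compose file:

```bash
# List the compose services and their current state (llm vs llm-gpu).
docker compose ps

# Show the output of the model pull; the container name is assumed to be `pull-model`.
docker logs pull-model
```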
> I have some issues building the docker container in win11+docker, please help:
> ## 32.15 ldconfig: File /lib/x86_64-linux-gnu/libperl.so.5.36.0 is empty, not checked.
> 32.15 ldconfig: File /lib/x86_64-linux-gnu/libapr-1.so.0.7.2 is empty,...
Doesn't seem to be the same issue as #101; you have a 404 error in the repository section instead of a dpkg error. Are you sure that your repositories are...
> The docs told me to add that URL to the .env file. However, I certainly don't have a server running there.

If the container `genai-stack-llm-gpu-1` is running, then you...
After looking around a bit, it seems that nvidia-container-toolkit needs docker-ce installed as root to work (which isn't the case with Docker Desktop?). The obvious way to resolve this issue...
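As a quick sanity check of GPU pass-through from Docker Desktop, the usual smoke test looks roughly like this (a sketch; the CUDA image tag is only an example):

```bash
# If the nvidia runtime is wired up correctly, this prints the nvidia-smi table
# from inside a throwaway container; otherwise it fails with a runtime error.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```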
You mean the download restarts from zero at each startup? It shouldn't do that. Are you using `docker compose down` instead of `docker compose stop` by any chance? That would...
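Roughly, the difference looks like this (a sketch; it assumes the model data lives in the containers or their volumes rather than in a bind mount you manage yourself):

```bash
# stop/start keeps the containers and anything already downloaded into them:
docker compose stop
docker compose start

# down removes the containers (and with -v also the named volumes),
# so the next `up` has to pull the model from scratch:
docker compose down -v
```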
Are you using `OLLAMA_BASE_URL=http://llm-gpu:11434` in your `.env` file? If yes, is your `llm-gpu` service starting before the `pull-model` service?
The dependency makes sense, but the default hostname should already be `llm-gpu`. But if it works, it works!
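For reference, a dependency like that can live in a small override file rather than in the main compose file (a minimal sketch, assuming the service names `pull-model` and `llm-gpu`):

```bash
# Write an override so pull-model only starts once llm-gpu has been started;
# compose merges docker-compose.override.yml automatically on the next `up`
# (together with whatever profile flags you already pass).
cat > docker-compose.override.yml <<'EOF'
services:
  pull-model:
    depends_on:
      - llm-gpu
EOF
```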
That is probably an issue with the Python version; some packages like pyarrow don't yet have a prebuilt wheel for Python 3.12 (then you need to build it, and it...
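A quick way to see whether the missing wheel is really the problem (a sketch; the official python images here are only stand-ins for whatever base image the build uses):

```bash
# On 3.12 pip may fall back to building pyarrow from source (slow, needs a toolchain)...
docker run --rm python:3.12-slim pip install pyarrow
# ...while on 3.11 a prebuilt wheel is downloaded straight away.
docker run --rm python:3.11-slim pip install pyarrow
```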