gabilanbrc
Hi team, seems that I'm having a similar issue. After struggling for some time, I found on Google that the right address to use in Windows for accessing the host...
> I managed to use the official ollama image (ollama/ollama) and not litellm/ollama. Also (if you still haven't), try adding
>
> ```
> extra_hosts:
>   - "host.docker.internal:host-gateway"
> ```

...
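For context, a minimal sketch of where that `extra_hosts` entry sits in a compose file; the service name and image here are hypothetical placeholders, not taken from this thread:

```
# Minimal docker-compose.yml sketch (service name and image are assumptions;
# use whichever container needs to reach Ollama on the host).
services:
  api_server:
    image: your-backend-image:latest
    extra_hosts:
      # Maps host.docker.internal to the host's gateway IP so the container
      # can reach Ollama listening on the Windows host at port 11434.
      - "host.docker.internal:host-gateway"
```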
> yeah there is: https://docs.danswer.dev/gen_ai_configs/ollama

Thanks again! Unfortunately I was unable to make it work using the Ollama Windows installer or Docker; I get the same error. Not sure how to move forward...
This seems to be related to #1458.
I'm having the same issue: the endpoint (http://host.docker.internal:11434/) does not reach Ollama on Windows 11 Home. In Danswer you will see a "404 page not found" error message. Instead...
As an additional test, I did a curl test querying the llama2 model and it worked:

```
C:\>curl -X POST -H "Content-type: application/json" --data "{\"model\": \"llama2\", \"prompt\": \"3*127?\"}" http://docker.for.win.localhost:11434/api/generate
{"model":"llama2","created_at":"2024-05-20T20:28:20.8647932Z","response":"\n","done":false}
...
```
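Side note: the output above is Ollama's streaming format (one JSON object per chunk). If you'd rather get a single JSON response for a quick test, the generate endpoint accepts `"stream": false`; a sketch using the same host alias as above:

```
C:\>curl -X POST -H "Content-type: application/json" --data "{\"model\": \"llama2\", \"prompt\": \"3*127?\", \"stream\": false}" http://docker.for.win.localhost:11434/api/generate
```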
It seems that a configuration change was needed in Docker to make it work. After that, using http://host.docker.internal:11434, I was able to connect.
In my case, clicking that option was enough.
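For anyone running the container by hand instead of through Compose or a Docker Desktop setting, the equivalent of that `extra_hosts` entry can be passed on the command line (a sketch; `<your-image>` is a placeholder):

```
# --add-host with the special value host-gateway (Docker 20.10+) maps
# host.docker.internal to the host, same as the compose extra_hosts entry.
docker run --add-host=host.docker.internal:host-gateway <your-image>
```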
Hi! I'm facing a similar issue with an RTX 3050 on Windows, with similar output from `ollama ps` and `nvidia-smi.exe`. When I use `curl -X POST -H "Content-type: application/json" --data "{\"model\":...
> @Menghuan1918 you can try forcing more layers via `num_gpu` and rely on Windows VRAM paging support to see if that yields better or worse performance than the CPU offload...
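For anyone trying the `num_gpu` suggestion above: it can be passed per-request via the `options` field of Ollama's generate API. A sketch (the value 33 is an arbitrary example, not a recommendation, and the localhost URL assumes you're calling Ollama directly from the host):

```
curl -X POST -H "Content-type: application/json" --data "{\"model\": \"llama2\", \"prompt\": \"3*127?\", \"options\": {\"num_gpu\": 33}}" http://localhost:11434/api/generate
```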