MANOJ MANIVANNAN
Something similar on my side: I have an RTX 4090, and running Ollama in Docker does not recognize my NVIDIA GPU.

```bash
~/A/ollama $ docker run -d -e CUDA_VISIBLE_DEVICES=0 --gpus=all -v ollama:/root/.ollama...
```
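(As a sanity check, something like the following can help isolate whether the problem is in the Docker/NVIDIA setup rather than Ollama itself — this assumes the NVIDIA Container Toolkit is installed, and the CUDA image tag is just an illustrative example:)

```bash
# If this fails, the container runtime cannot see the GPU at all,
# so the issue lies with the Docker/NVIDIA setup, not Ollama.
# The CUDA image tag is an example; any CUDA base image works.
docker run --rm --gpus=all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```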
I can confirm this as well; using the latest ollama image solves the issue for me:

```bash
~ $ docker run -d --gpus=all -v ollama:/root/.ollama -p 11435:11434 --name ollama ollama/ollama...
```
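(For reference, a minimal sketch of the full workflow, assuming the NVIDIA Container Toolkit is set up; the port mapping and container name below are just examples:)

```bash
# Pull the latest image explicitly so a stale local copy is not reused.
docker pull ollama/ollama:latest

# Run with GPU access; -v persists downloaded models across restarts.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:latest

# Verify the GPU is visible inside the container.
docker exec -it ollama nvidia-smi
```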
Facing the same issue.
Facing the same issue.
Thanks @keyvank for the quick response. So how can I run the model in inference mode?