Error while getting response
I am getting this error in the chat UI: Error: 404 - {"error":"model "qwen3:8b" not found, try pulling it first"}
I am using a different model, so why is it trying to use this one?
@Jimit-MTPL I am going to try to reproduce this. What OS are you on?
I'm on Windows.
I have changed the default model in the code to the one I prefer. Normal question answering works now, but when the RAG pipeline gets triggered I get no response at all, even after waiting a long time.
Update: I replaced the Ollama LLM setup with OpenAI across the whole codebase, and the problem is still the same. I didn't want to add OpenAI, but I did it for testing purposes.
@Jimit-MTPL I am assuming you don't have qwen3:8b on ollama? If that's the case, can you download it and then try switching to another model? I think this issue might be coming from some of the defaults. Will debug it on my end.
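In the meantime, a quick way to confirm which models ollama actually has is to check its `/api/tags` endpoint (the endpoint and its `{"models": [{"name": ...}]}` response shape are from Ollama's REST API; the `has_model` helper and the sample payload below are just an illustrative sketch, not code from this repo):

```python
import json

def has_model(tags_json: str, name: str) -> bool:
    """Return True if `name` appears in an Ollama /api/tags response body."""
    models = json.loads(tags_json).get("models", [])
    return any(m.get("name") == name for m in models)

# Hypothetical payload, shaped like a real /api/tags response.
# In practice you would fetch http://localhost:11434/api/tags (Ollama's default port).
sample = '{"models": [{"name": "llama3:8b"}]}'
print(has_model(sample, "qwen3:8b"))  # → False, which would explain the 404
```

Guarding the request with a check like this (or just running `ollama list` / `ollama pull qwen3:8b` first) would turn the 404 into a clear "model not installed" message instead of a failed chat response.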
@PromtEngineer Yes, I don't have qwen3:8b on ollama. I changed the default model to another ollama model, and this time I didn't get any error, but only the normal LLM chat works; the RAG pipeline still gives no response. I also replaced Ollama with OpenAI across the whole codebase just for testing, but the issue is the same.