
Error while getting response

Open Jimit-MTPL opened this issue 7 months ago • 5 comments

I am facing this error in the chat UI response: Error: 404 - {"error":"model "qwen3:8b" not found, try pulling it first"}

Although I am using a different model, why is it trying to use this one?

[Image: screenshot of the error]

Jimit-MTPL avatar Jul 16 '25 13:07 Jimit-MTPL

@Jimit-MTPL I am going to try to reproduce this. What OS are you on?

PromtEngineer avatar Jul 17 '25 04:07 PromtEngineer

I'm on Windows.

Jimit-MTPL avatar Jul 17 '25 06:07 Jimit-MTPL

I have changed the default model in the code to my preferred one. Normal question answering now works, but when the RAG pipeline is triggered I get no response, even after waiting a long time.

Update: I have replaced the Ollama setup as the LLM throughout the whole codebase with OpenAI, and the problem is still the same. I didn't want to add OpenAI, but I did it for testing purposes.

Jimit-MTPL avatar Jul 17 '25 06:07 Jimit-MTPL

@Jimit-MTPL I am assuming you don't have qwen3:8b on Ollama? If that's the case, can you download it and then try switching to another model? I think this issue might be coming from some of the defaults. I will debug it on my end.
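The suggested check can be done from the command line. A defensive sketch, assuming the Ollama CLI is on PATH and using the model tag taken from the 404 error message:

```shell
MODEL="qwen3:8b"  # model tag taken from the 404 error message
if command -v ollama >/dev/null 2>&1; then
  # Pull the model only if it is not already installed locally
  ollama list | grep -q "$MODEL" || ollama pull "$MODEL"
  echo "checked $MODEL"
else
  echo "ollama CLI not found on PATH"
fi
```

Once the default model is present, switching to another installed model should no longer trip the 404 fallback.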

PromtEngineer avatar Jul 18 '25 05:07 PromtEngineer

@PromtEngineer Yes, I don't have qwen3:8b on Ollama. But I changed the default model to another Ollama model, and this time I didn't get any error; however, only the normal LLM chat works, and I still get no response from the RAG pipeline. So again I replaced Ollama with OpenAI across the whole codebase just for testing, but the issue is the same.

Jimit-MTPL avatar Jul 18 '25 09:07 Jimit-MTPL