Rajakumar05032000
Have you guys pulled the latest code? The fix was already merged, approximately 12 hours ago. If yes, please check whether you have configured your APIs correctly for...
If the issue still exists, can someone send the full logs? I would also like to know which model you are using. (I have tested well with Ollama &...
> Both Ollama and Groq, this is the log from the UI.
>
> > Agent is not in completed state

As I can see from the logs, token_usage is 322 &...
> Apparently because the agent's state on the backend is null

Those who are facing issues, can you please create a new project in the devika UI and...
> keep sleeping
>
> > a new project and first prompt

Can you please send the full devika UI screenshot and the logs from the devika_agent.log file (which is present in...
I would like to ask: what advantages do we get from KoboldCpp over Ollama, or even llama-cpp-python? llama-cpp-python also supports multiple OSes as well as GGUF, and...
Ollama also has hardware acceleration for Nvidia GPUs and AMD GPUs as well. I agree that Ollama for Windows is in its early stages. Coming to `llama-cpp-python`, it supports CUDA, Metal...
Already created a PR which solves this issue: #190
Those who are facing issues, can you please create a new project in the devika UI, then add your query & check? Only the first prompt in a devika project,...