JTMarsh556
I'm watching for the answer here too. Ollama is very slow for me; I switched to llama and it is much faster. Something seems broken with Ollama and ingestion.
I updated the settings-ollama.yaml file to what you linked and verified my ollama version was 0.1.29, but I'm not seeing much of a speed improvement, and my GPU seems like...
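In case it helps anyone compare, here is a minimal sketch of the kind of settings-ollama.yaml I mean. I'm assuming this is PrivateGPT's Ollama profile, so treat the exact keys, the model names, and the api_base value as assumptions and check them against the file that ships with your version:

```yaml
# Minimal sketch of a PrivateGPT Ollama profile.
# Key names assumed; verify against the settings-ollama.yaml in your install.
server:
  env_name: ${APP_ENV:ollama}

llm:
  mode: ollama
  max_new_tokens: 512
  context_window: 3900

embedding:
  mode: ollama

ollama:
  llm_model: mistral                # any model already pulled with `ollama pull`
  embedding_model: nomic-embed-text
  api_base: http://localhost:11434  # default address of `ollama serve`
```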
I have to say I completely agree with the OP. Nearly everything is funneled into the "Open"AI tube, and I definitely feel that most other options are largely sidelined. As...
I get this same issue when I attempt to run locally. If I run it with OpenAI, something is being done in the background that allows me to move forward...
What do you mean by "without serve it works"? I am having the same issue; what exactly did you do? Thanks.