kbals

3 comments by kbals

I have tried to debug this, and I found that the LLM output from OpenAI is significantly different from the output from Ollama (Llama 3.2 and Mistral). The latter don't...

I tried `gemma2:9b` and it is working! Thanks for your input.

FYI, `llama3.1:8b` also works fine. So updating the documentation to list the two working Ollama models would probably be a great start for newcomers.
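For newcomers, a minimal sketch of trying the two models reported working above, assuming a standard local Ollama install (the model tags follow Ollama's usual `name:size` convention):

```shell
# Pull the two models reported to work in this thread,
# then run a quick interactive check with one of them.
ollama pull gemma2:9b
ollama pull llama3.1:8b
ollama run gemma2:9b "Reply with a short JSON object: {\"ok\": true}"
```

This is just a starting point for verifying the models are available locally; whatever configuration the project itself uses to select an Ollama model still needs to be set separately.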