Results: 3 comments of tennaito
I was wondering where that option was. In my memory, it had always existed. But in the beginning I was using only the OpenAISettings with Ollama (that's why). Nowadays we can...
The code below worked well with **Qwen3**, and I set the temperature to 0 (zero) to minimize parsing problems with the LLM-generated JSON: [Ollama Qwen3:8b 5.2GB Model](https://www.ollama.com/library/qwen3:8b) _Executed this code...
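The idea in the comment above can be sketched as an Ollama chat request that pins `temperature` to 0 and asks for JSON-constrained output. This is a minimal sketch, not the commenter's actual code: the prompt and the use of the `/api/chat` endpoint with `format: "json"` are illustrative assumptions.

```python
import json

# Sketch of an Ollama /api/chat request body. Setting temperature to 0
# makes decoding greedy/deterministic, which reduces the chance of the
# model emitting malformed JSON. Model name and prompt are illustrative.
payload = {
    "model": "qwen3:8b",
    "messages": [
        {"role": "user", "content": "List three primary colors as a JSON array."}
    ],
    "format": "json",               # ask Ollama to constrain output to valid JSON
    "stream": False,
    "options": {"temperature": 0},  # temperature 0: minimize parsing surprises
}

body = json.dumps(payload)
```

The serialized `body` would then be POSTed to a running Ollama server, e.g. `requests.post("http://localhost:11434/api/chat", data=body)` (assuming the default local port).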