
Results 2 comments of huankumo

Thanks to @mouramax for the answer. I faced the same issue when calling the `gemma` model via the `lm_studio` provider with a pydantic model as the response format. Apparently there's no issue when...
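For context, a request of the kind described in that comment generally carries an OpenAI-compatible `response_format` payload built from the pydantic model's JSON schema. The sketch below shows only the shape of that payload (the model name, schema fields, and endpoint are hypothetical assumptions, not the poster's actual code, and no server is contacted):

```python
import json

# Hypothetical schema, mimicking what a simple pydantic model's
# .model_json_schema() would produce for a structured response type.
answer_schema = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["answer", "confidence"],
}

# OpenAI-compatible chat request body with a structured response format,
# of the sort an lm_studio-style provider would send to a local server.
request_body = {
    "model": "gemma-27b",  # hypothetical model identifier
    "messages": [{"role": "user", "content": "Is the sky blue?"}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "answer", "schema": answer_schema},
    },
}

print(json.dumps(request_body["response_format"], indent=2))
```

Whether the backend model honors the schema depends on the provider and model; the failure discussed in the thread would surface when the provider rejects or mishandles this `response_format` field.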

Hi. I have the same issue on an *Nvidia GeForce RTX 4090* with *CUDA driver version 12.3* and *ollama version 0.6.0*, and noticed that when invoking the gemma:27b model through ollama...