Anthony Wu
Ah I see you declared it as a `BSD` license type in `setup.py`. Sorry I overlooked that. I still suggest an explicit `LICENSE` file though.
There are variants of the BSD license, so you should be explicit about which of the 3 you'd like to use: http://choosealicense.com/licenses/ Alternatively, you can choose the very common MIT...
I think this was implemented in `chat(...)`: #213 https://github.com/ollama/ollama-python/pull/213/files#diff-0f246a9c084dd2ef9d4a58c02fb818ac4a114c34877e558cac88ab351daae9eeR183

`help(ollama.Client.chat)` on the latest client `main` branch :P

```
| chat(self, ..., tools: Optional[Sequence[ollama._types.Tool]] = None, ...
```
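For anyone landing here, a minimal sketch of calling it with `tools` (the model name and weather tool below are illustrative, not taken from the PR):

```
import ollama

# Illustrative tool schema; any function definition in this style works.
response = ollama.chat(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'What is the weather in Toronto?'}],
    tools=[{
        'type': 'function',
        'function': {
            'name': 'get_current_weather',
            'description': 'Get the current weather for a city',
            'parameters': {
                'type': 'object',
                'properties': {
                    'city': {'type': 'string', 'description': 'The city name'},
                },
                'required': ['city'],
            },
        },
    }],
)

# The model's requested calls (if any) come back on the message.
print(response['message'].get('tool_calls'))
```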
> I tried to set the environment variable

A common mistake is not having `export`ed the variable when you set it; I'd double check that.

-----

The model dir on...
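A quick way to verify is to check whether a child process actually inherits the variable; a minimal sketch:

```
import os

# If OLLAMA_MODELS was set without `export`, child processes (including
# the ollama server) won't inherit it, and this prints None.
print(os.environ.get('OLLAMA_MODELS'))
```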
@synacktraa I believe you would be interested in reviewing #238 or collaborating on your new tool-parse library. I think we're attacking the same problem from slightly different places but can...
I think this is due to using an older model without tool calling support. Updating to `llama3.1` should work. #237 fixes the doc and should be the resolution.
Unable to reproduce your exception. I think this problem might go away if you `ollama pull llama3:latest` and `git pull origin main` on this repo and maybe re-do `pip install...
> * For piping from stdin use `-` as in `mlx_whisper -`. That is what we do in MLX LM so it is more consistent.

Done. I agree self...
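For context, the convention is cheap to support; a sketch of the general pattern (not mlx_whisper's actual code, names are illustrative):

```
import sys
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('audio', help="path to an audio file, or '-' for stdin")
args = parser.parse_args()

# Common Unix convention (and what MLX LM does): '-' means read from stdin.
if args.audio == '-':
    data = sys.stdin.buffer.read()
else:
    with open(args.audio, 'rb') as f:
        data = f.read()
```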
Is there robust BuildKit cache mounting for PyPI caching going on between repeated image builds during rapid iteration? Or maybe the depot builds are going to different machines, so there's...
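To be concrete, I mean something like a pip cache mount (a hypothetical sketch, not this repo's actual Dockerfile):

```
# syntax=docker/dockerfile:1
FROM python:3.11-slim
COPY requirements.txt .
# Persist pip's download cache across builds on the same BuildKit builder;
# if builds land on different machines, this cache starts cold.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```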
I would totally be OK if the cache timed out after a reasonable period.