Brijesh K R
I'm getting this error after upgrading the Viro community package along with the react-native and expo packages.
@pawelwiejkut Could you provide us a few more details, such as:

1. The model quantization used for gpt-oss:20b?
2. Are you using the local model or the cloud model...
> 1. `ollama show gpt-oss:20b` output:
>
>    ```
>    Model
>      architecture        gptoss
>      parameters          20.9B
>      context length      131072
>      embedding length    2880
>      quantization        MXFP4
>    ```
>
> 2. local model
>
> ...
> Hey [@pawelwiejkut](https://github.com/pawelwiejkut), [@Avtrkrb](https://github.com/Avtrkrb) - v1.18.0 brings _a lot_ of changes. This includes upgrading to AI SDK v6, a fix for "ask_followup_question" and many other things. I don't suppose you...
> Hey, it looks like I do not have this particular issue anymore, thanks!

@pawelwiejkut Thanks for confirming.
@JimStenstrom @will-lamerton An additional feature to consider: automatically compacting the context as it approaches the limit, without the user having to explicitly run the `/compact` command. We can...
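The trigger for such an auto-compact could look something like this; a minimal sketch with illustrative names (`shouldAutoCompact`), not nanocoder's actual implementation — only the idea of a usage-percentage trigger comes from this thread:

```typescript
// Minimal sketch of an auto-compact trigger (illustrative, not nanocoder's API):
// compare current token usage against the model's context window and fire
// once usage reaches a threshold percentage of the window.
function shouldAutoCompact(
  usedTokens: number,
  contextWindow: number,
  thresholdPct: number = 80, // assumed default, per the spec discussion below
): boolean {
  return (usedTokens / contextWindow) * 100 >= thresholdPct;
}
```

For example, with the 131072-token window reported for gpt-oss:20b, ~110k tokens used (~84%) would cross an 80% default, while 50k (~38%) would not.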
> `threshold` (number, default `80`): Context usage percentage (1-95) that triggers auto-compact

The updated spec looks awesome, @will-lamerton! Just a suggestion: I think the starting range for default context usage...
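The quoted `threshold` option could be validated along these lines; the helper name (`resolveThreshold`) is hypothetical — only the 1-95 range and the default of 80 come from the quoted spec:

```typescript
const MIN_THRESHOLD = 1;      // lower bound from the quoted spec
const MAX_THRESHOLD = 95;     // upper bound from the quoted spec
const DEFAULT_THRESHOLD = 80; // default from the quoted spec

// Hypothetical helper: resolve the auto-compact threshold, rejecting
// out-of-range values rather than silently clamping them.
function resolveThreshold(threshold?: number): number {
  const t = threshold ?? DEFAULT_THRESHOLD;
  if (!Number.isFinite(t) || t < MIN_THRESHOLD || t > MAX_THRESHOLD) {
    throw new RangeError(
      `threshold must be between ${MIN_THRESHOLD} and ${MAX_THRESHOLD}, got ${t}`,
    );
  }
  return t;
}
```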
@will-lamerton Is anyone working on this, or can I pick it up next? I also have a suggestion: could the architecture be modified so that sessions are stored in...
Ollama-specific findings while testing [PR](https://github.com/Nano-Collective/nanocoder/pull/99): when using Ollama as the provider, even after setting `OLLAMA_NUM_CTX` to 256k for a model that supports that context length, running `ollama ps` still...
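One alternative to the environment variable is overriding the context length per request via `options.num_ctx`, which Ollama's REST API does accept; a sketch, where the helper name and the 262144 (256k) value are illustrative:

```typescript
// Sketch: build an Ollama /api/generate request body with a per-request
// context-length override via options.num_ctx (a documented Ollama option).
// The helper name and the 262144 (256k) value are illustrative.
function buildOllamaRequest(model: string, prompt: string, numCtx: number) {
  return {
    model,
    prompt,
    stream: false,
    options: { num_ctx: numCtx }, // overrides the server's default context length
  };
}

// POST the result as JSON to http://localhost:11434/api/generate;
// `ollama ps` may then report the larger loaded context for the model.
```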