MikeNatC
Results: 2 comments
Yes, agreed. I like to switch between various models and would like Local AI to automatically unload previous models to free up VRAM.
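For anyone wanting a workaround in the meantime: if the backend is Ollama (an assumption on my part; the app may be pointed at a different server), a model can be unloaded manually by sending a generate request with `keep_alive` set to 0, which is Ollama's documented way to evict a model from memory. A minimal sketch in Python, assuming the default endpoint at `localhost:11434` and a placeholder model name:

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # assumption: default local Ollama endpoint
MODEL = "llama3"  # hypothetical model name; substitute whichever model is loaded

# A generate request with no prompt and keep_alive=0 tells Ollama to
# unload the model from VRAM as soon as the request completes.
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": MODEL, "keep_alive": 0},
    timeout=30,
)
resp.raise_for_status()
print(f"Requested unload of {MODEL}: HTTP {resp.status_code}")
```

Running something like this before loading the next model frees the VRAM the previous one was holding; the feature request here is basically for the app to do that step automatically.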
I am having this issue as well with my iPad. My Ollama instance is hosted remotely on a home server but is accessible on my iPad via Tailscale. Interestingly, I...