Old Man
I still get this error when attempting to run from Windows and WSL2. Also, does anyone know how to run "improve" from Docker?
> Does @k1lgor's comment resolve this, @oldmanjk @cor277? Can we close this, or do we need changes to the codebase?

How do I implement this when I'm running...
> @oldmanjk are you still experiencing this error? It's been a while since this issue was last active, and if it's not relevant anymore, I'd like to close it. ...
@ggerganov Should this be reopened?
The current logic is completely borked. On my 13900K (24-core, 32-thread), ollama defaults to using four cores. If I set it to use 24 cores, it uses 16. If I...
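In the meantime, you can override the default per request with the `num_thread` option. A minimal sketch, assuming an ollama server on localhost:11434 and a pulled model (the model name here is just a placeholder):

```python
# Minimal sketch: override ollama's default thread count per request.
# Assumes an ollama server on localhost:11434; "llama3" is a placeholder
# for whatever model you actually have pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",                  # placeholder model name
        "prompt": "Why is the sky blue?",
        "stream": False,
        "options": {"num_thread": 24},      # ask for 24 threads instead of the default
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Whether the server actually honors the full count is exactly what's in question here, so it's worth watching CPU utilization while the request runs.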
> I have ollama set up on a VM for testing, with 12 vCPUs (4-socket × 3-core topology) and 16 GB RAM (no GPU). I am not sure where to...
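To see what the runtime actually observes inside a VM like that, here's a quick check; a sketch, assuming Python with psutil installed in the guest:

```python
# Minimal sketch: report the CPU topology the guest exposes, since
# thread-count heuristics usually key off physical vs. logical cores.
# Assumes psutil is installed (pip install psutil).
import psutil

print("logical CPUs:  ", psutil.cpu_count(logical=True))
print("physical cores:", psutil.cpu_count(logical=False))
print("load average:  ", psutil.getloadavg())
```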
> Thanks, @oldmanjk! I hadn't used glances before, and it's super useful. Attaching screenshots from running basic questions ("sky blue", "tell a joke", "short story", etc.). Disk I/O doesn't stand...
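If you'd rather log the numbers than watch glances, the same disk-I/O view can be sampled directly. A minimal sketch, assuming psutil is installed; the interval and duration are arbitrary:

```python
# Minimal sketch of what glances shows for disk I/O: sample the kernel's
# cumulative counters and print the per-second delta while a prompt runs.
import time
import psutil

prev = psutil.disk_io_counters()
for _ in range(30):                      # watch for ~30 seconds
    time.sleep(1)
    cur = psutil.disk_io_counters()
    read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
    write_mb = (cur.write_bytes - prev.write_bytes) / 1e6
    print(f"read {read_mb:7.2f} MB/s  write {write_mb:7.2f} MB/s")
    prev = cur
```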
Not fixed for me. Before updating, ollama didn't use any (significant, at least) memory on startup. Now, the instance mapped to my 1080 Ti (11 GiB) is using 136 MiB...
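To reproduce that measurement, per-GPU memory use can be read at startup. A minimal sketch, assuming the nvidia-ml-py package (pip install nvidia-ml-py) and an NVIDIA driver:

```python
# Minimal sketch: print memory in use on each GPU, e.g. to confirm the
# ~136 MiB resident on the 1080 Ti right after the server starts.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex, nvmlDeviceGetName, nvmlDeviceGetMemoryInfo,
)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        mem = nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i} ({nvmlDeviceGetName(handle)}): "
              f"{mem.used / 2**20:.0f} MiB used / {mem.total / 2**20:.0f} MiB total")
finally:
    nvmlShutdown()
```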
Any progress on this? I'd like to try this project, but it's proving extremely difficult.