Old Man
I think we really need some official clarification on this subject. Things are a mess over at huggingface, and therefore in the community at large. Can someone from Meta *please* do...
@jspisak please see my previous comment. Also, sorry if you're not the right person to tag; I wasn't sure who else to ask. *Please* prioritize this.
Seems strange this wasn't included in the first place...
It seems only certain IQ quants are supported? Could we get the rest supported, or could a list of the supported ones be posted prominently in the main README? Kind...
> This should be resolved by #3218

Not fixed for me. Before updating, ollama didn't use any (significant, at least) memory on startup. Now, the instance mapped to my 1080...
Why wasn't this tested before release?
This really needs to get fixed. Currently, Ollama basically (please correct me if I'm wrong):

- runs at start without the user's knowledge or permission
- is using GPU resources even...
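For anyone who wants to stop the background service in the meantime, a sketch for Linux installs, assuming the systemd unit that Ollama's official Linux install script sets up (macOS and Windows use the tray app instead, so this does not apply there):

```shell
# Check whether the ollama service is currently running and enabled at boot
systemctl status ollama

# Stop it for this session, and keep it from starting again at boot
sudo systemctl stop ollama
sudo systemctl disable ollama
```

After this, `ollama serve` can still be run manually in a terminal when you actually want it, so nothing sits on the GPU in the background.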
> I've already been thinking about this. It'll probably be in some kind of `verbose` output.

IMHO, it should be the default, not verbose.
Can confirm: 0.1.38 seems to want more video memory.
Figured it out. Ollama seems to think the model is too big to fit in VRAM (it isn't - it worked fine before the update). There is a lack of...
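To illustrate the kind of decision that seems to be going wrong here, a minimal sketch of a fits-in-VRAM check. Everything below is hypothetical: the function name, the flat `overhead` factor, and the sizes are made up for illustration; the real estimator in Ollama/llama.cpp works per-layer and accounts for KV cache, context length, and CUDA overhead, which is exactly where a regression could creep in between versions.

```python
def fits_in_vram(model_bytes: int, free_vram_bytes: int, overhead: float = 1.2) -> bool:
    """Rough check: does the model, plus a working-memory margin, fit in free VRAM?

    `overhead` is a hypothetical fudge factor standing in for KV cache and
    runtime overhead. If an update inflates this estimate, a model that used
    to load fully onto the GPU suddenly gets split or pushed to CPU.
    """
    return model_bytes * overhead <= free_vram_bytes


# A 7B model at ~4-bit quantization is roughly 4 GiB on disk.
model = 4 * 1024**3
print(fits_in_vram(model, free_vram_bytes=8 * 1024**3))  # 8 GiB card -> True
print(fits_in_vram(model, free_vram_bytes=4 * 1024**3))  # 4 GiB card -> False
```

The point of the sketch: the same model and the same card can flip from "fits" to "doesn't fit" purely because the estimate changed, which matches the behavior described above.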