commit4ever
Use the `keep_alive` param - see https://github.com/ollama/ollama/blob/ecc133d843c8567b27ff3bdc9ff811ecad99281a/docs/faq.md?plain=1#L189
I set a fixed interval in seconds and that has worked well.
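For reference, a minimal sketch of passing `keep_alive` per request against the default local endpoint (the URL, model name, and 600-second value are assumptions for illustration, not taken from the linked FAQ):

```python
import requests

# Ask Ollama to keep the model loaded for 600 seconds after this request.
# keep_alive also accepts duration strings like "10m", a negative number
# to keep the model loaded indefinitely, or 0 to unload it right away.
resp = requests.post(
    "http://localhost:11434/api/generate",   # default Ollama endpoint (assumed)
    json={
        "model": "llama3",                   # hypothetical model name
        "prompt": "Say hello",
        "stream": False,
        "keep_alive": 600,                   # seconds
    },
    timeout=120,
)
print(resp.json()["response"])
```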
You can use multiple ports with something like this: `OLLAMA_HOST=10.1.0.1:114XX OLLAMA_MODELS={PATH} OLLAMA_DEBUG=1 ollama serve`
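As a quick sanity check, a client only needs to point at whatever host:port you passed in `OLLAMA_HOST`; a sketch (the address below is a placeholder, not the real port):

```python
import requests

# Placeholder: substitute the host:port you passed to OLLAMA_HOST above.
base_url = "http://10.1.0.1:11434"

# /api/tags lists the models the instance can serve, so it doubles
# as a cheap liveness check for each port you bring up.
tags = requests.get(f"{base_url}/api/tags", timeout=10).json()
print([m["name"] for m in tags.get("models", [])])
```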
Could you elaborate on what is wrong, so others could maybe patch it until the PR is done?
Hi @n4ze3m, if this enhancement is being undertaken then can I suggest that we add a context menu to select the text and populate the message box with...
Hi @n4ze3m, yes, that way would work.
@n4ze3m no, Ollama doesn't have any auth built in. The expectation is that it sits within a localhost env behind a proxy server and app/backend, and that auth...
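To illustrate that pattern, here is a minimal sketch of a reverse proxy that adds bearer-token auth in front of a local Ollama instance (FastAPI/httpx; the token, env var, and URLs are all assumptions, not anything Ollama itself provides, and a real deployment would also need to handle streaming responses):

```python
import os

import httpx
from fastapi import FastAPI, HTTPException, Request
from fastapi.responses import Response

app = FastAPI()

OLLAMA_URL = "http://localhost:11434"          # upstream Ollama (assumed default)
API_TOKEN = os.environ.get("PROXY_TOKEN", "")  # hypothetical shared secret

@app.post("/api/{path:path}")
async def proxy(path: str, request: Request):
    # Reject requests that don't carry the expected bearer token.
    if request.headers.get("Authorization") != f"Bearer {API_TOKEN}":
        raise HTTPException(status_code=401, detail="unauthorized")

    # Forward the body to the local Ollama instance and relay its reply.
    async with httpx.AsyncClient(timeout=120) as client:
        upstream = await client.post(
            f"{OLLAMA_URL}/api/{path}",
            content=await request.body(),
            headers={"Content-Type": "application/json"},
        )
    return Response(content=upstream.content, media_type="application/json")
```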
Hi @mcharytoniuk - thanks for this interesting project! We use a combination of the llama.cpp server and Ollama - both running in Docker - and have implemented our own Python-based...
@mcharytoniuk hi - sorry for the late reply. Yes, supporting the OpenAI API style would work. Btw, I came across this issue today https://github.com/ollama/ollama/issues/6492 - might be relevant as you...
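For context, Ollama exposes an OpenAI-compatible endpoint under `/v1`, so a client written against the OpenAI SDK can simply be pointed at it; a minimal sketch (the local URL and model name are assumptions):

```python
from openai import OpenAI

# Point the OpenAI client at Ollama's OpenAI-compatible endpoint.
# The api_key is required by the SDK but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="llama3",  # hypothetical local model name
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)
```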
Thanks for this awesome extension! I would add as a requirement the case wherein ollama/llama.cpp sit behind a proxy, so is it possible that "stream" be a toggle/option in both config...
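To show what such a toggle would control: both the Ollama native API and the llama.cpp server accept a `stream` flag in the request body. A sketch of a non-streaming request that a buffering proxy can pass through cleanly (URL and model name are assumptions):

```python
import requests

# With "stream": False, Ollama returns one complete JSON object instead of
# newline-delimited chunks, which is easier for buffering proxies to handle.
resp = requests.post(
    "http://localhost:11434/api/chat",       # assumed local endpoint
    json={
        "model": "llama3",                   # hypothetical model name
        "messages": [{"role": "user", "content": "Say hello"}],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```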