drazdra
### What is the issue? The prompt_eval_count parameter is absent on some calls; on other calls it returns wrong information. 1. I tried /api/chat with "stablelm2", no system prompt, prompt="hi". In...
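A minimal sketch of what a client hits: the final (done) /api/chat response is documented to carry a `prompt_eval_count` field, but per this report it is sometimes missing. The payload below is illustrative, not a captured response; the numbers are made up.

```python
import json

# Illustrative final /api/chat response body (field names per the Ollama
# API docs; the values here are made up). When the bug occurs,
# "prompt_eval_count" is simply absent from this object.
raw = json.dumps({
    "model": "stablelm2",
    "done": True,
    "eval_count": 12,
    "prompt_eval_count": 9,
})

resp = json.loads(raw)
# Guard against the missing field instead of raising KeyError:
prompt_tokens = resp.get("prompt_eval_count")
if prompt_tokens is None:
    print("prompt_eval_count absent from this response")
else:
    print(f"prompt tokens evaluated: {prompt_tokens}")
```

The `.get()` guard is only a client-side workaround; it does not explain why the field disappears or returns wrong counts.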
Upon every restart of finetune I see: "train data seems to have changed. restarting shuffled epoch." I looked up where it happens, added a debugging line, and it turned out that...
When stream=true, ollama doesn't return the tool request in the final "done" message; instead it returns it piece by piece, as if it were a regular reply. At that,...
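A sketch of the reported behavior: the chunk shapes follow the Ollama /api/chat streaming format, but the payloads are simulated, not captured from a real server. The tool request arrives as ordinary `content` fragments, so the client must reassemble and parse the text itself to discover it was actually a tool call.

```python
import json

# Simulated stream=true chunks: the tool request is streamed as plain
# content fragments instead of appearing as a structured tool_calls
# field on the final "done" message.
chunks = [
    json.dumps({"message": {"role": "assistant", "content": part}, "done": False})
    for part in ('{"name": "get_w', 'eather", "arguments": {"city": "Oslo"}}')
]
chunks.append(json.dumps({"message": {"role": "assistant", "content": ""}, "done": True}))

# The client has to glue the fragments back together and try to parse
# them to find out it was a tool call all along:
text = "".join(json.loads(c)["message"]["content"] for c in chunks)
tool_request = json.loads(text)
print(tool_request["name"])  # get_weather
```

The function name and arguments above are hypothetical; the point is only that the structured `tool_calls` field is bypassed when streaming.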
Prompt evaluation can take a huge amount of time, especially with a long context; it can literally be hours. Right now Ollama just hangs in that phase. Is it possible to have some messages...
I've stumbled upon dynatemp and have a question/proposal. I believe the thing that was missed during the dynatemp implementation is the underlying concept of what it's needed for. Prompts may...
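For context, a minimal sketch of the entropy-based scheme usually described for dynatemp: the normalized Shannon entropy of the token distribution is mapped onto a temperature range, so confident (peaked) distributions sample cold and uncertain (flat) ones sample hot. Parameter names here are illustrative, not llama.cpp's actual sampler options.

```python
import math

def dynatemp(probs, temp_min=0.5, temp_max=1.5, exponent=1.0):
    """Map normalized entropy of the token distribution onto
    [temp_min, temp_max]. A sketch under assumed parameter names,
    not the exact llama.cpp implementation."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    max_entropy = math.log(len(probs))  # entropy of a uniform distribution
    norm = (entropy / max_entropy) if max_entropy > 0 else 0.0
    return temp_min + (temp_max - temp_min) * (norm ** exponent)

peaked = [0.97, 0.01, 0.01, 0.01]  # model is confident -> near temp_min
flat = [0.25, 0.25, 0.25, 0.25]    # model is unsure    -> temp_max
print(dynatemp(peaked), dynatemp(flat))
```

A uniform distribution hits `temp_max` exactly, while a sharply peaked one stays close to `temp_min`; the `exponent` just bends the curve between those extremes.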
### Name and Version version: 4529 (12c2bdf2) built with MSVC 19.29.30157.0 for ### Operating systems Windows ### Which llama.cpp modules do you know to be affected? llama-server ### Command line...