mirek190
Still waiting for this to be merged ....
> @EvilBT can you write that in English please?
>
> @mirek190 - can you provide full console log, since it would be difficult to help you if we only...
Template for llama.cpp:

main.exe --model models/new3/Phi-3-mini-4k-instruct-fp16.gguf --color --threads 30 --keep -1 --n-predict -1 --repeat-penalty 1.1 --ctx-size 0 --interactive -ins -ngl 99 --simple-io --in-prefix "\n" --in-suffix "\n" -p "You are a...
Tested with llama.cpp, both the fp16 and Q8 versions. Do you also have the problem of it generating tokens until you manually stop it? I had to add -r "----" -r "---" -r...
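For reference, a minimal sketch of the template command above with those reverse-prompt flags added (only the two stop strings quoted above are shown, since the rest of the list and the -p system prompt were truncated in the original comments; adjust paths and thread count to your setup):

````
:: same flags as the template above, plus reverse prompts (-r) so generation stops at the stop strings instead of running until interrupted (CMD syntax)
main.exe --model models/new3/Phi-3-mini-4k-instruct-fp16.gguf --color --threads 30 --keep -1 --n-predict -1 --repeat-penalty 1.1 --ctx-size 0 --interactive -ins -ngl 99 --simple-io --in-prefix "\n" --in-suffix "\n" -r "----" -r "---"
````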
Not too bad ... not at llama 8b level, but still good for phi-3

````
A father and son are in a car accident where the father is killed. The...
````
Any update on 128K? :)
but it is not llama.cpp ;)
For coding, that model is better than anything available offline so far. It is at GPT-3.5 level.
Windows. Tested with both CMD and PowerShell.
Of course I use the history, why wouldn't I? Don't you use the history of your inputs via the up/down arrows? Earlier, the up/down arrows worked perfectly....