kuan
I run it with the model "llama-2-7b-chat.Q4_K_M.gguf" on the server, but it works fine on my M1 MacBook Pro (macOS Sonoma 14.4). No idea why it was terminated silently. Correction: The process...
After investigating llama.cpp, I found out why it core dumps! What should I do next? # ./main -ngl 32 -m /user/models/llama-2-7b-chat.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n...
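The command above got cut off; for context, a full invocation of the same shape might look like the sketch below. The `-n` value and the prompt are placeholders I've added, not the originals.

```bash
# Hypothetical reconstruction of the truncated command above.
# -ngl 32            : offload 32 layers to the GPU
# -c 4096            : context size
# -n 256             : max tokens to generate (placeholder value)
# -p "..."           : prompt (placeholder, not from the original)
./main -ngl 32 -m /user/models/llama-2-7b-chat.Q4_K_M.gguf --color \
  -c 4096 --temp 0.7 --repeat_penalty 1.1 -n 256 \
  -p "Hello, how are you?"
```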
@martindevans After I updated to the newest llama.cpp and recompiled both projects, then replaced the two files LLamaSharp.dll and libllama.so in my dotnet project under Debian 12, it works, so...
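For anyone hitting the same issue, here is a minimal sketch of the rebuild-and-swap steps described above. All paths, the target framework folder, and the project layout are assumptions you'd adjust to your own checkouts.

```bash
# 1. Rebuild llama.cpp as a shared library (produces libllama.so)
cd ~/src/llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=ON
cmake --build build --config Release

# 2. Rebuild LLamaSharp against the matching llama.cpp revision
#    (project path inside the repo is an assumption)
cd ~/src/LLamaSharp
dotnet build LLama/LLamaSharp.csproj -c Release

# 3. Drop both binaries into the .NET project's output directory
#    (output paths below are hypothetical examples)
cp ~/src/llama.cpp/build/libllama.so ~/myapp/bin/Debug/net6.0/
cp ~/src/LLamaSharp/LLama/bin/Release/net6.0/LLamaSharp.dll ~/myapp/bin/Debug/net6.0/
```

The key point is that the native libllama.so and the managed LLamaSharp.dll must come from matching revisions, otherwise the P/Invoke layer can crash or terminate silently.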