Daniel


If you `brew install llama.cpp` and then run the server locally (after downloading one of the models) with `llama-server -m ~/.codegpt/models/gguf/qwen2.5-coder-1.5b-instruct-q8_0.gguf --port 51150`, you can start using it. This doesn't really...
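
As a quick sanity check once the server is up, you can hit its OpenAI-compatible endpoint (recent llama.cpp builds expose one at `/v1/chat/completions`). A minimal sketch, assuming the port from the command above and a default server configuration:

```sh
# Query the local llama-server via its OpenAI-compatible API.
# The port (51150) matches the llama-server command above; the prompt is just an example.
curl http://localhost:51150/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Write a hello world in Python."}
        ]
      }'
```

If the server is running and the model loaded, this should return a JSON response with the generated completion.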