Evan Morgan

6 comments by Evan Morgan

I'm getting the same issue, so it would be good to know if you found a solution.

Yes, this would be really useful!

I'm also having this issue. Everything works fine locally, but on the remote server the agent is unable to create or edit files - it tries to work around this...

I managed to fix this by creating a `.vscode` directory in the workspace folder on the remote server, then adding the following to a `settings.json` file in that folder: ```...

> > ```shell
> > make libllama.so
> > ```
>
> it gives me error "LLAMA_ASSERT: llama.cpp:1800: !!kv_self.ctx", how to solve it? the command is `python -m llama_cpp.server --model...`
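
For anyone hitting the same assertion: it means the KV cache context is null, i.e. it never got initialized, so one thing worth checking is that the server is actually loading the `libllama.so` that was just built. A rough sketch of one way to do that (the paths and the CUDA flag are illustrative, not from the original comment, and `LLAMA_CPP_LIB` support depends on the llama-cpp-python version):

```shell
# Rebuild the shared library in the llama.cpp checkout
# (LLAMA_CUBLAS=1 enables the cuBLAS backend on older Makefiles; omit it for a CPU-only build)
make clean && LLAMA_CUBLAS=1 make libllama.so

# llama-cpp-python can be pointed at an explicit library via LLAMA_CPP_LIB
export LLAMA_CPP_LIB=/path/to/llama.cpp/libllama.so   # placeholder path

# Then start the server against a local model file (placeholder path)
python -m llama_cpp.server --model /path/to/ggml-model.bin
```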

Thanks @glaudiston. The llama.cpp lib works absolutely fine with my GPU, so it's odd that the Python binding is failing.
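
If the C++ library runs fine on the GPU but the binding doesn't, one thing worth trying is forcing llama-cpp-python to rebuild from source with the same GPU backend enabled. A minimal sketch, assuming a CUDA/cuBLAS setup (the exact CMake flag depends on the llama-cpp-python release; newer versions use `-DGGML_CUDA=on` instead):

```shell
# Force a from-source rebuild of the binding with the cuBLAS backend enabled
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 \
  pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python
```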