abc-nix
Hi. I think @JohannesGaessler has already started working on it. At least they opened pull request #1970 with a rough way to make it work.
Edit: Moved to #3366. I opened it as a separate issue, as I think the OP's issue is completely different (and may be related to icon paths or format) and they specified...
Thank you very much for all the work and time you are putting into bringing a well-functioning Falcon to the masses. Thank you.
Fantastic, @RobertMueller2. It works. I can only test on sway right now, but it fixes the multi-monitor issue (at least on my system). Many thanks.
If your VRAM cannot hold the full model, offload only as many layers as fit using the `--tensor-split` option. This way, you can decide how many layers go to each device....
Sorry. I wrote the above comment on another device and was writing from memory. The separator for `--tensor_split` is either a comma or a forward slash. On another note, you...
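To illustrate the separator note above, a minimal sketch of such an invocation might look like this (the model path and the 3:1 split ratio are hypothetical; adjust them for your setup):

```shell
# Offload all layers, sending roughly 3/4 of them to GPU 0 and 1/4 to GPU 1:
./llama-cli -m model.gguf -ngl 99 --tensor-split 3,1

# The forward slash also works as the separator:
./llama-cli -m model.gguf -ngl 99 --tensor-split 3/1
```

Lower the ratio for the device that is running out of memory until the model loads.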
Did you download a model? What command did you use to do so? Didn't the terminal output tell you where the model was downloaded?
Same issue with dual 3090 + CPU offload, latest llama.cpp (built from source), with GLM 4.5 on Linux (PP for long context is halved). Things tested:
- `-kvu` didn't work...
> I have two nodes: 192.168.13.12 and 192.168.13.44
>
> I run this on each of my two nodes (CPU-only):
>
> `./rpc-server -p 50052 -H 192.168.13.44`
> `./rpc-server -p 50052 -H...`
@hbuxiaofei Does it work without RPC? Try with only 2 layers first `-ngl 2`. The repeated message `Null buffer for tensor passed to init_tensor function` indicates a problem with how...
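A minimal debugging sequence along the lines suggested above could look like this (model path and RPC address are hypothetical; substitute your own):

```shell
# First confirm the model loads locally, without RPC, offloading only 2 layers:
./llama-cli -m model.gguf -ngl 2

# Only once the local run works, add the RPC backend back in:
./llama-cli -m model.gguf -ngl 2 --rpc 192.168.13.44:50052
```

If the local run also fails, the problem is with the model or the local build rather than the RPC setup.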