mlsterpr0
It just updated to a new version. Looked like llama updated too, but it still doesn't work. Strange... Koboldcpp also won't load the model.
> > Comment out all the `@torch.inference_mode()` decorators (add `#` before them) in:
> >
> > - `\ldm_patched\modules\utils.py` - line 407
> > - `\modules_forge\forge_loader.py` - line 236, line 242
> >
> > Change `with torch.inference_mode():` to `with torch.no_grad():`...
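The second change above can be sketched as a diff (line numbers and surrounding context are assumptions and may differ in your Forge version). The practical difference: `torch.no_grad()` still disables gradient tracking, but unlike `torch.inference_mode()` it does not mark created tensors as inference tensors, which some backends such as DirectML choke on.

```diff
--- a/modules_forge/forge_loader.py
+++ b/modules_forge/forge_loader.py
@@ hypothetical context around line 236 @@
-    with torch.inference_mode():
+    with torch.no_grad():
         model = load_model(path)
```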
After so much time I still don't see a PROPER tutorial on how to use Forge with AMD in DirectML mode. It almost works for me, after I did some...
> On cmd: go to the forge folder, call `venv\Scripts\activate.bat`

"The system cannot find the path specified." Why does there have to be a venv folder?
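For reference, the steps being attempted look like this in a Windows cmd session (the install path is a hypothetical placeholder; the `venv` folder only exists after Forge's first-run setup has created it, which is likely why the path is not found):

```bat
:: Hypothetical install location - substitute your own Forge folder
cd C:\stable-diffusion-webui-forge
:: Activate the virtual environment that webui.bat creates on first run
call venv\Scripts\activate.bat
```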
> It works with `--chat-template chatglm4` on the CLI.

Still endless repetition.
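For context, that flag is passed to the llama.cpp CLI; a minimal invocation might look like the following (the model filename is a hypothetical placeholder):

```shell
# Force the ChatGLM-4 chat template instead of the one baked into the GGUF
./llama-cli -m glm-4-chat.gguf --chat-template chatglm4 -p "Hello"
```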
> Also, for those who are interested, [chatllm.cpp](https://github.com/foldl/chatllm.cpp) supports this.

Yeah, cool, but how do you run it if the name of the model is nowhere to be found? python chatllm.py...