morganavr
+1, totally agree with @gibsonfan2332
`call python server.py --auto-devices --extensions api --wbits 4 --groupsize 128 --pre_layer 35 --gpu-memory 7 --model-menu` These parameters work for me, but the model generates only 1 token/second. I have an RTX 2080...
I've got the same error after `git clone` and running `>webui-user.bat`. auto1111 works fine on my PC (RTX 2080, i9-9900K, 32 GB RAM). The error: `RuntimeError: Torch is not able to use...`
I fixed the issue by updating the NVIDIA drivers.
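For anyone hitting the same error, a quick way to confirm the driver is the culprit is to check whether PyTorch itself can see the GPU, outside the webui. A minimal diagnostic sketch in plain PyTorch (only `torch` is assumed; nothing here is webui-specific):

```python
# If CUDA is not visible here, the webui will fail with the same RuntimeError.
import torch

print("torch:", torch.__version__)
print("built for CUDA:", torch.version.cuda)         # None -> a CPU-only wheel is installed
print("CUDA available:", torch.cuda.is_available())  # False -> driver too old or mismatched
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```

If `is_available()` returns `False` on a machine with an NVIDIA card, updating the driver (as above) or reinstalling a CUDA-enabled torch wheel are the usual fixes.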
> What is the use of getting intermediate images? Like, is it to optimize ddim_steps or see how the image evolves over time? Yes, for the two reasons you described...
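To make that concrete, here is a minimal sketch of saving intermediate images with a per-step callback. It uses the `diffusers` library rather than the webui script this thread is about, and assumes a `StableDiffusionPipeline` version that still accepts the `callback`/`callback_steps` arguments; the model id, prompt, and filenames are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # placeholder model id
).to("cuda")

def save_intermediate(step: int, timestep: int, latents: torch.Tensor) -> None:
    # Decode the current latents to pixel space and save a snapshot, so you can
    # both watch the image evolve and judge how many steps you actually need.
    image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample
    image = (image / 2 + 0.5).clamp(0, 1)
    pipe.numpy_to_pil(image.cpu().permute(0, 2, 3, 1).float().numpy())[0].save(
        f"step_{step:03d}.png"
    )

pipe("a castle at sunset", num_inference_steps=30,
     callback=save_intermediate, callback_steps=5)  # snapshot every 5 steps
```

Decoding at every step costs an extra VAE pass each time, so a stride like `callback_steps=5` keeps the overhead small while still showing the trajectory.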
> There is an extension for VSCode: [Code GPT](https://codegpt.co/) which has this ability in part... you have to highlight a section of code for it to edit and/or refactor...
This project can help thousands of 3D artists produce content faster. Amazing work!
Below is the difference between the two commits mentioned in the first two sentences; the seed and all other settings were the same.
> I had this happen to me when I tried to use my LoRA on a Q8 GGUF Flux. I fixed it by changing `Diffusion in low bits` to `Automatic (fp16...`
> Someone please remind me if I forget this. @lllyasviel Reminder.