davizca
Thanks a lot! It will be very handy to have dynamic weighting for LoRAs in Forge!
Same here. Reminds me of IP-Adapter on LoRAs, or image remix.
Superb idea.
It sounded easier when I thought of it, but I didn't account for the 23.5 GB of VRAM you need for the LLM alone when region prompting in the areas... will be...
Hi @RuoyiDu, thanks for the answer. I set view batch size to 16, and for 2048x2048 the phase 2 decoding is taking around +12 mins (and it's increasing...
Hi @RuoyiDu No, I'm using a desktop PC with Windows and an RTX 3090. nvidia-smi reports 190 W on average, and Board Power Draw shows the same. It peaks at 23.6 GB of VRAM when inferencing a...
Hi. Thanks everyone for checking into this. Currently I'm not at home, but on Monday I will try the fix. It's weird, the difference in inferencing times between @RuoyiDu and...
@RuoyiDu With multidecoder = TRUE (normal settings, 2048x2048):

### Phase 1 Denoising ###
100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [00:39
> it needs a lot of memory to quantise

So is this an error that can be solved by installing more RAM in the computer, or is it VRAM-related?
I'm having the same problem on WSL:

RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so...
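Since the error says it may be reported asynchronously at a later API call, the usual first debugging step is to force synchronous kernel launches so the traceback points at the call that actually failed. A minimal sketch (the variable must be set before PyTorch initializes CUDA, so before the first CUDA operation in your script):

```python
import os

# CUDA_LAUNCH_BLOCKING=1 makes every kernel launch synchronous, so the
# RuntimeError is raised at the failing call instead of at some later,
# unrelated API call. Set it before any torch CUDA work happens.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# import torch  # then import and run the failing pipeline as usual
print(os.environ["CUDA_LAUNCH_BLOCKING"])
```

Alternatively set it in the shell before launching (`CUDA_LAUNCH_BLOCKING=1 python script.py`). Note this slows inference down, so it is for debugging only; on WSL the timeout itself can also come from the Windows display driver's TDR watchdog killing long-running kernels.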