Yaruze66
There's also a post on [Reddit](https://www.reddit.com/r/StableDiffusion/comments/1ato4mw/attention_webuiforge_users_adetailer_is_currently/?utm_source=share&utm_medium=web2x&context=3) about this issue.
@comfyanonymous I don't know, I didn't notice an increase in memory load 🤔
Alright, got it, thanks! So, just to confirm, I don’t need to adjust any settings in Special K or disable anything in the FFXVIFix config—there’s no conflict between them, right?...
Will precompiled xFormers be available for PyTorch 2.4.1?
I was unable to successfully convert and use these [VAEs](https://civitai.com/models/152040/xlvaec). Initially, when I add them, they are detected as SD 1.x, and I manually change them to SDXL. However, I...
I conducted additional tests using SDXL models in Invoke (versions 5.6.0 and 5.6.1rc1), as Flux models failed to generate any output. Below are my findings compared to ComfyUI: - At...
@ciriguaya Actually, I think the developers are not interested in optimizing and popularizing InvokeAI. There are a couple of similar issues here, and after months, there’s still no response from...
I can also confirm this issue. SDXL generations consume more VRAM and overall memory than in ComfyUI, and Invoke is even slightly slower. Moreover, attempts to upscale or run img2img at higher...