ComfyUI + Flux + LoRA = noise after only a few renders
Expected Behavior
When adding a LoRA to a basic Flux workflow, we should be able to render more than one good image. When no LoRA is selected in the LoRA loader, or there is no LoRA loader at all, everything works fine.
Actual Behavior
When adding a LoRA to a basic Flux workflow, only the first render is good; subsequent renders degrade into noise.
Steps to Reproduce
Use a basic Flux workflow, add a LoRA in a "LoraLoaderModelOnly" node, and generate a few images.
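To reproduce without clicking through the UI, the same workflow can be queued several times against ComfyUI's HTTP API (the `POST /prompt` endpoint on the default address from the log, `127.0.0.1:8188`). The workflow file name below is a placeholder — export your own graph with "Save (API Format)" first; the helper only wraps it in the body shape the server expects.

```python
import json
import os
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default ComfyUI address, as shown in the log

def build_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow graph in the JSON body POST /prompt expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict) -> None:
    """Queue one generation of the given workflow."""
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())

if __name__ == "__main__" and os.path.exists("flux_lora_workflow_api.json"):
    # Placeholder file name: a workflow exported with "Save (API Format)".
    with open("flux_lora_workflow_api.json") as f:
        workflow = json.load(f)
    # Queue the identical generation several times; with a LoRA loaded,
    # renders after the first come out as noise.
    for _ in range(5):
        queue_prompt(workflow)
```

With a fixed seed in the workflow, the queued images should be identical, which makes the degradation easy to spot.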
Debug Logs
## ComfyUI-Manager: installing dependencies done.
[2024-08-22 14:06] ** ComfyUI startup time: 2024-08-22 14:06:55.718765
[2024-08-22 14:06] ** Platform: Windows
[2024-08-22 14:06] ** Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
[2024-08-22 14:06] ** Python executable: F:\AIgenerator\ComfyUI_windows_portable\python_embeded\python.exe
[2024-08-22 14:06] ** ComfyUI Path: F:\AIgenerator\ComfyUI_windows_portable\ComfyUI
[2024-08-22 14:06] ** Log path: F:\AIgenerator\ComfyUI_windows_portable\comfyui.log
[2024-08-22 14:06]
Prestartup times for custom nodes:
[2024-08-22 14:06] 0.0 seconds: F:\AIgenerator\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
[2024-08-22 14:06] 1.6 seconds: F:\AIgenerator\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-08-22 14:06]
Total VRAM 24564 MB, total RAM 65277 MB
[2024-08-22 14:06] pytorch version: 2.3.1+cu121
[2024-08-22 14:06] Set vram state to: NORMAL_VRAM
[2024-08-22 14:06] Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
[2024-08-22 14:07] Using pytorch cross attention
[2024-08-22 14:07] [Prompt Server] web root: F:\AIgenerator\ComfyUI_windows_portable\ComfyUI\web
[2024-08-22 14:07] Adding extra search path checkpoints F:\Comfyui\ComfyUI\models\checkpoints\
[2024-08-22 14:07] Adding extra search path vae F:\Comfyui\ComfyUI\models\vae\
[2024-08-22 14:07] Adding extra search path loras F:\Comfyui\ComfyUI\models\loras\
[2024-08-22 14:07] Adding extra search path clip F:\Comfyui\ComfyUI\models\clip\
[2024-08-22 14:07] Adding extra search path unet F:\Comfyui\ComfyUI\models\unet\
[2024-08-22 14:07] ### Loading: ComfyUI-Manager (V2.50.1)
[2024-08-22 14:07] ### ComfyUI Revision: 2597 [dafbe321] | Released on '2024-08-21'
[2024-08-22 14:07]
[2024-08-22 14:07] [rgthree] Loaded 42 epic nodes.
[2024-08-22 14:07] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[2024-08-22 14:07]
[2024-08-22 14:07] [rgthree] NOTE: Will NOT use rgthree's optimized recursive execution as ComfyUI has changed.
[2024-08-22 14:07]
[2024-08-22 14:07]
Import times for custom nodes:
[2024-08-22 14:07] 0.0 seconds: F:\AIgenerator\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
[2024-08-22 14:07] 0.1 seconds: F:\AIgenerator\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
[2024-08-22 14:07] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[2024-08-22 14:07] 0.2 seconds: F:\AIgenerator\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-08-22 14:07]
[2024-08-22 14:07] Starting server
[2024-08-22 14:07] To see the GUI go to: http://127.0.0.1:8188
[2024-08-22 14:07] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[2024-08-22 14:07] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[2024-08-22 14:07] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[2024-08-22 14:07] FETCH DATA from: F:\AIgenerator\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
[2024-08-22 14:08] got prompt
[2024-08-22 14:08] Using pytorch attention in VAE
[2024-08-22 14:08] Using pytorch attention in VAE
[2024-08-22 14:08] model weight dtype torch.bfloat16, manual cast: None
[2024-08-22 14:08] model_type FLUX
[2024-08-22 14:09] Requested to load FluxClipModel_
[2024-08-22 14:09] Loading 1 new model
[2024-08-22 14:09] loaded completely 0.0 9319.23095703125 True
[2024-08-22 14:09] clip missing: ['text_projection.weight']
[2024-08-22 14:09] F:\AIgenerator\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
[2024-08-22 14:09] Requested to load Flux
[2024-08-22 14:09] Loading 1 new model
[2024-08-22 14:10] loaded partially 21484.371 21475.816528320312 14
[2024-08-22 14:10]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.22it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.21it/s]
[2024-08-22 14:10] Requested to load AutoencodingEngine
[2024-08-22 14:10] Loading 1 new model
[2024-08-22 14:10] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:10] Prompt executed in 90.63 seconds
[2024-08-22 14:10] got prompt
[2024-08-22 14:10] Requested to load FluxClipModel_
[2024-08-22 14:10] Loading 1 new model
[2024-08-22 14:10] loaded completely 0.0 9319.23095703125 True
[2024-08-22 14:10] loaded partially 21468.371 21439.804809570312 0
[2024-08-22 14:11]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.23it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.22it/s]
[2024-08-22 14:11] Requested to load AutoencodingEngine
[2024-08-22 14:11] Loading 1 new model
[2024-08-22 14:11] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:11] Prompt executed in 26.75 seconds
[2024-08-22 14:11] got prompt
[2024-08-22 14:11] loaded partially 21145.2694375 21133.705200195312 3
[2024-08-22 14:11]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.22it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.22it/s]
[2024-08-22 14:11] Requested to load AutoencodingEngine
[2024-08-22 14:11] Loading 1 new model
[2024-08-22 14:11] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:11] Prompt executed in 21.93 seconds
[2024-08-22 14:12] got prompt
[2024-08-22 14:12] Requested to load FluxClipModel_
[2024-08-22 14:12] Loading 1 new model
[2024-08-22 14:12] loaded completely 0.0 9319.23095703125 True
[2024-08-22 14:12] loaded partially 21162.271390625 21133.705200195312 0
[2024-08-22 14:12]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.22it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.21it/s]
[2024-08-22 14:12] Requested to load AutoencodingEngine
[2024-08-22 14:12] Loading 1 new model
[2024-08-22 14:12] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:12] Prompt executed in 27.33 seconds
[2024-08-22 14:12] got prompt
[2024-08-22 14:12] loaded partially 21012.31240625 20989.681762695312 2
[2024-08-22 14:12]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.20it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.21it/s]
[2024-08-22 14:12] Requested to load AutoencodingEngine
[2024-08-22 14:12] Loading 1 new model
[2024-08-22 14:12] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:13] Prompt executed in 21.99 seconds
[2024-08-22 14:13] got prompt
[2024-08-22 14:13] Requested to load Flux
[2024-08-22 14:13] Loading 1 new model
[2024-08-22 14:13] loaded partially 21468.371 21439.804809570312 14
[2024-08-22 14:13]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.22it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.22it/s]
[2024-08-22 14:13] Requested to load AutoencodingEngine
[2024-08-22 14:13] Loading 1 new model
[2024-08-22 14:13] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:13] Prompt executed in 25.42 seconds
[2024-08-22 14:13] got prompt
[2024-08-22 14:13] loaded partially 21168.83584375 21133.705200195312 3
[2024-08-22 14:14]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.22it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.21it/s]
[2024-08-22 14:14] Requested to load AutoencodingEngine
[2024-08-22 14:14] Loading 1 new model
[2024-08-22 14:14] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:14] Prompt executed in 21.86 seconds
[2024-08-22 14:14] got prompt
[2024-08-22 14:14] loaded partially 20852.622953125 20827.652465820312 3
[2024-08-22 14:14]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.21it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.21it/s]
[2024-08-22 14:14] Requested to load AutoencodingEngine
[2024-08-22 14:14] Loading 1 new model
[2024-08-22 14:14] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:14] Prompt executed in 21.97 seconds
[2024-08-22 14:14] got prompt
[2024-08-22 14:14] loaded partially 20565.50771875 20557.588012695312 3
[2024-08-22 14:15]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.19it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.20it/s]
[2024-08-22 14:15] Requested to load AutoencodingEngine
[2024-08-22 14:15] Loading 1 new model
[2024-08-22 14:15] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:15] Prompt executed in 22.09 seconds
[2024-08-22 14:15] got prompt
[2024-08-22 14:15] Requested to load Flux
[2024-08-22 14:15] Loading 1 new model
[2024-08-22 14:15] loaded partially 21468.371 21439.804809570312 14
[2024-08-22 14:15]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.22it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.22it/s]
[2024-08-22 14:15] Requested to load AutoencodingEngine
[2024-08-22 14:15] Loading 1 new model
[2024-08-22 14:15] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:15] Prompt executed in 24.85 seconds
[2024-08-22 14:15] got prompt
[2024-08-22 14:15] loaded partially 21199.33584375 21169.740356445312 3
[2024-08-22 14:16]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.15it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.20it/s]
[2024-08-22 14:16] Requested to load AutoencodingEngine
[2024-08-22 14:16] Loading 1 new model
[2024-08-22 14:16] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:16] Prompt executed in 22.00 seconds
[2024-08-22 14:16] got prompt
[2024-08-22 14:16] loaded partially 20933.271390625 20899.675903320312 3
[2024-08-22 14:16]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.21it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.21it/s]
[2024-08-22 14:16] Requested to load AutoencodingEngine
[2024-08-22 14:16] Loading 1 new model
[2024-08-22 14:16] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:16] Prompt executed in 21.84 seconds
[2024-08-22 14:16] got prompt
[2024-08-22 14:16] loaded partially 20635.59365625 20629.611450195312 3
[2024-08-22 14:17]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.21it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00, 1.20it/s]
[2024-08-22 14:17] Requested to load AutoencodingEngine
[2024-08-22 14:17] Loading 1 new model
[2024-08-22 14:17] loaded completely 0.0 159.87335777282715 True
[2024-08-22 14:17] Prompt executed in 22.01 seconds
Other
Here is an example using the Gandalf LoRA from Civitai:
I'm experiencing this as well (RTX 4090, up-to-date ComfyUI), and more Reddit users are discussing it here.
Known workarounds (to help mitigate and debug the issue):
- Changing the LoRA weight between every generation (e.g. from 1.0 to 0.99 and back)
- If using ComfyUI-Manager, opening it and clicking "Unload models" between every generation
- Closing and reopening ComfyUI between every generation
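The "Unload models" workaround can be scripted instead of clicked: recent ComfyUI builds expose a `POST /free` endpoint that the Manager button uses. The exact flag names below (`unload_models`, `free_memory`) are my reading of that endpoint and worth verifying against your ComfyUI version.

```python
import json
import urllib.request

# Body for POST /free; flag names assumed from recent ComfyUI builds.
FREE_BODY = {"unload_models": True, "free_memory": True}

def unload_models(base_url: str = "http://127.0.0.1:8188") -> None:
    """Ask ComfyUI to unload all loaded models, like the Manager's button."""
    req = urllib.request.Request(
        f"{base_url}/free",
        data=json.dumps(FREE_BODY).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).close()
```

Calling `unload_models()` between generations forces a full reload of the LoRA-patched weights, which is slow but keeps the renders clean.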
Thanks!
Personally, I reverted ComfyUI to an older commit to make it work with LoRAs, and I have had no issues since then.
Might be because of https://github.com/comfyanonymous/ComfyUI/commit/538cb068bc10c8eec3fc2884f0b79c71a3c0b75a which is fixed in https://github.com/comfyanonymous/ComfyUI/commit/c7ee4b37a1e1d91496bba34c246485c3c2c7393a.
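For context, one plausible mechanism consistent with the "loaded partially" lines on every run in the log above: if LoRA deltas are added to and later subtracted from low-precision weights in place whenever part of the model is swapped out, the round trip leaves rounding error behind. The NumPy toy below is not ComfyUI code (float16 stands in for bfloat16); it only demonstrates that patch/unpatch round trips in half precision do not restore the original weights exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a half-precision "base weight" and a LoRA delta.
w = rng.standard_normal((256, 256)).astype(np.float16)
delta = (0.5 * rng.standard_normal((256, 256))).astype(np.float16)
original = w.copy()

# Apply the patch, then revert it in place, once per "render".
for _ in range(50):
    w = (w + delta).astype(np.float16)   # patch weights for sampling
    w = (w - delta).astype(np.float16)   # unpatch before offloading

# Nonzero drift: the round trips did not restore the original weights.
drift = float(np.mean(np.abs(w.astype(np.float32) - original.astype(np.float32))))
print(f"mean |drift| after 50 patch/unpatch cycles: {drift:.6f}")
```

Whether this is the actual mechanism behind the linked commits is speculation on my part, but it would explain why nudging the LoRA weight (forcing a fresh patch from clean weights) works around the problem.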