
Z-Image not loading LoRAs, or LoRAs only partially working

Open serget2 opened this issue 2 months ago • 2 comments

Custom Node Testing

* [x] I have tried disabling custom nodes and the issue persists (see [how to disable custom nodes](https://docs.comfy.org/troubleshooting/custom-node-issues#step-1%3A-test-with-all-custom-nodes-disabled) if you need help)

Expected Behavior

That it loads LoRAs like any normal workflow.

Actual Behavior

Images are generated, but the LoRA's effect is not visible; the console prints errors like:

got prompt
lora key not loaded: diffusion_model.layers.0.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.0.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.0.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.0.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.1.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.1.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.1.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.1.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.1.attention.to_q.lora_A.weight
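The `lora_A`/`lora_B` suffixes in these keys are the PEFT/diffusers naming convention, which is one clue about why the loader may be failing to match them. As a diagnostic, the tensor names inside a `.safetensors` LoRA file can be listed with the standard library alone, since the format is just an 8-byte little-endian header length followed by a JSON header keyed by tensor name. This sketch builds a tiny in-memory file (with a made-up key taken from the log above) so it runs without any model files:

```python
import io
import json
import struct

def list_safetensors_keys(fp) -> list[str]:
    """Read tensor names from a .safetensors stream without ML libraries.

    The format begins with an 8-byte little-endian header length, followed
    by a JSON header whose keys are the tensor names (plus "__metadata__").
    """
    header_len = struct.unpack("<Q", fp.read(8))[0]
    header = json.loads(fp.read(header_len))
    return sorted(k for k in header if k != "__metadata__")

# Build a minimal in-memory file mimicking a PEFT-style LoRA header.
header = json.dumps({
    "diffusion_model.layers.0.attention.to_q.lora_A.weight":
        {"dtype": "F16", "shape": [16, 64], "data_offsets": [0, 2048]},
    "__metadata__": {"format": "pt"},
}).encode()
blob = struct.pack("<Q", len(header)) + header

keys = list_safetensors_keys(io.BytesIO(blob))
print(keys)  # ['diffusion_model.layers.0.attention.to_q.lora_A.weight']
```

Running this against the real LoRA file (open it with `open(path, "rb")`) shows exactly which naming scheme the file uses, which can then be compared with the key names the loader reports as unmatched.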

Steps to Reproduce

I tried every LoRA loader I have and they all do the same thing. Very rarely it just works, but about 95% of the time it gives that error. Setup: Z-Image fp16, the base workflow from you guys, with just a LoRA loader added.
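Since every key in the file fails to map, a naming-convention mismatch is more likely than a corrupt file. One quick experiment (not a confirmed fix, since whether the result applies to Z-Image depends on the model's internal layer names) is renaming the PEFT-style `lora_A`/`lora_B` suffixes to the `lora_down`/`lora_up` convention and retrying; if the keys still fail, the module paths themselves are the mismatch:

```python
# Hypothetical experiment: rename PEFT-style suffixes to lora_down/lora_up.
# The key names below are copied from the console output; the function
# leaves everything except the suffix untouched.
def remap_peft_suffixes(state_dict: dict) -> dict:
    remapped = {}
    for key, tensor in state_dict.items():
        new_key = (key
                   .replace(".lora_A.weight", ".lora_down.weight")
                   .replace(".lora_B.weight", ".lora_up.weight"))
        remapped[new_key] = tensor
    return remapped

demo = {
    "diffusion_model.layers.0.attention.to_k.lora_A.weight": "A",
    "diffusion_model.layers.0.attention.to_k.lora_B.weight": "B",
}
print(sorted(remap_peft_suffixes(demo)))
```

In practice one would load the real tensors with `safetensors.torch.load_file`, remap, and save with `save_file`; the string logic is shown standalone here so it is easy to verify.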

Debug Logs

got prompt
Failed to validate prompt for output 62:
* Lora Loader (LoraManager) 54:
  - Required input is missing: model
Output will be ignored
Failed to validate prompt for output 68:
Output will be ignored
Failed to validate prompt for output 61:
Output will be ignored
Failed to validate prompt for output 67:
Output will be ignored
[12/06/25 17:39:56] INFO     PromptTask 93e67376c3d24e6c9871170ff5186220
                             Input: from the information given to you write a detailed and elaborate prompt. Take every
                             bit of information like:
                             - colors used for scene, clothing, objects
                             - Postiion and camera angles
                             - full body, half body, close-up, portrait
                             - frontview,  backview, sideview, fisheye lens, birdview

                             (remove logos and watermarks:1.5), keep the prompt crisp and clean devoid of your comments

                             A full-body shot of a man standing in a dimly lit room, back to the viewer, with a
                             low-angle perspective emphasizing his posture and the space around him. His skin is pale,
                             with a faint sheen under cool, blue-tinged lighting, casting sharp shadows along his
                             collarbones and shoulders. His hair is short and tousled, a mix of dark brown and light
                             chestnut, with a slight gradient fading to silver at the edges. His eyes are narrow,
                             almond-shaped, with dark irises and thick, straight eyebrows. A faint smudge of shadow
                             lingers beneath his left eye, and his lips are parted slightly, revealing a small gap
                             between his teeth. His torso is lean, with defined muscles visible through a tight,
                             sleeveless black shirt that clings to his chest and arms, the fabric slightly wrinkled at
                             the collar. His hands are clasped behind his back, fingers slightly curled, while his right
                             foot is raised slightly, toes pointing outward. The background is a cluttered, industrial
                             space with metallic walls, exposed pipes, and a large window casting a faint glow from the
                             right. A single light source illuminates his face from the upper left, creating a strong
                             highlight on his forehead and a deep shadow across his right cheek. The atmosphere is heavy
                             with a soft haze, with faint reflections on the metallic surfaces suggesting a glossy, wet
                             texture.
HTTP Request: POST http://127.0.0.1:11434/api/chat "HTTP/1.1 200 OK"
[12/06/25 17:40:04] INFO     PromptTask 93e67376c3d24e6c9871170ff5186220
                             Output: A full-body backview of a man in a dimly lit industrial space, low-angle
                             perspective emphasizing posture and surroundings. Pale skin with cool, blue-tinged lighting
                             casts sharp shadows along collarbones and shoulders. Short, tousled hair blends dark brown,
                             chestnut, and silver gradients. Narrow, almond-shaped eyes with dark irises and thick,
                             straight eyebrows; faint shadow under left eye, parted lips revealing a small gap between
                             teeth. Lean torso in a tight, wrinkled black sleeveless shirt clinging to chest and arms.
                             Hands clasped behind back, fingers curled; right foot slightly raised, toes pointing
                             outward. Background features metallic walls, exposed pipes, and a large window casting
                             faint right-side glow. Upper-left light source creates strong forehead highlight and deep
                             shadow across right cheek. Atmosphere: soft haze with glossy, wet reflections on metallic
                             surfaces. No watermarks or logos.
Prompt executed in 7.82 seconds
Prompt executed in 0.03 seconds
Prompt executed in 0.03 seconds
Prompt executed in 0.03 seconds
got prompt
got prompt
lora key not loaded: diffusion_model.layers.0.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.0.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.0.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.0.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.1.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.1.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.1.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.1.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.1.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.1.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.1.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.1.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.10.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.10.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.10.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.10.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.10.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.10.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.10.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.10.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.11.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.11.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.11.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.11.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.11.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.11.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.11.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.11.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.12.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.12.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.12.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.12.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.12.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.12.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.12.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.12.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.13.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.13.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.13.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.13.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.13.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.13.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.13.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.13.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.14.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.14.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.14.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.14.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.14.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.14.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.14.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.14.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.15.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.15.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.15.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.15.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.15.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.15.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.15.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.15.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.16.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.16.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.16.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.16.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.16.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.16.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.16.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.16.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.17.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.17.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.17.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.17.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.17.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.17.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.17.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.17.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.18.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.18.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.18.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.18.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.18.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.18.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.18.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.18.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.19.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.19.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.19.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.19.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.19.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.19.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.19.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.19.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.2.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.2.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.2.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.2.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.2.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.2.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.2.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.2.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.20.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.20.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.20.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.20.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.20.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.20.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.20.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.20.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.21.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.21.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.21.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.21.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.21.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.21.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.21.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.21.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.22.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.22.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.22.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.22.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.22.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.22.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.22.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.22.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.23.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.23.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.23.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.23.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.23.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.23.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.23.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.23.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.24.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.24.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.24.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.24.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.24.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.24.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.24.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.24.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.25.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.25.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.25.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.25.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.25.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.25.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.25.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.25.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.26.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.26.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.26.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.26.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.26.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.26.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.26.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.26.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.27.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.27.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.27.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.27.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.27.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.27.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.27.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.27.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.28.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.28.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.28.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.28.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.28.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.28.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.28.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.28.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.29.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.29.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.29.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.29.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.29.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.29.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.29.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.29.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.3.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.3.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.3.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.3.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.3.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.3.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.3.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.3.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.4.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.4.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.4.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.4.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.4.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.4.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.4.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.4.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.5.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.5.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.5.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.5.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.5.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.5.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.5.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.5.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.6.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.6.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.6.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.6.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.6.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.6.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.6.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.6.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.7.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.7.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.7.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.7.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.7.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.7.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.7.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.7.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.8.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.8.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.8.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.8.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.8.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.8.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.8.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.8.attention.to_v.lora_B.weight
lora key not loaded: diffusion_model.layers.9.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.9.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.9.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.9.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.9.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.9.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.9.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.9.attention.to_v.lora_B.weight
Requested to load ZImageTEModel_
got prompt
got prompt
loaded completely; 11445.72 MB usable, 7672.25 MB loaded, full load: True
Requested to load Lumina2
loaded partially: 5301.88 MB loaded, lowvram patches: 0
loaded completely; 7904.54 MB usable, 6568.30 MB loaded, full load: True
100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [03:52<00:00, 33.20s/it]
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:03<00:00,  1.33it/s]
Requested to load AutoencodingEngine
loaded partially: 0.00 MB loaded, lowvram patches: 0
loaded completely; 939.68 MB usable, 159.87 MB loaded, full load: True
Prompt executed in 254.74 seconds
100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00,  1.27it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:03<00:00,  1.31it/s]
Prompt executed in 10.71 seconds
100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00,  1.23it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:03<00:00,  1.31it/s]
Prompt executed in 10.90 seconds
100%|████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00,  1.17it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:04<00:00,  1.15it/s]
Prompt executed in 11.95 seconds
Creating LoraMetadata for BRZRKR_Style_r2.safetensors
Updating lora cache for D:/ComfyUI_windows_portable/ComfyUI/models/loras/ill/BRZRKR_Style_r2.safetensors

Other

No response

serget2 · Dec 06 '25 20:12

Have you downloaded the proper model for it?

Vijay2359 · Dec 10 '25 09:12

Pruebas de nodos personalizados

* [x] He intentado deshabilitar los nodos personalizados y el problema persiste (consulte [ cómo deshabilitar los nodos personalizados ](https://docs.comfy.org/troubleshooting/custom-node-issues#step-1%3A-test-with-all-custom-nodes-disabled) si necesita ayuda)

Comportamiento esperado

que carga loras como cualquier flujo de trabajo normal

Comportamiento real

se generan imagenes pero no se ve el efecto, la consola da errores como

Recibí un aviso Clave de Lora no cargada: diffusion_model.layers.0.attention.to_k.lora_A.weight Clave de Lora no cargada: diffusion_model.layers.0.attention.to_k.lora_B.weight Clave de Lora no cargada: diffusion_model.layers.0.attention.to_out.0.lora_A.weight Clave de Lora no cargada: diffusion_model.layers.0.attention.to_out.0.lora_B.weight Clave de Lora no cargada: diffusion_model.layers.0.attention.to_q.lora_A.weight Clave de Lora no cargada: diffusion_model.layers.0.attention.to_q.lora_B.weight Clave de Lora no cargada: diffusion_model.layers.0.attention.to_v.lora_A.weight Clave de Lora no cargada: diffusion_model.layers.0.attention.to_v.lora_B.weight Clave de Lora no cargada: diffusion_model.layers.1.attention.to_k.lora_A.weight Clave de Lora no cargada: diffusion_model.layers.1.attention.to_k.lora_B.weight Clave de Lora no cargada: diffusion_model.layers.1.attention.to_out.0.lora_A.weight Clave de Lora no cargada: diffusion_model.layers.1.attention.to_out.0.lora_B.weight Clave de Lora no cargada: diffusion_model.layers.1.attention.to_q.lora_A.weight

Pasos para reproducir

tried every lora loader I have they all do the same thing but very rarely it just works but otherwise 95% of the time it gives that error, z-image fp 16 the base workflow from you guys just added a lora loader.

Debug Logs

got prompt Failed to validate prompt for output 62:

  • Lora Loader (LoraManager) 54:
    • Required input is missing: model Output will be ignored Failed to validate prompt for output 68: Output will be ignored Failed to validate prompt for output 61: Output will be ignored Failed to validate prompt for output 67: Output will be ignored [12/06/25 17:39:56] INFO PromptTask 93e67376c3d24e6c9871170ff5186220 Input: from the information given to you write a detailed and elaborate prompt. Take every bit of information like: - colors used for scene, clothing, objects - Postiion and camera angles - full body, half body, close-up, portrait - frontview, backview, sideview, fisheye lens, birdview

                           (remove logos and watermarks:1.5), keep the prompt crisp and clean devoid of your comments
      
                           A full-body shot of a man standing in a dimly lit room, back to the viewer, with a
                           low-angle perspective emphasizing his posture and the space around him. His skin is pale,
                           with a faint sheen under cool, blue-tinged lighting, casting sharp shadows along his
                           collarbones and shoulders. His hair is short and tousled, a mix of dark brown and light
                           chestnut, with a slight gradient fading to silver at the edges. His eyes are narrow,
                           almond-shaped, with dark irises and thick, straight eyebrows. A faint smudge of shadow
                           lingers beneath his left eye, and his lips are parted slightly, revealing a small gap
                           between his teeth. His torso is lean, with defined muscles visible through a tight,
                           sleeveless black shirt that clings to his chest and arms, the fabric slightly wrinkled at
                           the collar. His hands are clasped behind his back, fingers slightly curled, while his right
                           foot is raised slightly, toes pointing outward. The background is a cluttered, industrial
                           space with metallic walls, exposed pipes, and a large window casting a faint glow from the
                           right. A single light source illuminates his face from the upper left, creating a strong
                           highlight on his forehead and a deep shadow across his right cheek. The atmosphere is heavy
                           with a soft haze, with faint reflections on the metallic surfaces suggesting a glossy, wet
                           texture.
      

HTTP Request: POST http://127.0.0.1:11434/api/chat "HTTP/1.1 200 OK" [12/06/25 17:40:04] INFO PromptTask 93e67376c3d24e6c9871170ff5186220 Output: A full-body backview of a man in a dimly lit industrial space, low-angle perspective emphasizing posture and surroundings. Pale skin with cool, blue-tinged lighting casts sharp shadows along collarbones and shoulders. Short, tousled hair blends dark brown, chestnut, and silver gradients. Narrow, almond-shaped eyes with dark irises and thick, straight eyebrows; faint shadow under left eye, parted lips revealing a small gap between teeth. Lean torso in a tight, wrinkled black sleeveless shirt clinging to chest and arms. Hands clasped behind back, fingers curled; right foot slightly raised, toes pointing outward. Background features metallic walls, exposed pipes, and a large window casting faint right-side glow. Upper-left light source creates strong forehead highlight and deep shadow across right cheek. Atmosphere: soft haze with glossy, wet reflections on metallic surfaces. No watermarks or logos. 
Prompt executed in 7.82 seconds
Prompt executed in 0.03 seconds
Prompt executed in 0.03 seconds
Prompt executed in 0.03 seconds
got prompt
got prompt
lora key not loaded: diffusion_model.layers.0.attention.to_k.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_k.lora_B.weight
lora key not loaded: diffusion_model.layers.0.attention.to_out.0.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_out.0.lora_B.weight
lora key not loaded: diffusion_model.layers.0.attention.to_q.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_q.lora_B.weight
lora key not loaded: diffusion_model.layers.0.attention.to_v.lora_A.weight
lora key not loaded: diffusion_model.layers.0.attention.to_v.lora_B.weight
[the same eight "lora key not loaded" messages repeat for every layer, 1 through 29]
Requested to load ZImageTEModel_
got prompt
got prompt
loaded completely; 11445.72 MB usable, 7672.25 MB loaded, full load: True
Requested to load Lumina2
loaded partially: 5301.88 MB loaded, lowvram patches: 0
loaded completely; 7904.54 MB usable, 6568.30 MB loaded, full load: True
100%|████████████████████████████████████████| 7/7 [03:52<00:00, 33.20s/it]
100%|████████████████████████████████████████| 5/5 [00:03<00:00,  1.33it/s]
Requested to load AutoencodingEngine
loaded partially: 0.00 MB loaded, lowvram patches: 0
loaded completely; 939.68 MB usable, 159.87 MB loaded, full load: True
Prompt executed in 254.74 seconds
100%|████████████████████████████████████████| 7/7 [00:05<00:00,  1.27it/s]
100%|████████████████████████████████████████| 5/5 [00:03<00:00,  1.31it/s]
Prompt executed in 10.71 seconds
100%|████████████████████████████████████████| 7/7 [00:05<00:00,  1.23it/s]
100%|████████████████████████████████████████| 5/5 [00:03<00:00,  1.31it/s]
Prompt executed in 10.90 seconds
100%|████████████████████████████████████████| 7/7 [00:05<00:00,  1.17it/s]
100%|████████████████████████████████████████| 5/5 [00:04<00:00,  1.15it/s]
Prompt executed in 11.95 seconds
Creating LoraMetadata for BRZRKR_Style_r2.safetensors
Updating lora cache for D:/ComfyUI_windows_portable/ComfyUI/models/loras/ill/BRZRKR_Style_r2.safetensors
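In case it helps triage: every rejected key uses PEFT-style lora_A/lora_B naming, which suggests the LoRA was saved in a key format the loader's mapping does not yet cover. A quick way to check which convention a LoRA file actually uses (a minimal sketch; classify_lora_keys and the sample keys are illustrative, not ComfyUI API):

```python
# Diagnostic sketch: group LoRA weight keys by naming convention, to see
# whether a file uses PEFT-style (lora_A/lora_B) or kohya-style
# (lora_down/lora_up) names. classify_lora_keys is a hypothetical helper.
def classify_lora_keys(keys):
    peft = [k for k in keys if ".lora_A." in k or ".lora_B." in k]
    kohya = [k for k in keys if "lora_down" in k or "lora_up" in k]
    return peft, kohya

# Sample keys mirroring the error log above; with a real file you would read
# the key list via safetensors.safe_open(path, framework="pt").keys().
sample = [
    "diffusion_model.layers.0.attention.to_k.lora_A.weight",
    "diffusion_model.layers.0.attention.to_k.lora_B.weight",
    "lora_unet_layers_0_attention_to_k.lora_down.weight",
]
peft, kohya = classify_lora_keys(sample)
print(f"PEFT-style: {len(peft)}, kohya-style: {len(kohya)}")
# prints: PEFT-style: 2, kohya-style: 1
```

If the PEFT-style count is nonzero for a LoRA that fails to load, the mismatch is in the key names rather than the weights themselves.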

Other

_No response_
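Until the loader handles this key format, one workaround that is sometimes suggested is renaming the keys before loading. This is only a sketch, assuming the loader accepts kohya-style lora_down/lora_up names for this model; verify against your loader before relying on it:

```python
# Workaround sketch: map PEFT-style LoRA key names (lora_A/lora_B) to
# kohya-style names (lora_down/lora_up).
# Assumption: the keys are rejected purely because of the A/B naming;
# this is not a confirmed fix for Z-Image LoRAs.
def peft_to_kohya_key(key: str) -> str:
    """Rename one PEFT-style LoRA weight key to kohya-style."""
    return (key
            .replace(".lora_A.weight", ".lora_down.weight")
            .replace(".lora_B.weight", ".lora_up.weight"))

# Example: one of the keys from the error log above.
print(peft_to_kohya_key("diffusion_model.layers.0.attention.to_k.lora_A.weight"))
# prints: diffusion_model.layers.0.attention.to_k.lora_down.weight
```

Applied to every tensor name and re-saved with safetensors, this would produce a file whose keys match the other convention.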

Same issue: https://github.com/kohya-ss/musubi-tuner/issues/807

peepeepeepoopoopoo (Dec 24 '25)