
XlabsSampler. Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.

Djon253 opened this issue 1 year ago · 3 comments

Expected Behavior

Inference runs and produces an image.

Actual Behavior

The error is raised while the XlabsSampler node is executing.

Steps to Reproduce

1

Debug Logs

# ComfyUI Error Report
## Error Details
- **Node Type:** XlabsSampler
- **Exception Type:** TypeError
- **Exception Message:** Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
## Stack Trace

  File "/Users/osborn/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/Users/osborn/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/Users/osborn/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/Users/osborn/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 354, in sampling
    x = denoise(

  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 193, in denoise
    pred = model_forward(

  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 28, in model_forward
    img = model.img_in(img)

  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)

  File "/Users/osborn/ComfyUI/comfy/ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)

  File "/Users/osborn/ComfyUI/comfy/ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)

  File "/Users/osborn/ComfyUI/comfy/ops.py", line 42, in cast_bias_weight
    bias = comfy.model_management.cast_to(s.bias, bias_dtype, device, non_blocking=non_blocking, copy=has_function)

  File "/Users/osborn/ComfyUI/comfy/model_management.py", line 851, in cast_to
    return weight.to(dtype=dtype, copy=copy)

System Information

  • ComfyUI Version: v0.2.7-6-g2865f91
  • Arguments: main.py
  • OS: posix
  • Python Version: 3.10.15 (main, Oct 3 2024, 02:24:49) [Clang 14.0.6 ]
  • Embedded Python: false
  • PyTorch Version: 2.3.1

Devices

  • Name: mps
    • Type: mps
    • VRAM Total: 68719476736
    • VRAM Free: 24615649280
    • Torch VRAM Total: 68719476736
    • Torch VRAM Free: 24615649280

Logs

2024-11-08 17:43:36,324 - root - INFO - Total VRAM 65536 MB, total RAM 65536 MB
2024-11-08 17:43:36,324 - root - INFO - pytorch version: 2.3.1
2024-11-08 17:43:36,324 - root - INFO - Set vram state to: SHARED
2024-11-08 17:43:36,324 - root - INFO - Device: mps
2024-11-08 17:43:36,860 - root - INFO - Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
2024-11-08 17:43:37,466 - root - INFO - [Prompt Server] web root: /Users/osborn/ComfyUI/web
2024-11-08 17:43:46,618 - root - INFO - --------------
2024-11-08 17:43:46,618 - root - INFO - ### Mixlab Nodes: Loaded
2024-11-08 17:43:46,623 - root - INFO - ChatGPT.available True
2024-11-08 17:43:46,624 - root - INFO - edit_mask.available True
2024-11-08 17:43:46,834 - root - INFO - ClipInterrogator.available True
2024-11-08 17:43:46,930 - root - INFO - PromptGenerate.available True
2024-11-08 17:43:46,930 - root - INFO - ChinesePrompt.available True
2024-11-08 17:43:46,930 - root - INFO - RembgNode_.available False
2024-11-08 17:43:47,176 - root - INFO - TripoSR.available
2024-11-08 17:43:47,176 - root - INFO - MiniCPMNode.available
2024-11-08 17:43:47,195 - root - INFO - Scenedetect.available
2024-11-08 17:43:47,196 - root - INFO - FishSpeech.available False
2024-11-08 17:43:47,202 - root - INFO - SenseVoice.available
2024-11-08 17:43:47,215 - root - INFO - Whisper.available False
2024-11-08 17:43:47,219 - root - INFO - FalVideo.available
2024-11-08 17:43:47,219 - root - INFO - --------------
2024-11-08 17:43:49,953 - root - INFO - 
Import times for custom nodes:
2024-11-08 17:43:49,953 - root - INFO -    0.0 seconds: /Users/osborn/ComfyUI/custom_nodes/websocket_image_save.py
2024-11-08 17:43:49,953 - root - INFO -    0.0 seconds: /Users/osborn/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale
2024-11-08 17:43:49,953 - root - INFO -    0.0 seconds: /Users/osborn/ComfyUI/custom_nodes/rgthree-comfy
2024-11-08 17:43:49,953 - root - INFO -    0.0 seconds: /Users/osborn/ComfyUI/custom_nodes/ComfyUI-GGUF
2024-11-08 17:43:49,953 - root - INFO -    0.0 seconds: /Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui
2024-11-08 17:43:49,953 - root - INFO -    0.1 seconds: /Users/osborn/ComfyUI/custom_nodes/ComfyUI-Manager
2024-11-08 17:43:49,953 - root - INFO -    2.7 seconds: /Users/osborn/ComfyUI/custom_nodes/ComfyUI-PuLID-Flux
2024-11-08 17:43:49,953 - root - INFO -    9.5 seconds: /Users/osborn/ComfyUI/custom_nodes/comfyui-mixlab-nodes
2024-11-08 17:43:49,953 - root - INFO - 
2024-11-08 17:43:49,958 - root - INFO - Starting server

2024-11-08 17:43:49,958 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-11-08 17:44:20,104 - root - INFO - got prompt
2024-11-08 17:44:20,139 - root - INFO - Using split attention in VAE
2024-11-08 17:44:20,140 - root - INFO - Using split attention in VAE
2024-11-08 17:44:20,382 - root - INFO - Requested to load FluxClipModel_
2024-11-08 17:44:20,382 - root - INFO - Loading 1 new model
2024-11-08 17:44:20,387 - root - INFO - loaded completely 0.0 4777.53759765625 True
2024-11-08 17:44:20,473 - root - WARNING - clip missing: ['text_projection.weight']
2024-11-08 17:44:28,011 - root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-11-08 17:44:28,011 - root - INFO - model_type FLUX
2024-11-08 17:45:22,170 - root - INFO - Requested to load Flux
2024-11-08 17:45:22,170 - root - INFO - Loading 1 new model
2024-11-08 17:45:22,185 - root - INFO - loaded completely 0.0 11350.048889160156 True
2024-11-08 17:45:22,379 - root - ERROR - !!! Exception during processing !!! Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
2024-11-08 17:45:22,383 - root - ERROR - Traceback (most recent call last):
  File "/Users/osborn/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/osborn/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/osborn/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/osborn/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 354, in sampling
    x = denoise(
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 193, in denoise
    pred = model_forward(
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 28, in model_forward
    img = model.img_in(img)
  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 42, in cast_bias_weight
    bias = comfy.model_management.cast_to(s.bias, bias_dtype, device, non_blocking=non_blocking, copy=has_function)
  File "/Users/osborn/ComfyUI/comfy/model_management.py", line 851, in cast_to
    return weight.to(dtype=dtype, copy=copy)
TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.

2024-11-08 17:45:22,383 - root - INFO - Prompt executed in 62.28 seconds
2024-11-08 17:51:44,705 - root - INFO - got prompt
2024-11-08 17:51:44,811 - root - ERROR - !!! Exception during processing !!! Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
2024-11-08 17:51:44,812 - root - ERROR - Traceback (most recent call last):
  File "/Users/osborn/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/osborn/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/osborn/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/osborn/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 354, in sampling
    x = denoise(
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 193, in denoise
    pred = model_forward(
  File "/Users/osborn/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 28, in model_forward
    img = model.img_in(img)
  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/osborn/miniconda3/envs/comfyui_1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
  File "/Users/osborn/ComfyUI/comfy/ops.py", line 42, in cast_bias_weight
    bias = comfy.model_management.cast_to(s.bias, bias_dtype, device, non_blocking=non_blocking, copy=has_function)
  File "/Users/osborn/ComfyUI/comfy/model_management.py", line 851, in cast_to
    return weight.to(dtype=dtype, copy=copy)
TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.

2024-11-08 17:51:44,812 - root - INFO - Prompt executed in 0.10 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":24,"last_link_id":42,"nodes":[{"id":7,"type":"VAEDecode","pos":{"0":1371,"1":152},"size":{"0":210,"1":46},"flags":{},"order":8,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":42,"slot_index":0},{"name":"vae","type":"VAE","link":7}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[31],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":19,"type":"CLIPTextEncodeFlux","pos":{"0":97,"1":123},"size":{"0":400,"1":200},"flags":{},"order":6,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":27,"slot_index":0}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[40],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"CLIPTextEncodeFlux"},"widgets_values":["","",4]},{"id":10,"type":"UNETLoader","pos":{"0":209,"1":387},"size":{"0":315,"1":82},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[36],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["flux1-dev.safetensors","fp8_e4m3fn"]},{"id":23,"type":"FluxLoraLoader","pos":{"0":506,"1":231},"size":{"0":315,"1":82},"flags":{},"order":4,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":36}],"outputs":[{"name":"MODEL","type":"MODEL","links":[38],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"FluxLoraLoader"},"widgets_values":["furry_lora.safetensors",1]},{"id":5,"type":"CLIPTextEncodeFlux","pos":{"0":518,"1":-63},"size":{"0":400,"1":200},"flags":{},"order":5,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":2,"slot_index":0}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[39],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"CLIPTextEncodeFlux"},"widgets_values":["furry in the city with text \"hello world\"","furry in the city with text \"hello world\"",3.5]},{"id":8,"type":"VAELoader","pos":{"0":1102,"1":48},"size":{"0":315,"1":58},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[7],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["ae.safetensors"]},{"id":21,"type":"PreviewImage","pos":{"0":1612,"1":128},"size":{"0":364.77178955078125,"1":527.6837158203125},"flags":{},"order":9,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":31,"slot_index":0}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":6,"type":"EmptyLatentImage","pos":{"0":626,"1":428},"size":{"0":315,"1":106},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[41],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[1024,1024,2]},{"id":24,"type":"XlabsSampler","pos":{"0":1013,"1":169},"size":{"0":342.5999755859375,"1":282},"flags":{},"order":7,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":38},{"name":"conditioning","type":"CONDITIONING","link":39},{"name":"neg_conditioning","type":"CONDITIONING","link":40},{"name":"latent_image","type":"LATENT","link":41,"shape":7},{"name":"controlnet_condition","type":"ControlNetCondition","link":null,"shape":7}],"outputs":[{"name":"latent","type":"LATENT","links":[42]}],"properties":{"Node name for S&R":"XlabsSampler"},"widgets_values":[600258048956591,"randomize",20,20,3,0,1]},{"id":4,"type":"DualCLIPLoader","pos":{"0":121,"1":-111},"size":{"0":315,"1":106},"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[2,27],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"DualCLIPLoader"},"widgets_values":["clip_l.safetensors","t5xxl_fp8_e4m3fn.safetensors","flux"]}],"links":[[2,4,0,5,0,"CLIP"],[7,8,0,7,1,"VAE"],[27,4,0,19,0,"CLIP"],[31,7,0,21,0,"IMAGE"],[36,10,0,23,0,"MODEL"],[38,23,0,24,0,"MODEL"],[39,5,0,24,1,"CONDITIONING"],[40,19,0,24,2,"CONDITIONING"],[41,6,0,24,3,"LATENT"],[42,24,0,7,0,"LATENT"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.9646149645000006,"offset":[-113.06857937189307,125.66804176669243]}},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)



### Other
I have a MacBook Pro with an M2 Max (Apple Silicon) and 64 GB of RAM. I installed ComfyUI and then Flux, but when I run the model I get this error. I am a beginner, and I see that many people have this problem. Please help me solve it.

Djon253 avatar Nov 08 '24 14:11 Djon253


I am facing the same problem when I try to upscale an image; in my case it's the SUPIR sampler that shows the error. I have a MacBook Pro M3 Pro with 18 GB of RAM.

Anant-Raj17 avatar Nov 11 '24 18:11 Anant-Raj17

After chatting with ChatGPT I arrived at these commands:

PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py --force-fp32 --fp32-unet --fp32-vae --fp32-text-enc
PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py --force-fp16 --fp16-unet --fp16-vae --fp32-text-enc --reserve-vram 2

The error disappeared, but instead of generating a picture I got a black image. I haven't solved that yet and am still looking for a way out.

Djon253 avatar Nov 11 '24 18:11 Djon253

I had the same issue. I deleted the node that was causing it (SamplerCustomAdvanced), replaced it with a fresh copy of the same node, and it worked.

Technicology avatar Dec 02 '24 01:12 Technicology

This is my log:

ComfyUI Error Report

Error Details

  • Node ID: 3
  • Node Type: KSampler
  • Exception Type: TypeError
  • Exception Message: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.

Stack Trace

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1502, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1469, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1013, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 911, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 897, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 866, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 850, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 707, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Volumes/DataFiles/AI/ComfyUI/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/k_diffusion/sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 379, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 832, in __call__
    return self.predict_noise(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 835, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 359, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 195, in calc_cond_batch
    return executor.execute(model, conds, x_in, timestep, model_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 308, in _calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_base.py", line 129, in apply_model
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_base.py", line 158, in _apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Volumes/DataFiles/AI/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Volumes/DataFiles/AI/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/flux/model.py", line 204, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options, attn_mask=kwargs.get("attention_mask", None))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/flux/model.py", line 109, in forward_orig
    img = self.img_in(img)
          ^^^^^^^^^^^^^^^^

  File "/Volumes/DataFiles/AI/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Volumes/DataFiles/AI/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ops.py", line 42, in cast_bias_weight
    bias = comfy.model_management.cast_to(s.bias, bias_dtype, device, non_blocking=non_blocking, copy=has_function)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_management.py", line 830, in cast_to
    return weight.to(dtype=dtype, copy=copy)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

System Information

  • ComfyUI Version: unknown
  • Arguments: /Applications/ComfyUI.app/Contents/Resources/ComfyUI/main.py --user-directory /Volumes/DataFiles/AI/ComfyUI/user --input-directory /Volumes/DataFiles/AI/ComfyUI/input --output-directory /Volumes/DataFiles/AI/ComfyUI/output --front-end-root /Applications/ComfyUI.app/Contents/Resources/ComfyUI/web_custom_versions/desktop_app --extra-model-paths-config /Users/kuder/Library/Application Support/ComfyUI/extra_models_config.yaml --port 8000 --listen 127.0.0.1
  • OS: posix
  • Python Version: 3.12.4 (main, Jul 25 2024, 22:11:22) [Clang 18.1.8 ]
  • Embedded Python: false
  • PyTorch Version: 2.6.0.dev20241219

Devices

  • Name: mps
    • Type: mps
    • VRAM Total: 17179869184
    • VRAM Free: 2307768320
    • Torch VRAM Total: 17179869184
    • Torch VRAM Free: 2307768320

Logs

2024-12-20T12:14:26.425484 - Adding extra search path checkpoints /Volumes/DataFiles/AI/ComfyUI/models/checkpoints/
2024-12-20T12:14:26.425538 - Adding extra search path classifiers /Volumes/DataFiles/AI/ComfyUI/models/classifiers/
2024-12-20T12:14:26.425558 - Adding extra search path clip /Volumes/DataFiles/AI/ComfyUI/models/clip/
2024-12-20T12:14:26.425573 - Adding extra search path clip_vision /Volumes/DataFiles/AI/ComfyUI/models/clip_vision/
2024-12-20T12:14:26.425584 - Adding extra search path configs /Volumes/DataFiles/AI/ComfyUI/models/configs/
2024-12-20T12:14:26.425594 - Adding extra search path controlnet /Volumes/DataFiles/AI/ComfyUI/models/controlnet/
2024-12-20T12:14:26.425603 - Adding extra search path diffusers /Volumes/DataFiles/AI/ComfyUI/models/diffusers/
2024-12-20T12:14:26.425611 - Adding extra search path diffusion_models /Volumes/DataFiles/AI/ComfyUI/models/diffusion_models/
2024-12-20T12:14:26.425619 - Adding extra search path embeddings /Volumes/DataFiles/AI/ComfyUI/models/embeddings/
2024-12-20T12:14:26.425627 - Adding extra search path gligen /Volumes/DataFiles/AI/ComfyUI/models/gligen/
2024-12-20T12:14:26.425635 - Adding extra search path hypernetworks /Volumes/DataFiles/AI/ComfyUI/models/hypernetworks/
2024-12-20T12:14:26.425644 - Adding extra search path loras /Volumes/DataFiles/AI/ComfyUI/models/loras/
2024-12-20T12:14:26.425652 - Adding extra search path photomaker /Volumes/DataFiles/AI/ComfyUI/models/photomaker/
2024-12-20T12:14:26.425661 - Adding extra search path style_models /Volumes/DataFiles/AI/ComfyUI/models/style_models/
2024-12-20T12:14:26.425672 - Adding extra search path unet /Volumes/DataFiles/AI/ComfyUI/models/unet/
2024-12-20T12:14:26.425680 - Adding extra search path upscale_models /Volumes/DataFiles/AI/ComfyUI/models/upscale_models/
2024-12-20T12:14:26.425688 - Adding extra search path vae /Volumes/DataFiles/AI/ComfyUI/models/vae/
2024-12-20T12:14:26.425695 - Adding extra search path vae_approx /Volumes/DataFiles/AI/ComfyUI/models/vae_approx/
2024-12-20T12:14:26.425703 - Adding extra search path animatediff_models /Volumes/DataFiles/AI/ComfyUI/models/animatediff_models/
2024-12-20T12:14:26.425711 - Adding extra search path animatediff_motion_lora /Volumes/DataFiles/AI/ComfyUI/models/animatediff_motion_lora/
2024-12-20T12:14:26.425720 - Adding extra search path animatediff_video_formats /Volumes/DataFiles/AI/ComfyUI/models/animatediff_video_formats/
2024-12-20T12:14:26.425727 - Adding extra search path ipadapter /Volumes/DataFiles/AI/ComfyUI/models/ipadapter/
2024-12-20T12:14:26.425735 - Adding extra search path liveportrait /Volumes/DataFiles/AI/ComfyUI/models/liveportrait/
2024-12-20T12:14:26.425743 - Adding extra search path insightface /Volumes/DataFiles/AI/ComfyUI/models/insightface/
2024-12-20T12:14:26.425751 - Adding extra search path layerstyle /Volumes/DataFiles/AI/ComfyUI/models/layerstyle/
2024-12-20T12:14:26.425759 - Adding extra search path LLM /Volumes/DataFiles/AI/ComfyUI/models/LLM/
2024-12-20T12:14:26.425767 - Adding extra search path Joy_caption /Volumes/DataFiles/AI/ComfyUI/models/Joy_caption/
2024-12-20T12:14:26.425776 - Adding extra search path sams /Volumes/DataFiles/AI/ComfyUI/models/sams/
2024-12-20T12:14:26.425783 - Adding extra search path blip /Volumes/DataFiles/AI/ComfyUI/models/blip/
2024-12-20T12:14:26.425791 - Adding extra search path CogVideo /Volumes/DataFiles/AI/ComfyUI/models/CogVideo/
2024-12-20T12:14:26.425799 - Adding extra search path xlabs /Volumes/DataFiles/AI/ComfyUI/models/xlabs/
2024-12-20T12:14:26.425807 - Adding extra search path instantid /Volumes/DataFiles/AI/ComfyUI/models/instantid/
2024-12-20T12:14:26.425814 - Adding extra search path custom_nodes /Volumes/DataFiles/AI/ComfyUI/custom_nodes/
2024-12-20T12:14:26.425825 - Adding extra search path download_model_base /Volumes/DataFiles/AI/ComfyUI/models
2024-12-20T12:14:26.425837 - Setting output directory to: /Volumes/DataFiles/AI/ComfyUI/output
2024-12-20T12:14:26.425853 - Setting input directory to: /Volumes/DataFiles/AI/ComfyUI/input
2024-12-20T12:14:26.425862 - Setting user directory to: /Volumes/DataFiles/AI/ComfyUI/user
2024-12-20T12:14:26.433858 - [START] Security scan2024-12-20T12:14:26.433868 - 
2024-12-20T12:14:26.930513 - [DONE] Security scan2024-12-20T12:14:26.930549 - 
2024-12-20T12:14:26.986725 - ## ComfyUI-Manager: installing dependencies done.2024-12-20T12:14:26.986774 - 
2024-12-20T12:14:26.986793 - ** ComfyUI startup time:2024-12-20T12:14:26.986806 -  2024-12-20T12:14:26.986819 - 2024-12-20 12:14:26.9867792024-12-20T12:14:26.986830 - 
2024-12-20T12:14:26.986864 - ** Platform:2024-12-20T12:14:26.986875 -  2024-12-20T12:14:26.986887 - Darwin2024-12-20T12:14:26.986897 - 
2024-12-20T12:14:26.986909 - ** Python version:2024-12-20T12:14:26.986919 -  2024-12-20T12:14:26.986929 - 3.12.4 (main, Jul 25 2024, 22:11:22) [Clang 18.1.8 ]2024-12-20T12:14:26.986938 - 
2024-12-20T12:14:26.986948 - ** Python executable:2024-12-20T12:14:26.987002 -  2024-12-20T12:14:26.987068 - /Volumes/DataFiles/AI/ComfyUI/.venv/bin/python2024-12-20T12:14:26.987085 - 
2024-12-20T12:14:26.987098 - ** ComfyUI Path:2024-12-20T12:14:26.987109 -  2024-12-20T12:14:26.987119 - /Applications/ComfyUI.app/Contents/Resources/ComfyUI2024-12-20T12:14:26.987128 - 
2024-12-20T12:14:26.987170 - ** Log path:2024-12-20T12:14:26.987179 -  2024-12-20T12:14:26.987188 - /Volumes/DataFiles/AI/ComfyUI/comfyui.log2024-12-20T12:14:26.987198 - 
2024-12-20T12:14:29.558648 - 
Prestartup times for custom nodes:2024-12-20T12:14:29.558702 - 
2024-12-20T12:14:29.558724 -    3.1 seconds:2024-12-20T12:14:29.558740 -  2024-12-20T12:14:29.558753 - /Applications/ComfyUI.app/Contents/Resources/ComfyUI/custom_nodes/ComfyUI-Manager2024-12-20T12:14:29.558764 - 
2024-12-20T12:14:29.558776 - 
2024-12-20T12:14:37.576143 - Total VRAM 16384 MB, total RAM 16384 MB
2024-12-20T12:14:37.576249 - pytorch version: 2.6.0.dev20241219
2024-12-20T12:14:37.576370 - Set vram state to: SHARED
2024-12-20T12:14:37.576405 - Device: mps
2024-12-20T12:14:40.340653 - Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
2024-12-20T12:14:41.097877 - [Prompt Server] web root: /Applications/ComfyUI.app/Contents/Resources/ComfyUI/web_custom_versions/desktop_app
2024-12-20T12:14:42.820019 - [Crystools [0;32mINFO[0m] Crystools version: 1.21.0
2024-12-20T12:14:42.861423 - [Crystools [0;32mINFO[0m] CPU: Apple M1 Pro - Arch: arm64 - OS: Darwin 23.6.0
2024-12-20T12:14:42.861689 - [Crystools [0;31mERROR[0m] Could not init pynvml (Nvidia).NVML Shared Library Not Found
2024-12-20T12:14:42.861802 - [Crystools [0;33mWARNING[0m] No GPU with CUDA detected.
2024-12-20T12:14:42.864866 - ### Loading: ComfyUI-Manager (V2.55.3)2024-12-20T12:14:42.864887 - 
2024-12-20T12:14:42.872503 - ### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository)2024-12-20T12:14:42.872561 - 
2024-12-20T12:14:42.875010 - 
Import times for custom nodes:
2024-12-20T12:14:42.875094 -    0.0 seconds: /Applications/ComfyUI.app/Contents/Resources/ComfyUI/custom_nodes/websocket_image_save.py
2024-12-20T12:14:42.875125 -    0.0 seconds: /Applications/ComfyUI.app/Contents/Resources/ComfyUI/custom_nodes/ComfyUI-Manager
2024-12-20T12:14:42.875151 -    0.1 seconds: /Volumes/DataFiles/AI/ComfyUI/custom_nodes/ComfyUI-Crystools
2024-12-20T12:14:42.875442 - 
2024-12-20T12:14:42.879879 - Starting server

2024-12-20T12:14:42.880545 - To see the GUI go to: http://127.0.0.1:8000
2024-12-20T12:14:43.179704 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json2024-12-20T12:14:43.179734 - 
2024-12-20T12:14:43.195065 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json2024-12-20T12:14:43.195119 - 
2024-12-20T12:14:43.264023 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json2024-12-20T12:14:43.264061 - 
2024-12-20T12:14:43.320881 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json2024-12-20T12:14:43.320917 - 
2024-12-20T12:14:43.424367 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2024-12-20T12:14:43.424402 - 
2024-12-20T12:14:43.849450 - FETCH DATA from: /Applications/ComfyUI.app/Contents/Resources/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json2024-12-20T12:14:43.849472 - 2024-12-20T12:14:43.852521 -  [DONE]2024-12-20T12:14:43.852551 - 
2024-12-20T12:14:47.858333 - got prompt
2024-12-20T12:14:48.021450 - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-12-20T12:14:48.021793 - model_type FLOW
2024-12-20T12:15:26.127902 - Using split attention in VAE
2024-12-20T12:15:26.148042 - Using split attention in VAE
2024-12-20T12:15:27.375117 - Requested to load FluxClipModel_
2024-12-20T12:15:27.385882 - loaded completely 9.5367431640625e+25 4777.53759765625 True
2024-12-20T12:15:39.648306 - loaded straight to GPU
2024-12-20T12:15:39.648875 - Requested to load Flux
2024-12-20T12:15:39.659474 - loaded completely 9.5367431640625e+25 11340.311584472656 True
2024-12-20T12:15:50.561892 - 
  0%|          | 0/20 [00:00<?, ?it/s]2024-12-20T12:15:57.667033 - 
  0%|          | 0/20 [00:07<?, ?it/s]2024-12-20T12:15:57.667111 - 
2024-12-20T12:15:57.686821 - !!! Exception during processing !!! Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
2024-12-20T12:15:57.693757 - Traceback (most recent call last):
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1502, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1469, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1013, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 911, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 897, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 866, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 850, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 707, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/DataFiles/AI/ComfyUI/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/k_diffusion/sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 379, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 832, in __call__
    return self.predict_noise(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 835, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 359, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 195, in calc_cond_batch
    return executor.execute(model, conds, x_in, timestep, model_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 308, in _calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_base.py", line 129, in apply_model
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_base.py", line 158, in _apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/DataFiles/AI/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/DataFiles/AI/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/flux/model.py", line 204, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options, attn_mask=kwargs.get("attention_mask", None))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ldm/flux/model.py", line 109, in forward_orig
    img = self.img_in(img)
          ^^^^^^^^^^^^^^^^
  File "/Volumes/DataFiles/AI/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/DataFiles/AI/ComfyUI/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ops.py", line 63, in forward_comfy_cast_weights
    weight, bias = cast_bias_weight(self, input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ops.py", line 42, in cast_bias_weight
    bias = comfy.model_management.cast_to(s.bias, bias_dtype, device, non_blocking=non_blocking, copy=has_function)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_management.py", line 830, in cast_to
    return weight.to(dtype=dtype, copy=copy)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.

2024-12-20T12:15:57.696732 - Prompt executed in 69.84 seconds
2024-12-20T12:16:02.576737 - Failed to get ComfyUI version: Command '['git', 'describe', '--tags']' returned non-zero exit status 128.

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":10,"last_link_id":10,"nodes":[{"id":7,"type":"CLIPTextEncode","pos":[338.11053466796875,375.502685546875],"size":[425.27801513671875,180.6060791015625],"flags":{},"order":3,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":5,"label":"clip"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0,"label":"CONDITIONING"}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["text, watermark"]},{"id":6,"type":"CLIPTextEncode","pos":[352.57183837890625,132.00039672851562],"size":[422.84503173828125,164.31304931640625],"flags":{},"order":2,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":3,"label":"clip"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[4],"slot_index":0,"label":"CONDITIONING"}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["beautiful scenery nature glass bottle landscape, , purple galaxy bottle,"]},{"id":3,"type":"KSampler","pos":[907.063232421875,36.91522216796875],"size":[315,262],"flags":{},"order":4,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":1,"label":"model"},{"name":"positive","type":"CONDITIONING","link":4,"label":"positive"},{"name":"negative","type":"CONDITIONING","link":6,"label":"negative"},{"name":"latent_image","type":"LATENT","link":2,"label":"latent_image"}],"outputs":[{"name":"LATENT","type":"LATENT","links":[7],"slot_index":0,"label":"LATENT"}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[615875261343376,"randomize",20,8,"euler","normal",1]},{"id":8,"type":"VAEDecode","pos":[1289.130615234375,-84.66534423828125],"size":[210,46],"flags":{},"order":5,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":7,"label":"samples"},{"name":"vae","type":"VAE","link":8,"label":"vae"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[9,10],"slot_index":0,"label":"IMAGE"}],"properties":{"Node name for 
S&R":"VAEDecode"},"widgets_values":[]},{"id":5,"type":"EmptyLatentImage","pos":[524.8822021484375,628.96484375],"size":[315,106],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[2],"slot_index":0,"label":"LATENT"}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[512,512,1]},{"id":10,"type":"PreviewImage","pos":[1637.43505859375,-108.10433197021484],"size":[210,26],"flags":{},"order":7,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":10,"label":"images"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":9,"type":"SaveImage","pos":[1649.5225830078125,379.83465576171875],"size":[210,58],"flags":{},"order":6,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9,"label":"images"}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":4,"type":"CheckpointLoaderSimple","pos":[-84.388916015625,-21.341339111328125],"size":[315,98],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[1],"slot_index":0,"label":"MODEL"},{"name":"CLIP","type":"CLIP","links":[3,5],"slot_index":1,"label":"CLIP"},{"name":"VAE","type":"VAE","links":[8],"slot_index":2,"label":"VAE"}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["FLUX1/flux1-schnell-fp8.safetensors"]}],"links":[[1,4,0,3,0,"MODEL"],[2,5,0,3,3,"LATENT"],[3,4,1,6,0,"CLIP"],[4,6,0,3,1,"CONDITIONING"],[5,4,1,7,0,"CLIP"],[6,7,0,3,2,"CONDITIONING"],[7,3,0,8,0,"LATENT"],[8,4,2,8,1,"VAE"],[9,8,0,9,0,"IMAGE"],[10,8,0,10,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.6727499949325625,"offset":[274.30150959345883,202.1927435861784]}},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here) Screenshot 2024-12-20 at 12 19 09

voxelium avatar Dec 20 '24 08:12 voxelium

Did anyone find a solution for this?

SamplerCustomAdvanced Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.

I'm on a MacBook Pro and have updated PyTorch to the latest version.

bigeye-studios avatar Jan 08 '25 05:01 bigeye-studios

Same here, MacBook Pro M4.

salmazov avatar Jan 09 '25 16:01 salmazov

I'm getting this too on an M3.

jacobprice808 avatar Jan 10 '25 02:01 jacobprice808

Help us someone

YFrtn avatar Jan 15 '25 18:01 YFrtn

Same problem, M3 Max.

cavemanlee avatar Jan 21 '25 07:01 cavemanlee

The MPS backend doesn't implement some operations and dtypes (including Float8_e4m3fn, the one in this error), so those have to be handed off to the CPU. Before starting ComfyUI, enter this into your terminal: `export PYTORCH_ENABLE_MPS_FALLBACK=1`
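A minimal sketch of that setup (the launch command at the end is an assumption; adjust it for your own install):

```shell
# Enable PyTorch's CPU fallback for ops/dtypes the MPS backend lacks.
# This must be set in the same shell session that launches ComfyUI.
export PYTORCH_ENABLE_MPS_FALLBACK=1

# Verify the variable is set:
echo "$PYTORCH_ENABLE_MPS_FALLBACK"   # prints 1
```

Then start ComfyUI from that same terminal so the variable is inherited (for a source install, something like `python main.py`).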

This lets PyTorch hand the unsupported calculations over to your CPU, which can compute them, just at a relatively slower rate. Using Flux on a Mac is a little tough because our GPUs miss out on many of the common speedup techniques like torch.compile or bitsandbytes, so you might want to stick with the excellent SDXL finetunes available online.

If you want to use Flux, you will either need a higher-precision fp16 safetensors checkpoint, or a GGUF-quantized model loaded via the GGUF loader node (note: GGUF quants are their own format, not fp8, which is exactly the dtype MPS rejects here). Of these, the Q4_K_M quants tend to be the most reliable on our temperamental little MacBooks, and you can find some that produce images within 4-12 steps. An easy place to start would be installing the ComfyUI-GGUF custom nodes and then going through the very useful collections on Hugging Face from Calcuis, where you'll find quantized versions of many popular models like Flux, LTXVideo, HunyuanVideo, and more.
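As a quick way to see the root cause for yourself, you can probe whether a given torch dtype can be allocated on a given backend at all. This is a sketch; `dtype_supported` is a hypothetical helper, not a ComfyUI or PyTorch API:

```python
import torch

def dtype_supported(dtype, device):
    """Return True if `device` can allocate a tensor of `dtype`."""
    try:
        torch.zeros(1, dtype=dtype, device=device)
        return True
    except (TypeError, RuntimeError):
        return False

# float32 works everywhere; on Apple Silicon, float8_e4m3fn on "mps"
# raises the same TypeError seen in this issue and returns False here.
print(dtype_supported(torch.float32, "cpu"))  # True
if torch.backends.mps.is_available():
    print(dtype_supported(torch.float8_e4m3fn, "mps"))
```

If the fp8 probe returns False on your machine, any fp8 checkpoint (like `flux1-schnell-fp8.safetensors` in the attached workflow) will hit this error the moment a weight is cast on MPS.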

(I'm no expert, so I hope someone more knowledgeable will come along and either verify or debunk the above.)

Sharky-Mac avatar Jan 31 '25 17:01 Sharky-Mac

ComfyUI Error Report

Error Details

  • Node Type: XlabsSampler
  • Exception Type: torch.cuda.OutOfMemoryError
  • Exception Message: Allocation on device

Stack Trace

  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui-main\nodes.py", line 397, in sampling
    inmodel.diffusion_model.to(device)

  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1173, in to
    return self._apply(convert)

  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)

  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)

  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)

  [Previous line repeated 1 more time]

  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
    param_applied = fn(param)

  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
    return t.to(

System Information

  • ComfyUI Version: v0.2.2
  • Arguments: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\main.py --auto-launch --preview-method auto --disable-cuda-malloc
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.3.1+cu121

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3070 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 8589410304
    • VRAM Free: 4396034880
    • Torch VRAM Total: 3053453312
    • Torch VRAM Free: 62270272

Logs

2025-02-28 19:34:18,440 - root - INFO - Total VRAM 8192 MB, total RAM 32556 MB
2025-02-28 19:34:18,440 - root - INFO - pytorch version: 2.3.1+cu121
2025-02-28 19:34:21,740 - root - INFO - xformers version: 0.0.27
2025-02-28 19:34:21,740 - root - INFO - Set vram state to: NORMAL_VRAM
2025-02-28 19:34:21,740 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3070 Ti : cudaMallocAsync
2025-02-28 19:34:22,082 - root - INFO - Using xformers cross attention
2025-02-28 19:34:23,323 - root - INFO - [Prompt Server] web root: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\web
2025-02-28 19:34:23,332 - root - INFO - Adding extra search path checkpoints G:\ai\秋叶novelai-webui-aki-v3\models/Stable-diffusion
2025-02-28 19:34:23,332 - root - INFO - Adding extra search path configs G:\ai\秋叶novelai-webui-aki-v3\models/Stable-diffusion
2025-02-28 19:34:23,332 - root - INFO - Adding extra search path vae G:\ai\秋叶novelai-webui-aki-v3\models/VAE
2025-02-28 19:34:23,332 - root - INFO - Adding extra search path loras G:\ai\秋叶novelai-webui-aki-v3\models/Lora
2025-02-28 19:34:23,332 - root - INFO - Adding extra search path loras G:\ai\秋叶novelai-webui-aki-v3\models/LyCORIS
2025-02-28 19:34:23,332 - root - INFO - Adding extra search path upscale_models G:\ai\秋叶novelai-webui-aki-v3\models/ESRGAN
2025-02-28 19:34:23,332 - root - INFO - Adding extra search path upscale_models G:\ai\秋叶novelai-webui-aki-v3\models/RealESRGAN
2025-02-28 19:34:23,332 - root - INFO - Adding extra search path upscale_models G:\ai\秋叶novelai-webui-aki-v3\models/SwinIR
2025-02-28 19:34:23,332 - root - INFO - Adding extra search path embeddings G:\ai\秋叶novelai-webui-aki-v3\embeddings
2025-02-28 19:34:23,332 - root - INFO - Adding extra search path hypernetworks G:\ai\秋叶novelai-webui-aki-v3\models/hypernetworks
2025-02-28 19:34:23,332 - root - INFO - Adding extra search path controlnet G:\ai\秋叶novelai-webui-aki-v3\models/ControlNet
2025-02-28 19:34:24,492 - root - WARNING - Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\nodes.py", line 1993, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1016, in get_code
  File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\ComfyUI-Easy-Use\\__init__.py'

2025-02-28 19:34:24,492 - root - WARNING - Cannot import O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Easy-Use module for custom nodes: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\ComfyUI-Easy-Use\\__init__.py'
2025-02-28 19:34:24,495 - root - WARNING - Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\nodes.py", line 1993, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1016, in get_code
  File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\ComfyUI-Flux-Prompt-Saver\\__init__.py'

2025-02-28 19:34:24,495 - root - WARNING - Cannot import O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Flux-Prompt-Saver module for custom nodes: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\ComfyUI-Flux-Prompt-Saver\\__init__.py'
2025-02-28 19:34:51,730 - root - INFO - Total VRAM 8192 MB, total RAM 32556 MB
2025-02-28 19:34:51,730 - root - INFO - pytorch version: 2.3.1+cu121
2025-02-28 19:34:51,730 - root - INFO - xformers version: 0.0.27
2025-02-28 19:34:51,730 - root - INFO - Set vram state to: NORMAL_VRAM
2025-02-28 19:34:51,730 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3070 Ti : cudaMallocAsync
2025-02-28 19:34:52,743 - root - WARNING - Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\utils\import_utils.py", line 868, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\loaders\peft.py", line 40, in <module>
    from .lora_base import _fetch_state_dict
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\loaders\lora_base.py", line 47, in <module>
    from peft.tuners.tuners_utils import BaseTunerLayer
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\peft\__init__.py", line 22, in <module>
    from .auto import (
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\peft\auto.py", line 32, in <module>
    from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\peft\mapping.py", line 25, in <module>
    from .mixed_model import PeftMixedModel
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\peft\mixed_model.py", line 29, in <module>
    from .peft_model import PeftModel
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\peft\peft_model.py", line 37, in <module>
    from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel
ImportError: cannot import name 'EncoderDecoderCache' from 'transformers' (O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\transformers\__init__.py)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\utils\import_utils.py", line 868, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\models\unets\__init__.py", line 6, in <module>
    from .unet_2d import UNet2DModel
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\models\unets\unet_2d.py", line 24, in <module>
    from .unet_2d_blocks import UNetMidBlock2D, get_down_block, get_up_block
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 36, in <module>
    from ..transformers.dual_transformer_2d import DualTransformer2DModel
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\models\transformers\__init__.py", line 6, in <module>
    from .cogvideox_transformer_3d import CogVideoXTransformer3DModel
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\models\transformers\cogvideox_transformer_3d.py", line 22, in <module>
    from ...loaders import PeftAdapterMixin
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\utils\import_utils.py", line 858, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\utils\import_utils.py", line 870, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import diffusers.loaders.peft because of the following error (look up to see its traceback):
cannot import name 'EncoderDecoderCache' from 'transformers' (O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\transformers\__init__.py)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\nodes.py", line 1993, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold\__init__.py", line 1, in <module>
    from .nodes import MarigoldDepthEstimation, MarigoldDepthEstimationVideo, ColorizeDepthmap, SaveImageOpenEXR, RemapDepth
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold\nodes.py", line 8, in <module>
    from .marigold.model.marigold_pipeline import MarigoldPipeline
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold\marigold\model\marigold_pipeline.py", line 9, in <module>
    from diffusers import (
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\utils\import_utils.py", line 859, in __getattr__
    value = getattr(module, name)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\utils\import_utils.py", line 858, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\diffusers\utils\import_utils.py", line 870, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import diffusers.models.unets.unet_2d_condition because of the following error (look up to see its traceback):
Failed to import diffusers.loaders.peft because of the following error (look up to see its traceback):
cannot import name 'EncoderDecoderCache' from 'transformers' (O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\transformers\__init__.py)

2025-02-28 19:34:52,743 - root - WARNING - Cannot import O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold module for custom nodes: Failed to import diffusers.models.unets.unet_2d_condition because of the following error (look up to see its traceback):
Failed to import diffusers.loaders.peft because of the following error (look up to see its traceback):
cannot import name 'EncoderDecoderCache' from 'transformers' (O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\transformers\__init__.py)
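The `EncoderDecoderCache` ImportError above usually means the installed `transformers` is older than what the installed `peft`/`diffusers` expect; upgrading `transformers` (and `peft`) is the usual fix. A minimal sketch for checking an installed package against a minimum version — the exact `4.42.0` floor used here is an assumption, not a confirmed upstream requirement:

```python
# Minimal sketch: check whether an installed package meets a minimum version.
# The 4.42.0 floor for transformers.EncoderDecoderCache is an assumption;
# when the check fails, `pip install -U transformers peft` is the usual fix.
from importlib.metadata import PackageNotFoundError, version

def version_tuple(v: str) -> tuple:
    """Naive numeric version key, e.g. '4.42.0' -> (4, 42, 0)."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def meets_minimum(pkg: str, minimum: str) -> bool:
    """True if `pkg` is installed at or above `minimum` (naive compare)."""
    try:
        return version_tuple(version(pkg)) >= version_tuple(minimum)
    except PackageNotFoundError:
        return False

# Usage: meets_minimum("transformers", "4.42.0")
```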
2025-02-28 19:34:57,700 - root - WARNING - Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\nodes.py", line 1993, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1016, in get_code
  File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\ComfyUI_ExtraModels\\__init__.py'

2025-02-28 19:34:57,701 - root - WARNING - Cannot import O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_ExtraModels module for custom nodes: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\ComfyUI_ExtraModels\\__init__.py'
2025-02-28 19:34:57,726 - numexpr.utils - INFO - Note: NumExpr detected 20 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
2025-02-28 19:34:57,727 - numexpr.utils - INFO - NumExpr defaulting to 8 threads.
2025-02-28 19:34:59,110 - root - WARNING - Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\nodes.py", line 1993, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1016, in get_code
  File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\ControlAltAI-Nodes\\__init__.py'

2025-02-28 19:34:59,110 - root - WARNING - Cannot import O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ControlAltAI-Nodes module for custom nodes: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\ControlAltAI-Nodes\\__init__.py'
2025-02-28 19:34:59,119 - root - WARNING - Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\nodes.py", line 1993, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1016, in get_code
  File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\deforum-x-flux-main\\__init__.py'

2025-02-28 19:34:59,119 - root - WARNING - Cannot import O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\deforum-x-flux-main module for custom nodes: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\deforum-x-flux-main\\__init__.py'
2025-02-28 19:34:59,954 - root - WARNING - Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\nodes.py", line 1993, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1016, in get_code
  File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\x-flux-comfyui\\__init__.py'

2025-02-28 19:34:59,954 - root - WARNING - Cannot import O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui module for custom nodes: [Errno 2] No such file or directory: 'O:\\秋叶comfyui\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\custom_nodes\\x-flux-comfyui\\__init__.py'
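Several of the custom node packs above fail with a missing `__init__.py`, which typically points to an incomplete download or a ZIP extracted one level too deep (note the log also loads `x-flux-comfyui-main` alongside the broken `x-flux-comfyui`). A quick shell sketch, assumed to be run from the ComfyUI root, to list the affected folders:

```shell
# List custom node folders that have no __init__.py (likely incomplete installs).
# Assumes the current directory is the ComfyUI root.
for d in custom_nodes/*/; do
  [ -f "${d}__init__.py" ] || echo "missing __init__.py: ${d}"
done
```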
2025-02-28 19:34:59,989 - root - INFO - 
Import times for custom nodes:
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\AIGODLIKE-ComfyUI-Translation
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\sdxl_prompt_styler
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\FreeU_Advanced
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUi_PromptStylers
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds (IMPORT FAILED): O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Flux-Prompt-Saver
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds (IMPORT FAILED): O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\deforum-x-flux-main
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds (IMPORT FAILED): O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds (IMPORT FAILED): O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ControlAltAI-Nodes
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds (IMPORT FAILED): O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_ExtraModels
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ControlNet-LLLite-ComfyUI
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_NetDist
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\images-grid-comfy-plugin
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Custom-Scripts
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\websocket_image_save.py
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-WD14-Tagger
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\stability-ComfyUI-nodes
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_TiledKSampler
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\PowerNoiseSuite
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\efficiency-nodes-comfyui
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\Derfuu_ComfyUI_ModdedNodes
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui-main
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_experiments
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Frame-Interpolation
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\comfyui_fk_server-main
2025-02-28 19:34:59,989 - root - INFO -    0.0 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\comfyui-model-downloader
2025-02-28 19:34:59,989 - root - INFO -    0.1 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved
2025-02-28 19:34:59,989 - root - INFO -    0.1 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Inspire-Pack
2025-02-28 19:34:59,990 - root - INFO -    0.1 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_UltimateSDUpscale
2025-02-28 19:34:59,990 - root - INFO -    0.1 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\rgthree-comfy
2025-02-28 19:34:59,990 - root - INFO -    0.1 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-KJNodes
2025-02-28 19:34:59,990 - root - INFO -    0.1 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Crystools
2025-02-28 19:34:59,990 - root - INFO -    0.2 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\comfyui_controlnet_aux
2025-02-28 19:34:59,990 - root - INFO -    0.2 seconds (IMPORT FAILED): O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Easy-Use
2025-02-28 19:34:59,990 - root - INFO -    0.2 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_IPAdapter_plus
2025-02-28 19:34:59,990 - root - INFO -    0.2 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\comfyui-workspace-manager
2025-02-28 19:34:59,990 - root - INFO -    0.3 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\Diffusers-in-ComfyUI
2025-02-28 19:34:59,990 - root - INFO -    0.4 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-VideoHelperSuite
2025-02-28 19:34:59,990 - root - INFO -    0.4 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\SeargeSDXL
2025-02-28 19:34:59,990 - root - INFO -    0.4 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager
2025-02-28 19:34:59,990 - root - INFO -    0.5 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2025-02-28 19:34:59,990 - root - INFO -    0.5 seconds (IMPORT FAILED): O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold
2025-02-28 19:34:59,990 - root - INFO -    1.1 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_FizzNodes
2025-02-28 19:34:59,990 - root - INFO -    3.7 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_Custom_Nodes_AlekPet
2025-02-28 19:34:59,990 - root - INFO -   27.1 seconds: O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack
2025-02-28 19:34:59,990 - root - INFO - 
2025-02-28 19:35:00,003 - root - INFO - Starting server

2025-02-28 19:35:00,003 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2025-02-28 19:37:32,661 - root - INFO - got prompt
2025-02-28 19:37:32,665 - root - ERROR - Failed to validate prompt for output 53:
2025-02-28 19:37:32,665 - root - ERROR - * LoadImage 48:
2025-02-28 19:37:32,665 - root - ERROR -   - Custom validation failed for node: image - Invalid image file: 1.jpg
2025-02-28 19:37:32,665 - root - ERROR - Output will be ignored
2025-02-28 19:37:32,665 - root - WARNING - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2025-02-28 19:37:41,123 - root - INFO - got prompt
2025-02-28 19:37:41,384 - root - INFO - Using xformers attention in VAE
2025-02-28 19:37:41,385 - root - INFO - Using xformers attention in VAE
2025-02-28 19:37:49,935 - root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2025-02-28 19:37:49,936 - root - INFO - model_type FLUX
2025-02-28 19:44:27,162 - root - WARNING - clip missing: ['text_projection.weight']
2025-02-28 19:44:27,660 - root - INFO - Requested to load FluxClipModel_
2025-02-28 19:44:27,660 - root - INFO - Loading 1 new model
2025-02-28 19:44:28,636 - root - INFO - loaded completely 0.0 4777.53759765625 True
2025-02-28 19:44:32,102 - root - ERROR - !!! Exception during processing !!! Allocation on device 
2025-02-28 19:44:32,141 - root - ERROR - Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui-main\nodes.py", line 225, in loadmodel
    controlnet = load_controlnet(model_name, device)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui-main\xflux\src\flux\util.py", line 275, in load_controlnet
    controlnet = ControlNetFlux(configs[name].params)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui-main\xflux\src\flux\controlnet.py", line 64, in __init__
    [
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui-main\xflux\src\flux\controlnet.py", line 65, in <listcomp>
    DoubleStreamBlock(
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui-main\xflux\src\flux\modules\layers.py", line 285, in __init__
    nn.Linear(mlp_hidden_dim, hidden_size, bias=True),
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\linear.py", line 98, in __init__
    self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_device.py", line 78, in __torch_function__
    return func(*args, **kwargs)
torch.cuda.OutOfMemoryError: Allocation on device 

2025-02-28 19:44:32,142 - root - ERROR - Got an OOM, unloading all loaded models.
2025-02-28 19:44:33,301 - root - INFO - Prompt executed in 412.16 seconds
2025-02-28 19:45:08,860 - root - INFO - got prompt
2025-02-28 19:45:51,133 - root - INFO - Requested to load Flux
2025-02-28 19:45:51,134 - root - INFO - Loading 1 new model
2025-02-28 19:45:51,896 - root - INFO - loaded partially 2773.1855590820314 2773.0927734375 0
2025-02-28 19:45:57,207 - root - ERROR - !!! Exception during processing !!! Allocation on device 
2025-02-28 19:45:57,234 - root - ERROR - Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui-main\nodes.py", line 397, in sampling
    inmodel.diffusion_model.to(device)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1173, in to
    return self._apply(convert)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
    param_applied = fn(param)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
    return t.to(
torch.cuda.OutOfMemoryError: Allocation on device 

2025-02-28 19:45:57,234 - root - ERROR - Got an OOM, unloading all loaded models.
2025-02-28 19:45:58,500 - root - INFO - Prompt executed in 49.63 seconds
2025-02-28 19:46:21,935 - root - INFO - got prompt
2025-02-28 19:46:21,962 - root - INFO - Requested to load Flux
2025-02-28 19:46:21,962 - root - INFO - Loading 1 new model
2025-02-28 19:46:22,538 - root - INFO - loaded partially 2773.1855590820314 2773.0927734375 0
2025-02-28 19:46:22,915 - root - ERROR - !!! Exception during processing !!! Allocation on device 
2025-02-28 19:46:22,916 - root - ERROR - Traceback (most recent call last):
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\x-flux-comfyui-main\nodes.py", line 397, in sampling
    inmodel.diffusion_model.to(device)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1173, in to
    return self._apply(convert)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
    param_applied = fn(param)
  File "O:\秋叶comfyui\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1159, in convert
    return t.to(
torch.cuda.OutOfMemoryError: Allocation on device 

2025-02-28 19:46:22,916 - root - ERROR - Got an OOM, unloading all loaded models.
2025-02-28 19:46:24,006 - root - INFO - Prompt executed in 2.06 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

```json
{"last_node_id":53,"last_link_id":75,"nodes":[{"id":41,"type":"UNETLoader","pos":{"0":-840,"1":-360},"size":{"0":315,"1":82},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[61,66],"slot_index":0,"shape":3,"label":"MODEL"}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["flux1-FP8-dev.safetensors","fp8_e4m3fn"]},{"id":42,"type":"DualCLIPLoader","pos":{"0":-840,"1":-180},"size":{"0":315,"1":106},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[58,72],"slot_index":0,"shape":3,"label":"CLIP"}],"properties":{"Node name for S&R":"DualCLIPLoader"},"widgets_values":["t5xxl_fp8_e4m3fn.safetensors","clip_l.safetensors","flux"]},{"id":36,"type":"CLIPTextEncodeFlux","pos":{"0":-120,"1":-480},"size":{"0":420,"1":300},"flags":{},"order":7,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":58,"label":"clip"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[67],"slot_index":0,"shape":3,"label":"CONDITIONING"}],"properties":{"Node name for S&R":"CLIPTextEncodeFlux"},"widgets_values":["a young woman smiling while speaking onstage from segmind, white background with corporate logos blurred out, tech conference","a young woman smiling while speaking onstage from segmind, white background with corporate logos blurred out, tech conference",3.5,true,true],"color":"#232","bgcolor":"#353"},{"id":51,"type":"CLIPTextEncode","pos":{"0":-480,"1":-300},"size":{"0":400,"1":200},"flags":{"collapsed":true},"order":8,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":72,"label":"clip"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[71],"shape":3,"label":"CONDITIONING"}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["",true]},{"id":47,"type":"ApplyFluxControlNet","pos":{"0":-480,"1":120},"size":{"0":393,"1":98},"flags":{},"order":10,"mode":0,"inputs":[{"name":"controlnet","type":"FluxControlNet","link":65,"label":"controlnet"},{"name":"image","type":"IMAGE","link":74,"label":"image"},{"name":"controlnet_condition","type":"ControlNetCondition","link":null,"label":"controlnet_condition"}],"outputs":[{"name":"controlnet_condition","type":"ControlNetCondition","links":[70],"slot_index":0,"shape":3,"label":"controlnet_condition"}],"properties":{"Node name for S&R":"ApplyFluxControlNet"},"widgets_values":[0.7000000000000001]},{"id":43,"type":"LoraLoaderModelOnly","pos":{"0":-480,"1":-480},"size":{"0":315,"1":82},"flags":{},"order":6,"mode":4,"inputs":[{"name":"model","type":"MODEL","link":61,"label":"model"}],"outputs":[{"name":"MODEL","type":"MODEL","links":[],"slot_index":0,"shape":3,"label":"MODEL"}],"properties":{"Node name for S&R":"LoraLoaderModelOnly"},"widgets_values":["flux_XLabs_lora\\realism_lora_comfy_converted.safetensors",1]},{"id":40,"type":"VAELoader","pos":{"0":-840,"1":-480},"size":{"0":315,"1":58},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[57],"slot_index":0,"shape":3,"label":"VAE"}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["FLUX.1-vae.sft"]},{"id":50,"type":"EmptyLatentImage","pos":{"0":60,"1":540},"size":{"0":315,"1":106},"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[68],"shape":3,"label":"LATENT"}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[512,768,1]},{"id":52,"type":"CannyEdgePreprocessor","pos":{"0":-480,"1":300},"size":{"0":315,"1":106},"flags":{},"order":9,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":73,"label":"image"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[74],"slot_index":0,"shape":3,"label":"IMAGE"}],"properties":{"Node name for S&R":"CannyEdgePreprocessor"},"widgets_values":[100,200,1024]},{"id":8,"type":"VAEDecode","pos":{"0":360,"1":-480},"size":{"0":140,"1":46},"flags":{},"order":12,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":69,"label":"samples"},{"name":"vae","type":"VAE","link":57,"label":"vae"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[75],"slot_index":0,"label":"IMAGE"}],"properties":{"Node name for S&R":"VAEDecode"},"color":"#2a363b","bgcolor":"#3f5159"},{"id":53,"type":"PreviewImage","pos":{"0":540,"1":-480},"size":{"0":540,"1":720},"flags":{},"order":13,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":75,"label":"images"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"color":"#232","bgcolor":"#353"},{"id":48,"type":"LoadImage","pos":{"0":-840,"1":300},"size":[300,314],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[73],"slot_index":0,"shape":3,"label":"IMAGE"},{"name":"MASK","type":"MASK","links":null,"shape":3,"label":"MASK"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["净胜.jpg","image"]},{"id":46,"type":"LoadFluxControlNet","pos":{"0":-840,"1":120},"size":{"0":315,"1":82},"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"ControlNet","type":"FluxControlNet","links":[65],"slot_index":0,"shape":3,"label":"ControlNet"}],"properties":{"Node name for S&R":"LoadFluxControlNet"},"widgets_values":["flux-dev-fp8","flux-canny-controlnet-v3.safetensors"]},{"id":49,"type":"XlabsSampler","pos":{"0":60,"1":60},"size":{"0":360,"1":420},"flags":{},"order":11,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":66,"label":"model"},{"name":"conditioning","type":"CONDITIONING","link":67,"label":"conditioning"},{"name":"neg_conditioning","type":"CONDITIONING","link":71,"label":"neg_conditioning"},{"name":"latent_image","type":"LATENT","link":68,"label":"latent_image"},{"name":"controlnet_condition","type":"ControlNetCondition","link":70,"label":"controlnet_condition"}],"outputs":[{"name":"latent","type":"LATENT","links":[69],"slot_index":0,"shape":3,"label":"latent"}],"properties":{"Node name for S&R":"XlabsSampler"},"widgets_values":[799681971949447,"randomize",20,0,3.5,0.1,1]}],"links":[[57,40,0,8,1,"VAE"],[58,42,0,36,0,"CLIP"],[61,41,0,43,0,"MODEL"],[65,46,0,47,0,"FluxControlNet"],[66,41,0,49,0,"MODEL"],[67,36,0,49,1,"CONDITIONING"],[68,50,0,49,3,"LATENT"],[69,49,0,8,0,"LATENT"],[70,47,0,49,4,"ControlNetCondition"],[71,51,0,49,2,"CONDITIONING"],[72,42,0,51,0,"CLIP"],[73,48,0,52,0,"IMAGE"],[74,52,0,47,1,"IMAGE"],[75,8,0,53,0,"IMAGE"]],"groups":[{"title":"Sampler","bounding":[18,-45,482,715],"color":"#3f789e","font_size":24,"flags":{}},{"title":"Controlnet","bounding":[-970,-48,975,718],"color":"#3f789e","font_size":24,"flags":{}},{"title":"Flux-Controlnet","bounding":[-972,-567,1472,512],"color":"#3f789e","font_size":24,"flags":{}}],"config":{},"extra":{"ds":{"scale":1.771561000000001,"offset":{"0":639.827392578125,"1":161.00502014160156}},"workspace_info":{"id":"cF14JHNEa0QU8y_o2bgEY","saveLock":false,"cloudID":null,"coverMediaPath":null},"0246.VERSION":[0,0,4]},"version":0.4}
```

Additional Context

(Please add any additional context or steps to reproduce the error here)

fredyuntian, Feb 28 '25 12:02