TorchCompileModelWanVideo - Failed to compile model
Your question
got prompt
!!! Exception during processing !!! Failed to compile model
Traceback (most recent call last):
File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes\nodes\model_optimization_nodes.py", line 497, in patch
compiled_model = torch.compile(diffusion_model, fullgraph=fullgraph, dynamic=dynamic, backend=backend, mode=mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\lightning_fabric\wrappers.py", line 409, in capture
compiled_model = compile_fn(model, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch_init.py", line 1891, in compile
return torch.dynamo.optimize(backend=backend, nopython=fullgraph, dynamic=dynamic, disable=disable)(model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch_dynamo\eval_frame.py", line 681, in optimize
compiler_config=backend.get_compiler_config()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch_init.py", line 1732, in get_compiler_config
from torch.inductor.compile_fx import get_patched_config_dict
File "F:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch_inductor\compile_fx.py", line 57, in
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 345, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 220, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 181, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes\nodes\model_optimization_nodes.py", line 508, in patch
    raise RuntimeError("Failed to compile model")
RuntimeError: Failed to compile model
Prompt executed in 0.03 seconds

The same error occurs on every run.
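A quick way to narrow this down is to check whether torch.compile works at all outside ComfyUI. A minimal sketch (the script name and the tiny Linear model are just placeholders, not part of ComfyUI); run it with the same embedded interpreter, e.g. `python_embeded\python.exe check_compile.py`:

```python
# check_compile.py -- hypothetical helper, not shipped with ComfyUI.
# If this fails, the problem is in the torch/triton install itself,
# not in the TorchCompileModelWanVideo node.
import torch

model = torch.nn.Linear(8, 8).cuda()
x = torch.randn(4, 8, device="cuda")

# "inductor" is torch.compile's default backend; the fullgraph/dynamic flags
# mirror what the node passes through, but the exact values are arbitrary here.
compiled = torch.compile(model, fullgraph=False, dynamic=False, backend="inductor")
print(compiled(x).shape)
```

If this tiny script raises the same kind of import error from torch._inductor, the fix is on the PyTorch/triton side rather than in the node, which matches the suggestions in the replies below.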
Which version of PyTorch do you have? Could you try using the nightly version?
I fixed this by updating torch, as Lecho303 said.
Which version of PyTorch do you have? Could you try using the nightly version?
Wow, wonderful. I fixed it by installing a new ComfyUI. In my old ComfyUI, PyTorch was 2.3; in the new ComfyUI, PyTorch is version 2.6.0+cu126. The error "TorchCompileModelWanVideo - Failed to compile model" no longer appears.
@TheWingAg90 glad to hear :). Just FYI, if you try out the nightly version (it'll be something like 2.8.0 rather than the 2.6.0 you have), you might get even more speedups (especially for GGUF models), and we are also investigating more potential performance gains.
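Either way, a quick way to confirm which build the portable install is actually running after an update (a sketch; run it with python_embeded\python.exe rather than any system Python):

```python
import sys
import torch

print("python:", sys.version)
print("torch:", torch.__version__)            # e.g. 2.6.0+cu126, or a 2.8.0 nightly
print("built for CUDA:", torch.version.cuda)  # CUDA toolkit the wheel was built against
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```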
Thanks. It's 12:41 here in my country; I'll try it tomorrow. Good night :D, have a nice day.
Does Torch 2.8 work with CUDA 12.6?
I tried updating to 2.6 nightly and it gave me a SM89 error instead. Same with 2.8. I am using CUDA 12.6 btw. I rolled back to 2.6 stable for now but I no longer have Model Compile functionality.
Does Torch 2.8 work with CUDA 12.6?
It should.
I tried updating to 2.6 nightly and it gave me a SM89 error
Could you share the error?
I rolled back to 2.6 stable for now but I no longer have Model Compile functionality
By "no longer have Model Compile functionality", do you mean the node is gone, or it errors in some way. If it errors, could you share the error as well?
Does Torch 2.8 work with CUDA 12.6?
In my case:
pytorch version: 2.6.0+cu126
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : native
Using sage attention
Python version: 3.12.9 (tags/v3.12.9:fdb8142, Feb 4 2025, 15:27:58) [MSC v.1942 64 bit (AMD64)]
ComfyUI version: 0.3.28
And I think the "SM89 error" appears when you choose "..fp8_cuda"; you can choose fp16 cuda or fp16 triton instead.
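For context: the SageAttention fp8 ("SM89") kernels need compute capability 8.9 (RTX 40xx cards); a 3090 is 8.6, which is why the fp16 cuda / fp16 triton modes are the ones to use there. A small sketch to check what your card reports (assumes a single CUDA device):

```python
import torch

major, minor = torch.cuda.get_device_capability(0)
print(torch.cuda.get_device_name(0), f"-> compute capability {major}.{minor}")
# 8.9 (e.g. RTX 4080/4090) can use the fp8_cuda SageAttention path;
# 8.6 (e.g. RTX 3090) should stick to fp16 cuda or fp16 triton.
```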
Does Torch 2.8 work with CUDA 12.6?
It should.
I tried updating to 2.6 nightly and it gave me a SM89 error
Could you share the error?
I rolled back to 2.6 stable for now but I no longer have Model Compile functionality
By "no longer have Model Compile functionality", do you mean the node is gone, or it errors in some way. If it errors, could you share the error as well?
No. The node is there but it gives the compile error same as OP states if enabled.
I had to reinstall 2.8 nightly just so I could copy and paste the error.
!!! Exception during processing !!! SM89 kernel is not available. Make sure you GPUs with compute capability 8.9.
Traceback (most recent call last):
File "e:\ComfyUI_windows_portable\ComfyUI\execution.py", line 345, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\execution.py", line 220, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in _map_node_over_list
process_inputs(input_dict, i)
File "e:\ComfyUI_windows_portable\ComfyUI\execution.py", line 181, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 494, in sample
samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 50, in sample_custom
samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 1023, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 1008, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 111, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 976, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 959, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 111, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 738, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 868, in sample_unipc
x = uni_pc.sample(noise, timesteps=timesteps, skip_type="time_uniform", method="multistep", order=order, lower_order_final=True, callback=callback, disable_pbar=disable)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 715, in sample
model_prev_list = [self.model_fn(x, vec_t)]
^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 410, in model_fn
return self.data_prediction_fn(x, t)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 394, in data_prediction_fn
noise = self.noise_prediction_fn(x, t)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 388, in noise_prediction_fn
return self.model(x, t)
^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 329, in model_fn
return noise_pred_fn(x, t_continuous)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 297, in noise_pred_fn
output = model(x, t_input, **model_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\ComfyUI_windows_portable\ComfyUI\comfy\extra_samplers\uni_pc.py", line 859, in
Prompt executed in 61.12 seconds
@deadman3000 can you try what @TheWingAg90 suggested?
And i thinks "SM89 error" when u chose "..fp8_cuda". u can chose fp16 cuda or fp16 triton
Also, do you see this error if you disable the TorchCompile node? (You likely need to restart comfyui as well).
I cannot keep uninstalling/reinstalling nightlies to check. I have given up on Compile Model for now and am running 2.6 stable without Compile Model enabled until someone figures this out.
Do you use a workflow in ComfyUI? Send me the workflow and an image of the error node. What's your GPU?
4080 Super. Workflow is based upon this one. https://civitai.com/models/1385056?modelVersionId=1565128
After another 'update all' in Comfy, repairing opencv-headless in the LayerStyle node, and fixing a couple of other custom node path issues, everything is working again, and I get no errors other than Florence2 complaining about transformers like it always does (which does not prevent it from running). Also no more errors at ComfyUI startup, at least for now.
As for the SM89 error, that is related to the nightly, and I am unwilling to 'update' again to either the 2.6 nightly or 2.8 unless I have to. Sure, some additional generation speed would be nice, but I prefer stability.
I got the same error!
"TorchCompileModelWanVideo Failed to compile model"
I updated PyTorch and other stuff like Python and CUDA, but it doesn't seem to fix my issue.
Python version: 3.12.10
pytorch version: 2.7.0+cu128
CUDA version: 12.8.0 (also tried 12.9.0, no difference in the error)
GPU: Nvidia 4080
OS: Windows 10
!!! Exception during processing !!! Failed to compile model
Traceback (most recent call last):
File "B:\ComfyUi_Works\ComfyUI\custom_nodes\comfyui-kjnodes\nodes\model_optimization_nodes.py", line 505, in patch
compiled_block = torch.compile(block, fullgraph=fullgraph, dynamic=dynamic, backend=backend, mode=mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "B:\ComfyUi_Works\python_embeded\Lib\site-packages\torch_init_.py", line 2572, in compile
return torch._dynamo.optimize(
^^^^^^^^^^^^^^^^^^^^^^^
File "B:\ComfyUi_Works\python_embeded\Lib\site-packages\torch_dynamo\eval_frame.py", line 944, in optimize
return _optimize(rebuild_ctx, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "B:\ComfyUi_Works\python_embeded\Lib\site-packages\torch_dynamo\eval_frame.py", line 1019, in optimize
backend.get_compiler_config()
File "B:\ComfyUi_Works\python_embeded\Lib\site-packages\torch_init.py", line 2350, in get_compiler_config
from torch._inductor.compile_fx import get_patched_config_dict
File "B:\ComfyUi_Works\python_embeded\Lib\site-packages\torch_inductor\compile_fx.py", line 64, in
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File "B:\ComfyUi_Works\ComfyUI\execution.py", line 347, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "B:\ComfyUi_Works\ComfyUI\execution.py", line 222, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "B:\ComfyUi_Works\ComfyUI\execution.py", line 194, in _map_node_over_list process_inputs(input_dict, i) File "B:\ComfyUi_Works\ComfyUI\execution.py", line 183, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "B:\ComfyUi_Works\ComfyUI\custom_nodes\comfyui-kjnodes\nodes\model_optimization_nodes.py", line 519, in patch raise RuntimeError("Failed to compile model") RuntimeError: Failed to compile model
I've been on this for 3 days and I still couldn't fix it after all the GitHub suggestions and YouTube guides.
@Alicequinz looks like some triton-related error on Windows. I've heard that getting triton to work on Windows takes some work. Maybe try something like this post?
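If it is triton, the import of torch._inductor.compile_fx (where both tracebacks above cut off) is usually the first place a broken Windows triton shows up. A rough way to reproduce it outside ComfyUI, again with the embedded interpreter (assumes a CUDA device is present):

```python
import torch

try:
    import triton
    print("triton:", triton.__version__)
except Exception as exc:  # a missing or broken wheel fails right here
    print("triton import failed:", exc)

# torch.compile's inductor backend needs a working triton to build CUDA kernels,
# so this tiny compiled function should surface the same error the node hits.
@torch.compile(backend="inductor")
def f(x):
    return torch.relu(x) + 1

print(f(torch.randn(8, device="cuda")))
```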
Ah, thank you for the help! I did get it to work, but sadly there isn't any difference in generation speed. I spent too much time on this (4 days in total) and it turned out there was no change in the end. It's my first time making AI videos with ComfyUI and I've been trying to speed things up, but I can't figure out why it's not generating faster. (It does generate in at least 6-7 minutes with "run_nvidia_gpu_fast_fp16_accumulation.bat" instead of 30 minutes to 1 hour with "run_nvidia_gpu.bat".)
I have an Nvidia 4080 and 16 GB of RAM, but a weak CPU, an Intel i5-8400. I don't know if the issue is the CPU, the RAM, or the GPU, but it's so slow compared to generating images in Stable Diffusion. I don't really want to use Stable Diffusion anymore, though, because it's just for images. I really want to enjoy the magic of AI videos.
I spent too much time on this (4 days in total) and it turned out there was no change in the end.
Sorry to hear that! There are a lot of moving pieces in this field, and the community is constantly trying to make things easier and optimal, but unfortunately one does run into rabbit holes from time to time.
Have you tried using GGUF models? They have much smaller GPU RAM requirements, and we've been seeing consistent speedup from TorchCompile on these models, e.g., this post.
I did try Q3_K_M and Q4_K_M, but there isn't really a difference; I wonder if it's just my computer or something else I don't understand. Overall I'm enjoying the videos it creates, but it's just so slow that you can't really do anything on the computer except stare at it for long minutes or watch some reels on your phone to pass the time. Maybe it's normal for it to be this slow, but I still hope that in the future they can make it much quicker, like they did with AI images.
This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically.
I have the same issue. Running Python 3.11.9, PyTorch 2.8.0, and a 4090. I retried updating and uninstalling torch with no luck. Any help appreciated.
ComfyUI Error Report
Error Details
- Node ID: 122
- Node Type: WanVideoSampler
- Exception Type: AssertionError
- Exception Message: SM89 kernel is not available. Make sure you GPUs with compute capability 8.9.
Stack Trace
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\execution.py", line 496, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\execution.py", line 315, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\execution.py", line 289, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\execution.py", line 277, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 3069, in process
noise_pred, self.cache_state = predict_with_cfg(
^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 2396, in predict_with_cfg
raise e
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 2297, in predict_with_cfg
noise_pred_cond, cache_state_cond = transformer(
^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 1822, in forward
x = block(x, **kwargs)
^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 749, in forward
y = self.self_attn.forward(q, k, v, seq_lens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 295, in forward
x = attention(q, k, v, k_lens=seq_lens, attention_mode=attention_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\attention.py", line 199, in attention
return sageattn_func(q, k, v, tensor_layout="NHD").contiguous()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\_dynamo\eval_frame.py", line 929, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\attention.py", line 26, in sageattn_func
return sageattn(q, k, v, attn_mask=attn_mask, dropout_p=dropout_p, is_causal=is_causal, tensor_layout=tensor_layout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\sageattention\core.py", line 148, in sageattn
return sageattn_qk_int8_pv_fp8_cuda(q, k, v, tensor_layout=tensor_layout, is_causal=is_causal, sm_scale=sm_scale, return_lse=return_lse, pv_accum_dtype="fp32+fp16")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\_dynamo\eval_frame.py", line 929, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\sageattention\core.py", line 682, in sageattn_qk_int8_pv_fp8_cuda
assert SM89_ENABLED, "SM89 kernel is not available. Make sure you GPUs with compute capability 8.9."
^^^^^^^^^^^^
System Information
- ComfyUI Version: 0.3.49
- Arguments: ComfyUI\main.py --windows-standalone-build --use-sage-attention
- OS: nt
- Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
- Embedded Python: true
- PyTorch Version: 2.8.0+cu128
Devices
- Name: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
- Type: cuda
- VRAM Total: 25756696576
- VRAM Free: 24110956544
- Torch VRAM Total: 0
- Torch VRAM Free: 0
Logs
2025-08-09T13:24:00.743634 - Adding extra search path checkpoints D:\models\checkpoints
2025-08-09T13:24:00.743634 - Adding extra search path clip D:\models\clip
2025-08-09T13:24:00.743634 - Adding extra search path clip_interrogator D:\models\clip_interrogator
2025-08-09T13:24:00.744634 - Adding extra search path clip_vision F:\StableDiffusion\Data\Models\ClipVision
2025-08-09T13:24:00.744634 - Adding extra search path configs D:\models\configs
2025-08-09T13:24:00.744634 - Adding extra search path controlnet F:\StableDiffusion\Data\Models\ControlNet
2025-08-09T13:24:00.744634 - Adding extra search path diffusers D:\models\diffusers
2025-08-09T13:24:00.744634 - Adding extra search path diffusion_models F:\StableDiffusion\Data\Models\DiffusionModels
2025-08-09T13:24:00.744634 - Adding extra search path embeddings F:\StableDiffusion\Data\Models\Embeddings
2025-08-09T13:24:00.744634 - Adding extra search path gligen D:\models\gligen
2025-08-09T13:24:00.744634 - Adding extra search path hypernetworks D:\models\hypernetworks
2025-08-09T13:24:00.744634 - Adding extra search path LLM D:\models\LLM
2025-08-09T13:24:00.744634 - Adding extra search path llm_gguf D:\models\llm_gguf
2025-08-09T13:24:00.744634 - Adding extra search path loras F:\StableDiffusion\Data\Models\Lora
2025-08-09T13:24:00.744634 - Adding extra search path photomaker D:\models\photomaker
2025-08-09T13:24:00.744634 - Adding extra search path style_models F:\StableDiffusion\Data\Packages\ComfyUI\models\style_models
2025-08-09T13:24:00.744634 - Adding extra search path unet F:\StableDiffusion\Data\Packages\ComfyUI\models\unet
2025-08-09T13:24:00.744634 - Adding extra search path upscale_models F:\StableDiffusion\Data\Packages\ComfyUI\models\upscale_models
2025-08-09T13:24:00.744634 - Adding extra search path vae F:\StableDiffusion\Data\Models\VAE
2025-08-09T13:24:00.744634 - Adding extra search path vae_approx F:\StableDiffusion\Data\Models\ApproxVAE
2025-08-09T13:24:00.744634 - Adding extra search path text_encoders F:\StableDiffusion\Data\Models\TextEncoders
2025-08-09T13:24:00.744634 - Adding extra search path ESRGAN F:\StableDiffusion\Data\Models\ESRGAN
2025-08-09T13:24:00.744634 - Adding extra search path RealESRGAN F:\StableDiffusion\Data\Models\RealESRGAN
2025-08-09T13:24:01.507388 - [START] Security scan
2025-08-09T13:24:04.123783 - [DONE] Security scan
2025-08-09T13:24:04.225105 - ## ComfyUI-Manager: installing dependencies done.
2025-08-09T13:24:04.225105 - ** ComfyUI startup time: 2025-08-09 13:24:04.225
2025-08-09T13:24:04.226125 - ** Platform: Windows
2025-08-09T13:24:04.226125 - ** Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
2025-08-09T13:24:04.227125 - ** Python executable: F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\python.exe
2025-08-09T13:24:04.227125 - ** ComfyUI Path: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI
2025-08-09T13:24:04.227125 - ** ComfyUI Base Folder Path: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI
2025-08-09T13:24:04.228125 - ** User directory: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\user
2025-08-09T13:24:04.228125 - ** ComfyUI-Manager config path: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\user\default\ComfyUI-Manager\config.ini
2025-08-09T13:24:04.228125 - ** Log path: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\user\comfyui.log
2025-08-09T13:24:05.303449 -
Prestartup times for custom nodes:
2025-08-09T13:24:05.303449 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\rgthree-comfy
2025-08-09T13:24:05.303449 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2025-08-09T13:24:05.304450 - 4.5 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-manager
2025-08-09T13:24:05.304450 -
2025-08-09T13:24:10.904610 - Checkpoint files will always be loaded safely.
2025-08-09T13:24:11.036467 - Total VRAM 24564 MB, total RAM 130756 MB
2025-08-09T13:24:11.036467 - pytorch version: 2.8.0+cu128
2025-08-09T13:24:11.036467 - Set vram state to: NORMAL_VRAM
2025-08-09T13:24:11.037470 - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
2025-08-09T13:24:15.368103 - Using sage attention
2025-08-09T13:24:28.554373 - Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
2025-08-09T13:24:28.554373 - ComfyUI version: 0.3.49
2025-08-09T13:24:28.645418 - ComfyUI frontend version: 1.24.4
2025-08-09T13:24:28.648420 - [Prompt Server] web root: F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\comfyui_frontend_package\static
2025-08-09T13:24:31.990532 - [Crystools INFO] Crystools version: 1.26.8
2025-08-09T13:24:32.025001 - [Crystools INFO] Platform release: 10
2025-08-09T13:24:32.025001 - [Crystools INFO] JETSON: Not detected.
2025-08-09T13:24:32.025528 - [Crystools INFO] CPU: Intel(R) Core(TM) i9-14900K - Arch: AMD64 - OS: Windows 10
2025-08-09T13:24:32.044101 - [Crystools INFO] pynvml (NVIDIA) initialized.
2025-08-09T13:24:32.044620 - [Crystools INFO] GPU/s:
2025-08-09T13:24:32.054633 - [Crystools INFO] 0) NVIDIA GeForce RTX 4090
2025-08-09T13:24:32.055627 - [Crystools INFO] NVIDIA Driver: 577.00
2025-08-09T13:24:34.964393 - [ComfyUI-Easy-Use] server: v1.3.2 Loaded
2025-08-09T13:24:34.964393 - [ComfyUI-Easy-Use] web root: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-Easy-Use\web_version/v2 Loaded
2025-08-09T13:24:35.482780 - ComfyUI-GGUF: Allowing full torch compile
2025-08-09T13:24:39.264351 - ### Loading: ComfyUI-Manager (V3.35)
2025-08-09T13:24:39.264351 - [ComfyUI-Manager] network_mode: public
2025-08-09T13:24:39.386810 - ### ComfyUI Version: v0.3.49-8-gbf2a1b5b | Released on '2025-08-07'
2025-08-09T13:24:40.311145 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-08-09T13:24:40.371682 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-08-09T13:24:40.390037 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-08-09T13:24:40.459116 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-08-09T13:24:40.524315 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-08-09T13:24:44.190107 - ------------------------------------------
2025-08-09T13:24:44.191125 - Comfyroll Studio v1.76 : 175 Nodes Loaded
2025-08-09T13:24:44.192121 - ------------------------------------------
2025-08-09T13:24:44.192121 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
2025-08-09T13:24:44.192121 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
2025-08-09T13:24:44.192121 - ------------------------------------------
2025-08-09T13:24:44.244997 - [F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using ckpts path: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
2025-08-09T13:24:44.245997 - [F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using symlinks: False
2025-08-09T13:24:44.246997 - [F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2025-08-09T13:24:44.296195 - DWPose: Onnxruntime with acceleration providers detected
2025-08-09T13:24:44.641975 - Initializing ControlAltAI Nodes
2025-08-09T13:24:44.892971 - FETCH ComfyRegistry Data: 5/93
2025-08-09T13:24:44.962668 - [rgthree-comfy] Loaded 48 exciting nodes.
2025-08-09T13:24:45.003777 - Traceback (most recent call last):
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\nodes.py", line 2129, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\was-node-suite-comfyui\__init__.py", line 1, in <module>
from .WAS_Node_Suite import NODE_CLASS_MAPPINGS
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 37, in <module>
from numba import jit
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\numba\__init__.py", line 59, in <module>
_ensure_critical_deps()
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\numba\__init__.py", line 45, in _ensure_critical_deps
raise ImportError(msg)
ImportError: Numba needs NumPy 2.2 or less. Got NumPy 2.3.
2025-08-09T13:24:45.003777 - Cannot import F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\was-node-suite-comfyui module for custom nodes: Numba needs NumPy 2.2 or less. Got NumPy 2.3.
2025-08-09T13:24:45.005503 -
Import times for custom nodes:
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\websocket_image_save.py
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\canvas_tab
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-inpaint-cropandstitch
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\janus-pro
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI_AdvancedRefluxControl
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\teacache
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-GGUF
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-seamless-tiling
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-omnigen
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-kjnodes
2025-08-09T13:24:45.005503 - 0.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\controlaltai-nodes
2025-08-09T13:24:45.005503 - 0.0 seconds (IMPORT FAILED): F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\was-node-suite-comfyui
2025-08-09T13:24:45.005503 - 0.1 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2025-08-09T13:24:45.005503 - 0.1 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-LTXVideo
2025-08-09T13:24:45.006506 - 0.1 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\kaytool
2025-08-09T13:24:45.006506 - 0.1 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI_Sonic
2025-08-09T13:24:45.006506 - 0.1 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui_controlnet_aux
2025-08-09T13:24:45.006506 - 0.2 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI_Searge_LLM
2025-08-09T13:24:45.006506 - 0.2 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\rgthree-comfy
2025-08-09T13:24:45.006506 - 0.3 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-ToSVG
2025-08-09T13:24:45.006506 - 0.4 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-advancedliveportrait
2025-08-09T13:24:45.006506 - 0.5 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-itools
2025-08-09T13:24:45.006506 - 0.5 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-Florence2
2025-08-09T13:24:45.006506 - 0.7 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-videohelpersuite
2025-08-09T13:24:45.006506 - 1.0 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-manager
2025-08-09T13:24:45.006506 - 1.1 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-Crystools
2025-08-09T13:24:45.006506 - 1.1 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-ollama
2025-08-09T13:24:45.006506 - 1.4 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-inspyrenet-rembg
2025-08-09T13:24:45.006506 - 1.7 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper
2025-08-09T13:24:45.006506 - 1.7 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\comfyui-kokoro
2025-08-09T13:24:45.006506 - 2.9 seconds: F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2025-08-09T13:24:45.006506 -
2025-08-09T13:24:45.415248 - Context impl SQLiteImpl.
2025-08-09T13:24:45.415248 - Will assume non-transactional DDL.
2025-08-09T13:24:45.417348 - No target revision found.
2025-08-09T13:24:45.437791 - Starting server
2025-08-09T13:24:45.437791 - To see the GUI go to: http://127.0.0.1:8188
2025-08-09T13:24:46.440041 - Feature flags negotiated for client 8a570630ef204386aa833d0ae937694d: {'supports_preview_metadata': True}
2025-08-09T13:24:48.257633 - got prompt
2025-08-09T13:24:48.813978 - FETCH ComfyRegistry Data: 10/93
2025-08-09T13:24:52.349529 - Detected model in_channels: 48
2025-08-09T13:24:52.349529 - Model type: t2v, num_heads: 24, num_layers: 30
2025-08-09T13:24:52.349529 - 5B model detected, no Teacache or MagCache coefficients available, consider using EasyCache for this model
2025-08-09T13:24:52.350531 - Model variant detected: 14B
2025-08-09T13:24:52.434508 - model_type FLOW
2025-08-09T13:24:52.434508 - Using accelerate to load and assign model weights to device...
2025-08-09T13:24:52.436507 -
Loading transformer parameters to cpu: 0%| | 0/825 [00:00<?, ?it/s]2025-08-09T13:24:52.463454 -
Loading transformer parameters to cpu: 100%|██████████████████████████████████████| 825/825 [00:00<00:00, 30615.90it/s]
2025-08-09T13:24:52.467453 - Moving diffusion model from cuda:0 to cpu
2025-08-09T13:24:53.552741 - FETCH ComfyRegistry Data: 15/93
2025-08-09T13:24:53.558736 - Moving video model to cpu
2025-08-09T13:24:57.631869 - FETCH ComfyRegistry Data: 20/93
2025-08-09T13:25:00.091005 -
T5Encoder:  71%|███████████████████████████████████████████████████                     | 17/24 [00:00<00:00, 33.01it/s]
T5Encoder: 100%|████████████████████████████████████████████████████████████████████████| 24/24 [00:00<00:00, 28.92it/s]
2025-08-09T13:25:00.289146 -
T5Encoder: 0%| | 0/24 [00:00<?, ?it/s]2025-08-09T13:25:00.361160 -
T5Encoder: 100%|███████████████████████████████████████████████████████████████████████| 24/24 [00:00<00:00, 333.26it/s]
2025-08-09T13:25:02.180322 - FETCH ComfyRegistry Data: 25/93
2025-08-09T13:25:03.697673 - sigmas: tensor([1.0000, 0.9964, 0.9925, 0.9884, 0.9841, 0.9794, 0.9743, 0.9689, 0.9631, 0.9568, 0.9500, 0.9425, 0.9344, 0.9255, 0.9156, 0.9047, 0.8926, 0.8790, 0.8636, 0.8461, 0.8261, 0.8028, 0.7755, 0.7430, 0.7037, 0.6551, 0.5937, 0.5135, 0.4042, 0.2467, 0.0000])
2025-08-09T13:25:03.700678 - timesteps: tensor([999, 996, 992, 988, 984, 979, 974, 968, 963, 956, 949, 942, 934, 925, 915, 904, 892, 878, 863, 846, 826, 802, 775, 742, 703, 655, 593, 513, 404, 246], device='cuda:0')
2025-08-09T13:25:07.278850 - FETCH ComfyRegistry Data: 30/93
2025-08-09T13:25:11.171898 - FETCH ComfyRegistry Data: 35/93
2025-08-09T13:25:15.771994 - Seq len: 27280
2025-08-09T13:25:15.781593 - Sampling 121 frames at 1280x704 with 30 steps
2025-08-09T13:25:16.148048 -
0%|          | 0/30 [00:00<?, ?it/s]
2025-08-09T13:25:16.555465 - FETCH ComfyRegistry Data: 40/93
2025-08-09T13:25:16.755363 - Error during model prediction: SM89 kernel is not available. Make sure you GPUs with compute capability 8.9.
2025-08-09T13:25:19.659468 -
0%| | 0/30 [00:03<?, ?it/s]2025-08-09T13:25:19.660469 -
2025-08-09T13:25:19.768703 - !!! Exception during processing !!! SM89 kernel is not available. Make sure you GPUs with compute capability 8.9.
2025-08-09T13:25:19.878905 - Traceback (most recent call last):
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\execution.py", line 496, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\execution.py", line 315, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\execution.py", line 289, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\execution.py", line 277, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 3069, in process
noise_pred, self.cache_state = predict_with_cfg(
^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 2396, in predict_with_cfg
raise e
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\nodes.py", line 2297, in predict_with_cfg
noise_pred_cond, cache_state_cond = transformer(
^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 1822, in forward
x = block(x, **kwargs)
^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 749, in forward
y = self.self_attn.forward(q, k, v, seq_lens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\model.py", line 295, in forward
x = attention(q, k, v, k_lens=seq_lens, attention_mode=attention_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\attention.py", line 199, in attention
return sageattn_func(q, k, v, tensor_layout="NHD").contiguous()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\_dynamo\eval_frame.py", line 929, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\ComfyUI\custom_nodes\ComfyUI-WanVideoWrapper\wanvideo\modules\attention.py", line 26, in sageattn_func
return sageattn(q, k, v, attn_mask=attn_mask, dropout_p=dropout_p, is_causal=is_causal, tensor_layout=tensor_layout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\sageattention\core.py", line 148, in sageattn
return sageattn_qk_int8_pv_fp8_cuda(q, k, v, tensor_layout=tensor_layout, is_causal=is_causal, sm_scale=sm_scale, return_lse=return_lse, pv_accum_dtype="fp32+fp16")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\torch\_dynamo\eval_frame.py", line 929, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUISage\ComfyUI-Easy-Install\python_embeded\Lib\site-packages\sageattention\core.py", line 682, in sageattn_qk_int8_pv_fp8_cuda
assert SM89_ENABLED, "SM89 kernel is not available. Make sure you GPUs with compute capability 8.9."
^^^^^^^^^^^^
AssertionError: SM89 kernel is not available. Make sure you GPUs with compute capability 8.9.
2025-08-09T13:25:19.884905 - Prompt executed in 31.62 seconds
Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Workflow too large. Please manually upload the workflow from local file system.
Additional Context
(Please add any additional context or steps to reproduce the error here)
@vertstream looks like a sage attention issue, and it's strange because your 4090 clearly has compute capability 8.9. Maybe create an issue in https://github.com/thu-ml/SageAttention?
Meanwhile you can avoid this by disabling sage attention in your workflow.
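A small diagnostic sketch along those lines (it assumes sageattention exposes the SM89_ENABLED flag that the assertion in the traceback references; adjust the attribute name if your build differs):

```python
import torch
import sageattention.core as sa_core

print("GPU:", torch.cuda.get_device_name(0))
print("compute capability:", torch.cuda.get_device_capability(0))  # a 4090 should report (8, 9)
# False here would mean the installed SageAttention wheel was built without the
# fp8/SM89 kernels (e.g. against a mismatched torch/CUDA), even though the GPU supports them.
print("SM89 kernel built:", getattr(sa_core, "SM89_ENABLED", "not found"))
```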
Thanks, confirmed. Yes, it did work without sage. Did a fresh Comfy install and downgraded to 2.7.1, and now it's running. Thanks.
Hi, how did you solve it? I'm on Linux with an RTX 3060 and get the same error.
I was using sageattention 2.2.0 with triton 3.0.0 and got the error "SM89 kernel is not available. Make sure you GPUs with compute capability 8.9." After switching sageattention to 2.1.1, the problem was solved.
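For anyone trying the same fix, a quick way to confirm which sageattention/triton/torch builds the embedded Python is actually loading before and after swapping versions (package names as published on PyPI are assumed; on Windows, triton may be distributed under a different package name):

```python
from importlib.metadata import version, PackageNotFoundError

for pkg in ("sageattention", "triton", "torch"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed in this environment")
```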