
"ValueError: not enough values to unpack (expected 5, got 4)" when using lighttaew2_1

Open MrSeri0us opened this issue 2 months ago • 4 comments

Custom Node Testing

Your question

Hi, I'm trying to get animated sampler previews with WAN 2.2 using lighttaew2_1.safetensors but I'm getting ValueError: not enough values to unpack (expected 5, got 4)

I made a clean install of ComfyUI and created a basic workflow. Static previews are shown during generation with no error, but as soon as ComfyUI-VideoHelperSuite is installed, the error appears.

My questions are:

Am I supposed to be able to get animated previews with lighttaew2_1.safetensors without using ComfyUI-VideoHelperSuite? Am I doing something wrong?

Thanks in advance


E:\tmp\ComfyUI>python main.py --windows-standalone-build --fast fp16_accumulation --disable-auto-launch --disable-smart-memory --use-sage-attention --preview-method taesd --disable-api-nodes

[START] Security scan
[DONE] Security scan

ComfyUI-Manager: installing dependencies done.

** ComfyUI startup time: 2025-12-03 11:28:28.329
** Platform: Windows
** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
** Python executable: C:\Users\Mr. Seri0us\AppData\Local\Programs\Python\Python312\python.exe
** ComfyUI Path: E:\tmp\ComfyUI
** ComfyUI Base Folder Path: E:\tmp\ComfyUI
** User directory: E:\tmp\ComfyUI\user
** ComfyUI-Manager config path: E:\tmp\ComfyUI\user__manager\config.ini
** Log path: E:\tmp\ComfyUI\user\comfyui.log

Prestartup times for custom nodes:
   5.4 seconds: E:\tmp\ComfyUI\custom_nodes\ComfyUI-Manager

Checkpoint files will always be loaded safely.
Total VRAM 12282 MB, total RAM 65444 MB
pytorch version: 2.7.1+cu128
xformers version: 0.0.31.post1
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Disabling smart memory management
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 29449.0
Using sage attention
Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
ComfyUI version: 0.3.76
ComfyUI frontend version: 1.33.10
[Prompt Server] web root: C:\Users\Mr. Seri0us\AppData\Local\Programs\Python\Python312\Lib\site-packages\comfyui_frontend_package\static
Total VRAM 12282 MB, total RAM 65444 MB
pytorch version: 2.7.1+cu128
xformers version: 0.0.31.post1
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Disabling smart memory management
Device: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 29449.0

Loading: ComfyUI-Manager (V3.38)

[ComfyUI-Manager] network_mode: public
[ComfyUI-Manager] Since --preview-method is set, ComfyUI-Manager's preview method feature will be ignored.

ComfyUI Version: v0.3.76-14-g519c9411 | Released on '2025-12-03'

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json

Import times for custom nodes:
   0.0 seconds: E:\tmp\ComfyUI\custom_nodes\websocket_image_save.py
   0.2 seconds: E:\tmp\ComfyUI\custom_nodes\comfyui-videohelpersuite
   0.4 seconds: E:\tmp\ComfyUI\custom_nodes\ComfyUI-Manager

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH ComfyRegistry Data: 5/109
FETCH ComfyRegistry Data: 10/109
FETCH ComfyRegistry Data: 15/109
FETCH ComfyRegistry Data: 20/109
got prompt
Using scaled fp8: fp8 matrix mult: True, scale input: True
model weight dtype torch.float16, manual cast: None
model_type FLOW
FETCH ComfyRegistry Data: 25/109
Using xformers attention in VAE
Using xformers attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Using scaled fp8: fp8 matrix mult: False, scale input: False
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
FETCH ComfyRegistry Data: 30/109
Requested to load WanTEModel
loaded completely; 9616.80 MB usable, 6419.48 MB loaded, full load: True
FETCH ComfyRegistry Data: 35/109
Requested to load WanVAE
0 models unloaded.
loaded partially; 0.00 MB usable, 0.00 MB loaded, 242.00 MB offloaded, 22.78 MB buffer reserved, lowvram patches: 0
FETCH ComfyRegistry Data: 40/109
FETCH ComfyRegistry Data: 45/109
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load WAN21
loaded partially; 6624.12 MB usable, 6549.10 MB loaded, 7079.97 MB offloaded, 75.01 MB buffer reserved, lowvram patches: 0
  0%|          | 0/1 [00:00<?, ?it/s]
FETCH ComfyRegistry Data: 50/109
FETCH ComfyRegistry Data: 55/109
Requested to load TAEHV
FETCH ComfyRegistry Data: 60/109
loaded completely; 7202.61 MB usable, 21.58 MB loaded, full load: True
  0%|          | 0/1 [00:10<?, ?it/s]
!!! Exception during processing !!! not enough values to unpack (expected 5, got 4)
Traceback (most recent call last):
  File "E:\tmp\ComfyUI\execution.py", line 515, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
  File "E:\tmp\ComfyUI\execution.py", line 329, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
  File "E:\tmp\ComfyUI\execution.py", line 303, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "E:\tmp\ComfyUI\execution.py", line 291, in process_inputs
    result = f(**inputs)
  File "E:\tmp\ComfyUI\nodes.py", line 1572, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
  File "E:\tmp\ComfyUI\nodes.py", line 1505, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "E:\tmp\ComfyUI\comfy\sample.py", line 60, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "E:\tmp\ComfyUI\comfy\samplers.py", line 1163, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "E:\tmp\ComfyUI\comfy\samplers.py", line 1053, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "E:\tmp\ComfyUI\comfy\samplers.py", line 1035, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
  File "E:\tmp\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "E:\tmp\ComfyUI\comfy\samplers.py", line 997, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
  File "E:\tmp\ComfyUI\comfy\samplers.py", line 980, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "E:\tmp\ComfyUI\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "E:\tmp\ComfyUI\comfy\samplers.py", line 752, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "C:\Users\Mr. Seri0us\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "E:\tmp\ComfyUI\comfy\k_diffusion\sampling.py", line 202, in sample_euler
    callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
  File "E:\tmp\ComfyUI\comfy\samplers.py", line 750, in <lambda>
    k_callback = lambda x: callback(x["i"], x["denoised"], x["x"], total_steps)
  File "E:\tmp\ComfyUI\latent_preview.py", line 124, in callback
    preview_bytes = previewer.decode_latent_to_preview_image(preview_format, x0)
  File "E:\tmp\ComfyUI\custom_nodes\comfyui-videohelpersuite\videohelpersuite\latent_preview.py", line 55, in decode_latent_to_preview_image
    num_images)).run()
  File "C:\Users\Mr. Seri0us\AppData\Local\Programs\Python\Python312\Lib\threading.py", line 1012, in run
    self._target(*self._args, **self._kwargs)
  File "E:\tmp\ComfyUI\custom_nodes\comfyui-videohelpersuite\videohelpersuite\latent_preview.py", line 59, in process_previews
    image_tensor = self.decode_latent_to_preview(image_tensor)
  File "E:\tmp\ComfyUI\custom_nodes\comfyui-videohelpersuite\videohelpersuite\latent_preview.py", line 85, in decode_latent_to_preview
    x_sample = self.taesd.decode(x0).movedim(1, 3)
  File "E:\tmp\ComfyUI\comfy\sd.py", line 753, in decode
    out = self.process_output(self.first_stage_model.decode(samples, **vae_options).to(self.output_device).float())
  File "E:\tmp\ComfyUI\comfy\taesd\taehv.py", line 169, in decode
    x = apply_model_with_memblocks(self.decoder, x, self.parallel, self.show_progress_bar)
  File "E:\tmp\ComfyUI\comfy\taesd\taehv.py", line 52, in apply_model_with_memblocks
    B, T, C, H, W = x.shape
ValueError: not enough values to unpack (expected 5, got 4)

Prompt executed in 29.32 seconds
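
If it helps anyone digging into this: the traceback bottoms out in comfy\taesd\taehv.py, where apply_model_with_memblocks unpacks five dimensions (B, T, C, H, W) from the latent, while the VideoHelperSuite preview path apparently hands it a 4-D tensor. Below is a minimal stand-alone sketch of that mismatch; only the "B, T, C, H, W = x.shape" line mirrors taehv.py, and every shape and name is made up for illustration.

# Not ComfyUI's code - a sketch of why the unpack fails, plus one illustrative guard.
import torch

def decode_stub(x: torch.Tensor) -> torch.Tensor:
    # The video TAE helper assumes a 5-D latent: batch, time, channels, height, width.
    B, T, C, H, W = x.shape          # raises ValueError on a 4-D tensor
    return x.reshape(B * T, C, H, W)

latent_4d = torch.randn(1, 16, 60, 104)   # hypothetical (B, C, H, W) image-style latent
try:
    decode_stub(latent_4d)
except ValueError as err:
    print(err)                            # not enough values to unpack (expected 5, got 4)

# Illustrative guard only, not a patch: insert a length-1 time axis so the tensor
# becomes (B, T=1, C, H, W) before a video decoder sees it.
latent_5d = latent_4d.unsqueeze(1) if latent_4d.ndim == 4 else latent_4d
print(latent_5d.shape)                    # torch.Size([1, 1, 16, 60, 104])

Whether the real fix belongs in the VHS preview callback or in taehv.py I can't say; the sketch is only meant to show where the shapes disagree.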

Logs


Other

No response

MrSeri0us · Dec 03 '25 10:12

I have the same issue.

Elzoz92 · Dec 07 '25 03:12

I have the same issue, but got it from KSampler: not enough values to unpack (expected 2, got 1). Logs below if it can help.

2025-12-09T14:16:51.114818 - Adding extra search path custom_nodes [USER_DOCS]\custom_nodes
2025-12-09T14:16:51.114818 - Adding extra search path download_model_base [USER_DOCS]\models
2025-12-09T14:16:51.114818 - Adding extra search path custom_nodes [COMFY_INSTALL]\custom_nodes
2025-12-09T14:16:51.114818 - Setting output directory to: [USER_DOCS]\output
2025-12-09T14:16:51.114818 - Setting input directory to: [USER_DOCS]\input
2025-12-09T14:16:51.114818 - Setting user directory to: [USER_DOCS]\user
2025-12-09T14:16:52.503303 - [START] Security scan
2025-12-09T14:16:54.609027 - [DONE] Security scan
2025-12-09T14:16:54.851492 - ## ComfyUI-Manager: installing dependencies done.
2025-12-09T14:16:54.851492 - ** ComfyUI startup time: 2025-12-09 14:16:54.851
2025-12-09T14:16:54.851492 - ** Platform: Windows
2025-12-09T14:16:54.851492 - ** Python version: 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)]
2025-12-09T14:16:54.851492 - ** Python executable: [USER_DOCS].venv\Scripts\python.exe
2025-12-09T14:16:54.851492 - ** ComfyUI Path: [COMFY_INSTALL]
2025-12-09T14:16:54.851492 - ** ComfyUI Base Folder Path: [COMFY_INSTALL]
2025-12-09T14:16:54.851492 - ** User directory: [USER_DOCS]\user
2025-12-09T14:16:54.851492 - ** ComfyUI-Manager config path: [USER_DOCS]\user\default\ComfyUI-Manager\config.ini
2025-12-09T14:16:54.851492 - ** Log path: [USER_DOCS]\user\comfyui.log
2025-12-09T14:16:55.652896 - [ComfyUI-Manager] Skipped fixing the 'comfyui-frontend-package' dependency because the ComfyUI is outdated.
2025-12-09T14:16:55.653896 - Prestartup times for custom nodes:
2025-12-09T14:16:55.653896 -    4.5 seconds: [COMFY_INSTALL]\custom_nodes\ComfyUI-Manager
2025-12-09T14:17:03.142178 - Checkpoint files will always be loaded safely.
2025-12-09T14:17:03.311323 - Total VRAM 8188 MB, total RAM 65221 MB
2025-12-09T14:17:03.311323 - pytorch version: 2.8.0+cu129
2025-12-09T14:17:03.311323 - Set vram state to: NORMAL_VRAM
2025-12-09T14:17:03.312292 - Device: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync
2025-12-09T14:17:03.341125 - Using async weight offloading with 2 streams
2025-12-09T14:17:03.345142 - Enabled pinned memory 29349.0
2025-12-09T14:17:06.954365 - Using pytorch attention
2025-12-09T14:17:13.314612 - Python version: 3.12.11 (main, Aug 18 2025, 19:17:54) [MSC v.1944 64 bit (AMD64)]
2025-12-09T14:17:13.314612 - ComfyUI version: 0.3.76
2025-12-09T14:17:13.341824 - [Prompt Server] web root: [COMFY_INSTALL]\web_custom_versions\desktop_app
2025-12-09T14:17:14.767501 - Total VRAM 8188 MB, total RAM 65221 MB
2025-12-09T14:17:14.767501 - pytorch version: 2.8.0+cu129
2025-12-09T14:17:14.767501 - Set vram state to: NORMAL_VRAM
2025-12-09T14:17:14.768501 - Device: cuda:0 NVIDIA GeForce RTX 4060 Laptop GPU : cudaMallocAsync
2025-12-09T14:17:14.796691 - Using async weight offloading with 2 streams
2025-12-09T14:17:14.800684 - Enabled pinned memory 29349.0
2025-12-09T14:17:15.343429 - ### Loading: ComfyUI-Manager (V3.36)
2025-12-09T14:17:15.344432 - [ComfyUI-Manager] network_mode: public
2025-12-09T14:17:15.344432 - ### ComfyUI Revision: UNKNOWN (The currently installed ComfyUI is not a Git repository)
2025-12-09T14:17:15.376429 - Import times for custom nodes:
2025-12-09T14:17:15.376429 -    0.0 seconds: [COMFY_INSTALL]\custom_nodes\websocket_image_save.py
2025-12-09T14:17:15.376429 -    0.2 seconds: [COMFY_INSTALL]\custom_nodes\ComfyUI-Manager
2025-12-09T14:17:15.450842 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-12-09T14:17:15.454842 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-12-09T14:17:15.490072 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-12-09T14:17:15.537204 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-12-09T14:17:15.579205 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-12-09T14:17:16.024915 - Failed to initialize database. Please ensure you have installed the latest requirements. If the error persists, please report this as in future the database will be required: (sqlite3.OperationalError) unable to open database file (Background on this error at: https://sqlalche.me/e/20/e3q8)
2025-12-09T14:17:16.117650 - Starting server

2025-12-09T14:17:16.118650 - To see the GUI go to: http://<IP_ADDRESS>:8000
2025-12-09T14:17:18.624604 - FETCH ComfyRegistry Data: 5/111
2025-12-09T14:17:18.750396 - comfyui-frontend-package not found in requirements.txt
2025-12-09T14:17:19.140036 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-09T14:17:19.146034 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /extensions/core/groupNode.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-09T14:17:19.283423 - comfyui-frontend-package not found in requirements.txt
2025-12-09T14:17:19.407354 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/buttonGroup.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-09T14:17:19.430621 - [DEPRECATION WARNING] Detected import of deprecated legacy API: /scripts/ui/components/button.js. This is likely caused by a custom node extension using outdated APIs. Please update your extensions or contact the extension author for an updated version.
2025-12-09T14:17:22.359907 - FETCH ComfyRegistry Data: 10/111
2025-12-09T14:17:25.764004 - FETCH ComfyRegistry Data: 15/111
2025-12-09T14:17:29.168462 - FETCH ComfyRegistry Data: 20/111
2025-12-09T14:17:32.561063 - FETCH ComfyRegistry Data: 25/111
2025-12-09T14:17:35.995328 - FETCH ComfyRegistry Data: 30/111
2025-12-09T14:17:40.094532 - FETCH ComfyRegistry Data: 35/111
2025-12-09T14:17:44.031965 - FETCH ComfyRegistry Data: 40/111
2025-12-09T14:17:47.436179 - FETCH ComfyRegistry Data: 45/111
2025-12-09T14:17:50.900068 - FETCH ComfyRegistry Data: 50/111
2025-12-09T14:17:54.301471 - FETCH ComfyRegistry Data: 55/111
2025-12-09T14:17:58.244848 - FETCH ComfyRegistry Data: 60/111
2025-12-09T14:18:01.647495 - FETCH ComfyRegistry Data: 65/111
2025-12-09T14:18:05.079171 - FETCH ComfyRegistry Data: 70/111
2025-12-09T14:18:09.506694 - FETCH ComfyRegistry Data: 75/111
2025-12-09T14:18:12.929383 - FETCH ComfyRegistry Data: 80/111
2025-12-09T14:18:16.393785 - FETCH ComfyRegistry Data: 85/111
2025-12-09T14:18:19.844282 - FETCH ComfyRegistry Data: 90/111
2025-12-09T14:18:23.649232 - FETCH ComfyRegistry Data: 95/111
2025-12-09T14:18:28.517347 - FETCH ComfyRegistry Data: 100/111
2025-12-09T14:18:32.019581 - FETCH ComfyRegistry Data: 105/111
2025-12-09T14:18:35.427294 - FETCH ComfyRegistry Data: 110/111
2025-12-09T14:18:36.608769 - FETCH ComfyRegistry Data [DONE]
2025-12-09T14:18:36.725143 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-12-09T14:18:36.741066 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-12-09T14:18:36.845468 - [DONE]
2025-12-09T14:18:36.875477 - [ComfyUI-Manager] broken item:{'author': 'rjgoif', 'title': 'Img Label Tools', 'id': 'Img-Label-Tools', 'reference': 'https://github.com/rjgoif/ComfyUI-Img-Label-Tools', 'install_type': 'git-clone', 'description': 'Tools to help annotate images for sharing on Reddit, Discord, etc.'}
2025-12-09T14:18:36.899436 - [ComfyUI-Manager] All startup tasks have been completed.
2025-12-09T14:24:28.871085 - got prompt
2025-12-09T14:24:29.314546 - model weight dtype torch.float16, manual cast: None
2025-12-09T14:24:29.315546 - model_type FLOW
2025-12-09T14:24:31.996006 - VAE load device: cuda:0, offload device: cpu, dtype: torch.float16
2025-12-09T14:24:32.772938 - Requested to load Dinov2Model
2025-12-09T14:24:33.310695 - loaded completely; 5676.80 MB usable, 577.86 MB loaded, full load: True
2025-12-09T14:24:34.292667 - Requested to load Hunyuan3Dv2_1
2025-12-09T14:24:35.464381 - loaded partially; 5634.67 MB usable, 5610.67 MB loaded, 208.19 MB offloaded, 24.00 MB buffer reserved, lowvram patches: 0
2025-12-09T14:24:35.500482 -   0%|          | 0/30 [00:00<?, ?it/s]
2025-12-09T14:24:35.505503 -   0%|          | 0/30 [00:00<?, ?it/s]
2025-12-09T14:24:35.519127 - !!! Exception during processing !!! not enough values to unpack (expected 2, got 1)
2025-12-09T14:24:35.581136 - Traceback (most recent call last):
  File "[COMFY_INSTALL]\execution.py", line 510, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "[COMFY_INSTALL]\execution.py", line 324, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
  File "[COMFY_INSTALL]\execution.py", line 298, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "[COMFY_INSTALL]\execution.py", line 286, in process_inputs
    result = f(**inputs)
  File "[COMFY_INSTALL]\nodes.py", line 1535, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "[COMFY_INSTALL]\nodes.py", line 1502, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "[COMFY_INSTALL]\comfy\sample.py", line 60, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 1163, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 1053, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 1035, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
  File "[COMFY_INSTALL]\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 997, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 980, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "[COMFY_INSTALL]\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 752, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "[USER_DOCS].venv\Lib\site-packages\torch\utils\_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
  File "[COMFY_INSTALL]\comfy\k_diffusion\sampling.py", line 199, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 401, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 953, in __call__
    return self.outer_predict_noise(*args, **kwargs)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 960, in outer_predict_noise
    ).execute(x, timestep, model_options, seed)
  File "[COMFY_INSTALL]\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 963, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 381, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 206, in calc_cond_batch
    return _calc_cond_batch_outer(model, conds, x_in, timestep, model_options)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 214, in _calc_cond_batch_outer
    return executor.execute(model, conds, x_in, timestep, model_options)
  File "[COMFY_INSTALL]\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "[COMFY_INSTALL]\comfy\samplers.py", line 326, in calc_cond_batch
    output = model.apply_model(input_x, timestep, **c).chunk(batch_chunks)
  File "[COMFY_INSTALL]\comfy\model_base.py", line 161, in apply_model
    return comfy.patcher_extension.WrapperExecutor.new_class_executor(
  File "[COMFY_INSTALL]\comfy\patcher_extension.py", line 112, in execute
    return self.original(*args, **kwargs)
  File "[COMFY_INSTALL]\comfy\model_base.py", line 203, in _apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds)
  File "[USER_DOCS].venv\Lib\site-packages\torch\nn\modules\module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "[USER_DOCS].venv\Lib\site-packages\torch\nn\modules\module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
  File "[COMFY_INSTALL]\comfy\ldm\hunyuan3dv2_1\hunyuandit.py", line 608, in forward
    uncond_emb, cond_emb = context.chunk(2, dim = 0)
ValueError: not enough values to unpack (expected 2, got 1)

2025-12-09T14:24:35.584056 - Prompt executed in 6.71 seconds
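
Same symptom, different spot: here the unpack is in hunyuandit.py's forward, which expects the conditioning to arrive as a doubled batch (unconditional and conditional stacked on dim 0, the way CFG normally batches them), and torch.chunk quietly returns fewer pieces when the batch is too small. A small illustration with made-up shapes, not the Hunyuan3D code:

# Shows how chunk(2) on a batch of one produces the exact error in the log above.
import torch

context = torch.randn(1, 77, 1024)        # hypothetical conditioning batch of size 1
print(len(context.chunk(2, dim=0)))       # 1 - chunk returns at most the requested count

try:
    uncond_emb, cond_emb = context.chunk(2, dim=0)
except ValueError as err:
    print(err)                            # not enough values to unpack (expected 2, got 1)

# With both halves stacked along dim 0, the same unpack works:
doubled = torch.cat([context, context], dim=0)
uncond_emb, cond_emb = doubled.chunk(2, dim=0)
print(uncond_emb.shape, cond_emb.shape)   # both torch.Size([1, 77, 1024])

So anything that lets only one of the two conditionings reach the model (a missing negative input, or a wiring issue in the workflow) could plausibly trigger it; I can't say which applies here.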

sropqras · Dec 09 '25 14:12

Have you installed any custom nodes? If yes, check the input and output parameters of the nodes.
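
One low-effort way to follow that advice is to drop a tiny passthrough node between the suspect nodes and print what actually flows through. A rough sketch using the standard custom-node layout (a .py file in custom_nodes exposing NODE_CLASS_MAPPINGS); the class name, display name, and category are made up:

# Hypothetical debug node, not part of any existing pack: passes a LATENT through
# unchanged and logs its tensor shape so dimension mismatches show up in the console.
class LatentShapeLogger:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"samples": ("LATENT",)}}

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "log_shape"
    CATEGORY = "debug"

    def log_shape(self, samples):
        # ComfyUI passes latents as a dict holding the tensor under "samples".
        print("[LatentShapeLogger] latent shape:", tuple(samples["samples"].shape))
        return (samples,)

NODE_CLASS_MAPPINGS = {"LatentShapeLogger": LatentShapeLogger}
NODE_DISPLAY_NAME_MAPPINGS = {"LatentShapeLogger": "Latent Shape Logger (debug)"}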

Vijay2359 · Dec 10 '25 10:12

Thanks Vijay, I was using the Hunyuan3D Image to Object workflow. Reducing the image size did help and the output was rendered without the error; however, the output mesh wasn't very usable for Blender to polish, rather really low res. I would like to understand how the parameters can be set so as not to overload and hit this error. I have an i7 HX build with SSDs (2 system, and data), 64GB RAM, and a 6GB RTX 4050.

sropqras · Dec 10 '25 11:12