Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (TrainLoraNode)
Custom Node Testing
- [x] I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Expected Behavior
Training a LoRA with a model should keep all tensors on a single device pair (GPU + RAM, or CPU + RAM).
Actual Behavior
The model ends up split between GPU and CPU (note the "loaded partially: 10646.06 MB loaded" line in the log below), and the TrainLoraNode fails immediately with "Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!".
Steps to Reproduce
Run a workflow containing a TrainLoraNode (Lumina2 / z_image_turbo_bf16 model, dataset of 24 images). Training fails on step 0/100.
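For context, the innermost frame of the stack trace is the LoRA weight patch in `comfy/weight_adapter/lora.py` (`weight = w + scale * diff.reshape(w.shape)`). The following is a minimal sketch, not ComfyUI's actual code (the helper name is hypothetical): if the base weight `w` stays offloaded on the CPU while the LoRA `diff` was built on `cuda:0`, that addition raises exactly this RuntimeError, and casting the diff onto `w`'s device first would sidestep the mismatch.

```python
import torch

def apply_lora_patch(w: torch.Tensor, diff: torch.Tensor, scale: float) -> torch.Tensor:
    """Hypothetical sketch of the patch applied in comfy/weight_adapter/lora.py.

    The failing line is `w + scale * diff.reshape(w.shape)`; when `w` lives on
    the CPU (partially offloaded model) while `diff` lives on cuda:0, that
    addition raises "Expected all tensors to be on the same device".
    Moving the diff onto w's device/dtype first avoids the mismatch.
    """
    diff = diff.reshape(w.shape).to(device=w.device, dtype=w.dtype)
    return w + scale * diff

# CPU-only demonstration of the happy path:
w = torch.zeros(2, 2)
diff = torch.ones(4)
patched = apply_lora_patch(w, diff, 0.5)
print(patched)  # every entry is 0.5
```

On a CUDA build, the failure itself reproduces with just `torch.zeros(2, 2) + torch.ones(2, 2, device="cuda")`. On the user side, a possible workaround (untested here) is launching ComfyUI with `--highvram` or `--gpu-only` so the model is not partially offloaded in the first place.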
Debug Logs
# ComfyUI Error Report
## Error Details
- **Node ID:** 1
- **Node Type:** TrainLoraNode
- **Exception Type:** RuntimeError
- **Exception Message:** Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
## Stack Trace
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 510, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 324, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, hidden_inputs=hidden_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 292, in _async_map_node_over_list
await process_inputs(input_data_all, 0, input_is_list=input_is_list)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 286, in process_inputs
result = f(**inputs)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy_api\latest\_io.py", line 1275, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_train.py", line 627, in execute
guider.sample(
~~~~~~~~~~~~~^
noise.generate_noise({"samples": latents}),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
seed=noise.seed,
^^^^^^^^^^^^^^^^
)
^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 1035, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 997, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 980, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_train.py", line 160, in sample
loss = self.fwd_bwd(
model_wrap,
...<7 lines>...
bwd=True,
)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_train.py", line 107, in fwd_bwd
x0_pred = model_wrap(
xt.requires_grad_(True),
batch_sigmas.requires_grad_(True),
**batch_extra_args,
)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 953, in __call__
return self.outer_predict_noise(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 960, in outer_predict_noise
).execute(x, timestep, model_options, seed)
~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 963, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 381, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 206, in calc_cond_batch
return _calc_cond_batch_outer(model, conds, x_in, timestep, model_options)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 214, in _calc_cond_batch_outer
return executor.execute(model, conds, x_in, timestep, model_options)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 326, in _calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 161, in apply_model
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...<2 lines>...
comfy.patcher_extension.get_all_wrappers(comfy.patcher_extension.WrappersMP.APPLY_MODEL, transformer_options)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
).execute(x, t, c_concat, c_crossattn, control, transformer_options, **kwargs)
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 203, in _apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\ldm\lumina\model.py", line 548, in forward
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...<2 lines>...
comfy.patcher_extension.get_all_wrappers(comfy.patcher_extension.WrappersMP.DIFFUSION_MODEL, kwargs.get("transformer_options", {}))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
).execute(x, timesteps, context, num_tokens, attention_mask, **kwargs)
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\ldm\lumina\model.py", line 567, in _forward
t = self.t_embedder(t * self.time_scale, dtype=x.dtype) # (N, D)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\mmdit.py", line 227, in forward
t_emb = self.mlp(t_freq)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\container.py", line 250, in forward
input = module(input)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 164, in forward
return self.forward_comfy_cast_weights(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 156, in forward_comfy_cast_weights
weight, bias, offload_stream = cast_bias_weight(self, input, offloadable=True)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 123, in cast_bias_weight
weight = f(weight)
File "C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\comfy\weight_adapter\lora.py", line 51, in __call__
weight = w + scale * diff.reshape(w.shape)
~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## System Information
- **ComfyUI Version:** 0.3.75
- **Arguments:** ComfyUI\main.py
- **OS:** nt
- **Python Version:** 3.13.9 (tags/v3.13.9:8183fa5, Oct 14 2025, 14:09:13) [MSC v.1944 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.9.1+cu130
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4080 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 17170956288
- **VRAM Free:** 15776874496
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0
## Logs
2025-11-27T15:54:35.672691 - [START] Security scan
2025-11-27T15:54:36.380293 - [DONE] Security scan
2025-11-27T15:54:36.445516 - ## ComfyUI-Manager: installing dependencies done.
2025-11-27T15:54:36.445659 - ** ComfyUI startup time: 2025-11-27 15:54:36.445
2025-11-27T15:54:36.445769 - ** Platform: Windows
2025-11-27T15:54:36.445863 - ** Python version: 3.13.9 (tags/v3.13.9:8183fa5, Oct 14 2025, 14:09:13) [MSC v.1944 64 bit (AMD64)]
2025-11-27T15:54:36.445953 - ** Python executable: C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\python.exe
2025-11-27T15:54:36.446041 - ** ComfyUI Path: C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI
2025-11-27T15:54:36.446127 - ** ComfyUI Base Folder Path: C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI
2025-11-27T15:54:36.446210 - ** User directory: C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\user
2025-11-27T15:54:36.446477 - ** ComfyUI-Manager config path: C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\config.ini
2025-11-27T15:54:36.446556 - ** Log path: C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\user\comfyui.log
2025-11-27T15:54:37.125247 -
Prestartup times for custom nodes:
2025-11-27T15:54:37.125448 - 1.8 seconds: C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
2025-11-27T15:54:37.125537 -
2025-11-27T15:54:37.526637 - C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
import pynvml # type: ignore[import]
2025-11-27T15:54:38.096280 - Checkpoint files will always be loaded safely.
2025-11-27T15:54:38.178652 - Total VRAM 16376 MB, total RAM 130408 MB
2025-11-27T15:54:38.178987 - pytorch version: 2.9.1+cu130
2025-11-27T15:54:38.179522 - Set vram state to: NORMAL_VRAM
2025-11-27T15:54:38.179828 - Device: cuda:0 NVIDIA GeForce RTX 4080 : cudaMallocAsync
2025-11-27T15:54:38.190319 - Enabled pinned memory 58683.0
2025-11-27T15:54:38.201994 - working around nvidia conv3d memory bug.
2025-11-27T15:54:38.775255 - Using pytorch attention
2025-11-27T15:54:39.763361 - Python version: 3.13.9 (tags/v3.13.9:8183fa5, Oct 14 2025, 14:09:13) [MSC v.1944 64 bit (AMD64)]
2025-11-27T15:54:39.763501 - ComfyUI version: 0.3.75
2025-11-27T15:54:39.775817 - ComfyUI frontend version: 1.32.9
2025-11-27T15:54:39.777006 - [Prompt Server] web root: C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfyui_frontend_package\static
2025-11-27T15:54:40.181448 - Total VRAM 16376 MB, total RAM 130408 MB
2025-11-27T15:54:40.181578 - pytorch version: 2.9.1+cu130
2025-11-27T15:54:40.181815 - Set vram state to: NORMAL_VRAM
2025-11-27T15:54:40.181974 - Device: cuda:0 NVIDIA GeForce RTX 4080 : cudaMallocAsync
2025-11-27T15:54:40.191652 - Enabled pinned memory 58683.0
2025-11-27T15:54:40.489605 - ### Loading: ComfyUI-Manager (V3.37.1)
2025-11-27T15:54:40.490055 - [ComfyUI-Manager] network_mode: public
2025-11-27T15:54:40.578496 - ### ComfyUI Version: v0.3.75-12-geaf68c9b | Released on '2025-11-26'
2025-11-27T15:54:40.586867 -
Import times for custom nodes:
2025-11-27T15:54:40.587126 - 0.0 seconds: C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2025-11-27T15:54:40.587430 - 0.1 seconds: C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
2025-11-27T15:54:40.587550 -
2025-11-27T15:54:40.799535 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-11-27T15:54:40.818647 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-11-27T15:54:40.844776 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-11-27T15:54:40.872920 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-11-27T15:54:40.879314 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-11-27T15:54:41.208786 - Context impl SQLiteImpl.
2025-11-27T15:54:41.208941 - Will assume non-transactional DDL.
2025-11-27T15:54:41.209674 - No target revision found.
2025-11-27T15:54:41.228383 - Starting server
2025-11-27T15:54:41.228728 - To see the GUI go to: http://127.0.0.1:8188
2025-11-27T15:54:44.968239 - FETCH ComfyRegistry Data: 5/108
2025-11-27T15:54:48.359568 - FETCH ComfyRegistry Data: 10/108
2025-11-27T15:54:51.763153 - FETCH ComfyRegistry Data: 15/108
2025-11-27T15:54:53.801929 - got prompt
2025-11-27T15:54:53.807461 - Loading 1 shards from C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\output\z-skill...
2025-11-27T15:54:53.830482 - Loaded shard_0000.pkl: 24 samples
2025-11-27T15:54:53.830609 - Successfully loaded 24 samples from C:\Users\Mr.Frosty\Documents\ComfyUI_windows_portable\ComfyUI\output\z-skill.
2025-11-27T15:54:53.854825 - model weight dtype torch.bfloat16, manual cast: None
2025-11-27T15:54:53.855417 - model_type FLOW
2025-11-27T15:54:54.843442 - unet missing: ['norm_final.weight']
2025-11-27T15:54:55.527037 - FETCH ComfyRegistry Data: 20/108
2025-11-27T15:54:55.670470 - Latent shapes: {torch.Size([1, 16, 64, 64])}
2025-11-27T15:54:55.671822 - Total Images: 24, Total Captions: 24
2025-11-27T15:54:55.833782 - Requested to load Lumina2
2025-11-27T15:54:55.834291 - 0 models unloaded.
2025-11-27T15:54:58.385073 - loaded completely; 95367431640625005117571072.00 MB usable, 11783.35 MB loaded, full load: True
2025-11-27T15:54:58.430709 - 0 models unloaded.
2025-11-27T15:54:58.772917 - loaded partially: 10646.06 MB loaded, lowvram patches: 0
2025-11-27T15:54:58.829857 - Training LoRA: 0%| | 0/100 [00:00<?, ?it/s]
2025-11-27T15:54:58.898911 - !!! Exception during processing !!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
2025-11-27T15:54:58.906174 - Prompt executed in 5.10 seconds
2025-11-27T15:54:59.044555 - FETCH ComfyRegistry Data: 25/108
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
{"id":"9ab275bd-5353-4184-a58b-17b899810b86","revision":0,"last_node_id":18,"last_link_id":23,"nodes":[{"id":11,"type":"ImageScale","pos":[-52.865373318142254,123.03237142558419],"size":[270,130],"flags":{},"order":6,"mode":4,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":11},{"localized_name":"upscale_method","name":"upscale_method","type":"COMBO","widget":{"name":"upscale_method"},"link":null},{"localized_name":"width","name":"width","type":"INT","widget":{"name":"width"},"link":null},{"localized_name":"height","name":"height","type":"INT","widget":{"name":"height"},"link":null},{"localized_name":"crop","name":"crop","type":"COMBO","widget":{"name":"crop"},"link":null}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[12]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.75","Node name for S&R":"ImageScale"},"widgets_values":["nearest-exact",512,512,"disabled"]},{"id":9,"type":"LoadImageDataSetFromFolder","pos":[-457.25599805608033,156.11762348362015],"size":[301.4439453125,58],"flags":{},"order":0,"mode":4,"inputs":[{"localized_name":"folder","name":"folder","type":"COMBO","widget":{"name":"folder"},"link":null}],"outputs":[{"localized_name":"images","name":"images","shape":6,"type":"IMAGE","links":[11]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.75","Node name for S&R":"LoadImageDataSetFromFolder"},"widgets_values":["z-image-dataset"]},{"id":5,"type":"VAELoader","pos":[-140.5306804235566,324.1021279787816],"size":[270,58],"flags":{},"order":1,"mode":4,"inputs":[{"localized_name":"vae_name","name":"vae_name","type":"COMBO","widget":{"name":"vae_name"},"link":null}],"outputs":[{"localized_name":"VAE","name":"VAE","type":"VAE","links":[6]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.75","Node name for 
S&R":"VAELoader"},"widgets_values":["z-image_vae.safetensors"]},{"id":16,"type":"CLIPLoader","pos":[-148.1673979545318,457.006117625686],"size":[270,106],"flags":{},"order":2,"mode":4,"inputs":[{"localized_name":"clip_name","name":"clip_name","type":"COMBO","widget":{"name":"clip_name"},"link":null},{"localized_name":"type","name":"type","type":"COMBO","widget":{"name":"type"},"link":null},{"localized_name":"device","name":"device","shape":7,"type":"COMBO","widget":{"name":"device"},"link":null}],"outputs":[{"localized_name":"CLIP","name":"CLIP","type":"CLIP","links":[18]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.75","Node name for S&R":"CLIPLoader"},"widgets_values":["qwen_3_4b.safetensors","lumina2","default"]},{"id":3,"type":"SaveTrainingDataset","pos":[-32.200634164296204,634.6668855377093],"size":[270,102],"flags":{},"order":10,"mode":4,"inputs":[{"localized_name":"latents","name":"latents","type":"LATENT","link":1},{"localized_name":"conditioning","name":"conditioning","type":"CONDITIONING","link":2},{"localized_name":"folder_name","name":"folder_name","type":"STRING","widget":{"name":"folder_name"},"link":null},{"localized_name":"shard_size","name":"shard_size","type":"INT","widget":{"name":"shard_size"},"link":null}],"outputs":[],"properties":{"cnr_id":"comfy-core","ver":"0.3.75","Node name for 
S&R":"SaveTrainingDataset"},"widgets_values":["z-skill",10000]},{"id":2,"type":"MakeTrainingDataset","pos":[-128.5999678624224,198.69801784113525],"size":[270,98],"flags":{},"order":8,"mode":4,"inputs":[{"localized_name":"images","name":"images","type":"IMAGE","link":12},{"localized_name":"vae","name":"vae","type":"VAE","link":6},{"localized_name":"clip","name":"clip","type":"CLIP","link":18},{"localized_name":"texts","name":"texts","shape":7,"type":"STRING","widget":{"name":"texts"},"link":null}],"outputs":[{"localized_name":"latents","name":"latents","shape":6,"type":"LATENT","links":[1]},{"localized_name":"conditioning","name":"conditioning","shape":6,"type":"CONDITIONING","links":[2]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.75","Node name for S&R":"MakeTrainingDataset"},"widgets_values":[""]},{"id":12,"type":"UNETLoader","pos":[655.8918408349118,37.625339393660326],"size":[270,82],"flags":{},"order":3,"mode":0,"inputs":[{"localized_name":"unet_name","name":"unet_name","type":"COMBO","widget":{"name":"unet_name"},"link":null},{"localized_name":"weight_dtype","name":"weight_dtype","type":"COMBO","widget":{"name":"weight_dtype"},"link":null}],"outputs":[{"localized_name":"MODEL","name":"MODEL","type":"MODEL","links":[13]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.75","Node name for S&R":"UNETLoader"},"widgets_values":["z_image_turbo_bf16.safetensors","default"]},{"id":14,"type":"SaveLoRA","pos":[1669.4327520909453,364.2853741723532],"size":[270,82],"flags":{},"order":9,"mode":0,"inputs":[{"localized_name":"lora","name":"lora","type":"LORA_MODEL","link":15},{"localized_name":"prefix","name":"prefix","type":"STRING","widget":{"name":"prefix"},"link":null},{"localized_name":"steps","name":"steps","shape":7,"type":"INT","widget":{"name":"steps"},"link":null}],"outputs":[],"properties":{"cnr_id":"comfy-core","ver":"0.3.75","Node name for 
S&R":"SaveLoRA"},"widgets_values":["loras/z-skill",100]},{"id":1,"type":"TrainLoraNode","pos":[1117.7665637046055,228.49712230917353],"size":[300.751953125,430],"flags":{},"order":7,"mode":0,"inputs":[{"localized_name":"model","name":"model","type":"MODEL","link":13},{"localized_name":"latents","name":"latents","type":"LATENT","link":22},{"localized_name":"positive","name":"positive","type":"CONDITIONING","link":23},{"localized_name":"batch_size","name":"batch_size","type":"INT","widget":{"name":"batch_size"},"link":null},{"localized_name":"grad_accumulation_steps","name":"grad_accumulation_steps","type":"INT","widget":{"name":"grad_accumulation_steps"},"link":null},{"localized_name":"steps","name":"steps","type":"INT","widget":{"name":"steps"},"link":null},{"localized_name":"learning_rate","name":"learning_rate","type":"FLOAT","widget":{"name":"learning_rate"},"link":null},{"localized_name":"rank","name":"rank","type":"INT","widget":{"name":"rank"},"link":null},{"localized_name":"optimizer","name":"optimizer","type":"COMBO","widget":{"name":"optimizer"},"link":null},{"localized_name":"loss_function","name":"loss_function","type":"COMBO","widget":{"name":"loss_function"},"link":null},{"localized_name":"seed","name":"seed","type":"INT","widget":{"name":"seed"},"link":null},{"localized_name":"training_dtype","name":"training_dtype","type":"COMBO","widget":{"name":"training_dtype"},"link":null},{"localized_name":"lora_dtype","name":"lora_dtype","type":"COMBO","widget":{"name":"lora_dtype"},"link":null},{"localized_name":"algorithm","name":"algorithm","type":"COMBO","widget":{"name":"algorithm"},"link":null},{"localized_name":"gradient_checkpointing","name":"gradient_checkpointing","type":"BOOLEAN","widget":{"name":"gradient_checkpointing"},"link":null},{"localized_name":"existing_lora","name":"existing_lora","type":"COMBO","widget":{"name":"existing_lora"},"link":null}],"outputs":[{"localized_name":"model_with_lora","name":"model","type":"MODEL","links":[]},{"localized
_name":"lora","name":"lora","type":"LORA_MODEL","links":[15]},{"localized_name":"loss","name":"loss_map","type":"LOSS_MAP","links":null},{"localized_name":"steps","name":"steps","type":"INT","links":null}],"properties":{"cnr_id":"comfy-core","ver":"0.3.75","Node name for S&R":"TrainLoraNode"},"widgets_values":[1,1,100,0.0005,8,"RMSprop","MSE",455842,"fixed","bf16","bf16","LoRA",false,"[None]"]},{"id":15,"type":"UNetSave","pos":[1739.667288470156,186.62354802458594],"size":[270,58],"flags":{},"order":8,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":16}],"outputs":[],"properties":{"Node name for S&R":"UNetSave","cnr_id":"RES4LYF","ver":"46de917234f9fef3f2ab411c41e07aa3c633f4f7"},"widgets_values":["models/z-skill"]},{"id":18,"type":"LoadTrainingDataset","pos":[693.7690495018734,242.6567822354766],"size":[270,78],"flags":{},"order":5,"mode":0,"inputs":[{"localized_name":"folder_name","name":"folder_name","type":"STRING","widget":{"name":"folder_name"},"link":null}],"outputs":[{"localized_name":"latents","name":"latents","shape":6,"type":"LATENT","links":[22]},{"localized_name":"conditioning","name":"conditioning","shape":6,"type":"CONDITIONING","links":[23]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.75","Node name for S&R":"LoadTrainingDataset"},"widgets_values":["z-skill"]}],"links":[[1,2,0,3,0,"LATENT"],[2,2,1,3,1,"CONDITIONING"],[6,5,0,2,1,"VAE"],[11,9,0,11,0,"IMAGE"],[12,11,0,2,0,"IMAGE"],[13,12,0,1,0,"MODEL"],[15,1,1,14,0,"LORA_MODEL"],[18,16,0,2,2,"CLIP"],[22,18,0,1,1,"LATENT"],[23,18,1,1,2,"CONDITIONING"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.8390545288824437,"offset":[100.56885632087726,472.9753010430394]},"workflowRendererVersion":"LG"},"version":0.4}
I can confirm this bug on my side
@KohakuBlueleaf any chance you can help here? I tried moving tensors to both devices but couldn't resolve the problem
My next update for the resolution bucket will include this fix, sorry for the inconvenience.
Same error on latest 0.3.76. Is lora training only for nvidia cards?
I don't think so; the issue isn't related to the GPU manufacturer, afaik. No fix has been pushed yet, though.
I can now only get it to train on SD1.5, and only after regularly clearing the model and execution caches. SDXL no longer works:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
It also can no longer handle steps values above ~160; before I could do 2000:
torch.OutOfMemoryError: Allocation on device
ComfyUI v0.3.76
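For anyone debugging this locally: the error class itself is easy to reproduce outside ComfyUI. This is a generic PyTorch sketch (not ComfyUI's actual training code) of what happens when model weights and a dataset batch end up on different devices, and the usual fix:

```python
import torch

# Generic sketch of the failure mode, not ComfyUI's actual code:
# an op between a CUDA tensor and a CPU tensor raises this RuntimeError.
device = "cuda" if torch.cuda.is_available() else "cpu"
weights = torch.randn(4, 4, device=device)   # e.g. model weights on the GPU
latents = torch.randn(4, 4)                  # e.g. a dataset batch left on the CPU

try:
    out = weights @ latents                  # fails when device is "cuda"
except RuntimeError as err:
    print(err)                               # "Expected all tensors to be on the same device..."

# The usual fix: move one operand onto the other's device at the point of use,
# rather than assuming everything shares one global device.
out = weights @ latents.to(weights.device)
print(out.shape)  # torch.Size([4, 4])
```

On a CPU-only machine both tensors land on `cpu` and the first matmul succeeds, which is consistent with reports here that the node behaves differently depending on where the model and the cached dataset get loaded.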
When is the next update? I still have the same problem. Thanks
A PR containing a fix for this problem has been opened:
https://github.com/comfyanonymous/ComfyUI/pull/11117
Still broken in 0.4.0, clean install of ComfyUI_windows_portable_nvidia.7z. The last time this worked with SDXL base was 0.3.68.
Because the PR has never been merged. You can try that PR branch directly.
Cool, I tried it and it appeared to work so far.
If anyone wants to try it, open a command prompt in your `.\ComfyUI_windows_portable\ComfyUI\` folder and run:
`git fetch origin pull/11117/head:resolution-bucket`
`git switch -f resolution-bucket`
Hi, fairly new here. I've been having this issue ever since updating to 0.4. How would I go about installing this on the non-portable version?
For the non-portable version the commands should be the same; run them from inside your ComfyUI git checkout. If you get stuck, you can ask an AI "how to switch to the contents of a PR in git" and "how to get back to the main branch" (in that case, include your `git status` output).
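To make the PR-checkout pattern concrete (including how to get back afterwards), here is a self-contained demo. Since the real `git fetch` needs network access to GitHub, it uses a throwaway local repo as a stand-in "origin" with a PR-style ref; the branch name `resolution-bucket` and the PR number match the commands quoted earlier in this thread:

```shell
set -e
work=$(mktemp -d)

# Stand-in "origin" with a PR-style ref (refs/pull/11117/head), purely local.
git init -q "$work/origin"
git -C "$work/origin" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m init
git -C "$work/origin" update-ref refs/pull/11117/head HEAD

# Your checkout: same commands as above, pointed at the local origin.
git clone -q "$work/origin" "$work/comfy"
cd "$work/comfy"
git fetch -q origin pull/11117/head:resolution-bucket
git switch -q -f resolution-bucket
echo "on: $(git branch --show-current)"

# Going back to the branch you were on before (the clone's default branch):
git switch -q -
echo "back on: $(git branch --show-current)"
```

The `<src>:<dst>` refspec `pull/11117/head:resolution-bucket` is what turns GitHub's read-only PR ref into a normal local branch; `git switch -` then returns you to whatever branch you came from.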