ComfyUI-LTXVideo

LTX-2 - Still getting mat1 and mat2 shapes cannot be multiplied (1024x62208 and 188160x3840) even with live preview off.

Open ThePowerOfMonkeys opened this issue 1 month ago • 3 comments

Hi guys, so I've been trying out the LTX-2 workflow on a reasonably fresh install: Python 3.12, PyTorch 2.9.0 with CUDA 12.8 on an RTX Pro 6000 Blackwell.

I can run the default Comfy workflow for LTX-2, which generates clips pretty damn fast, which is awesome! However, when I try to use the LTXVideo workflows (which I'd like to, given they're the ones from the team that built this model), I consistently get this error:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (1024x62208 and 188160x3840)

I've tried about 3-4 different gemma-3-12b models; they all seem to work with the Comfy workflow, but they all have real issues with the LTX T2V flows.

I've disabled live preview on the command line and ensured it's set to 'none' in the settings panel too.

Any thoughts?

Startup log and all that attached:


Launching ComfyUI from: /home/ace/comfy-dev/ltx-2-test

Adding extra search path checkpoints /home/ace/comfy-dev/_shared_models/checkpoints
Adding extra search path text_encoders /home/ace/comfy-dev/_shared_models/text_encoders
Adding extra search path text_encoders /home/ace/comfy-dev/_shared_models/clip/
Adding extra search path clip /home/ace/comfy-dev/_shared_models/clip
Adding extra search path clip_vision /home/ace/comfy-dev/_shared_models/clip_vision
Adding extra search path configs /home/ace/comfy-dev/_shared_models/configs
Adding extra search path controlnet /home/ace/comfy-dev/_shared_models/controlnet
Adding extra search path diffusion_models /home/ace/comfy-dev/_shared_models/diffusion_models
Adding extra search path diffusion_models /home/ace/comfy-dev/_shared_models/unet
Adding extra search path embeddings /home/ace/comfy-dev/_shared_models/embeddings
Adding extra search path loras /home/ace/comfy-dev/_shared_models/loras
Adding extra search path upscale_models /home/ace/comfy-dev/_shared_models/upscale_models
Adding extra search path latent_upscale_models /home/ace/comfy-dev/_shared_models/latent_upscale_models
Adding extra search path vae /home/ace/comfy-dev/_shared_models/vae
Adding extra search path TTS /home/ace/comfy-dev/_shared_models/tts
Adding extra search path voices /home/ace/comfy-dev/_shared_models/tts/voices
[START] Security scan
[ComfyUI-Manager] Using `uv` as Python module for pip operations.
Using Python 3.12.12 environment at: /home/ace/miniconda3/envs/comfyenv_LTX-2
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2026-01-07 05:04:29.290
** Platform: Linux
** Python version: 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 20:16:04) [GCC 11.2.0]
** Python executable: /home/ace/miniconda3/envs/comfyenv_LTX-2/bin/python3.12
** ComfyUI Path: /home/ace/comfy-dev/ltx-2-test
** ComfyUI Base Folder Path: /home/ace/comfy-dev/ltx-2-test
** User directory: /home/ace/comfy-dev/ltx-2-test/user
** ComfyUI-Manager config path: /home/ace/comfy-dev/ltx-2-test/user/__manager/config.ini
** Log path: /home/ace/comfy-dev/ltx-2-test/user/comfyui.log
Using Python 3.12.12 environment at: /home/ace/miniconda3/envs/comfyenv_LTX-2
Using Python 3.12.12 environment at: /home/ace/miniconda3/envs/comfyenv_LTX-2

Prestartup times for custom nodes:
   0.2 seconds: /home/ace/comfy-dev/ltx-2-test/custom_nodes/ComfyUI-Manager

Checkpoint files will always be loaded safely.
Total VRAM 97248 MB, total RAM 94169 MB
pytorch version: 2.9.0+cu129
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA RTX PRO 6000 Blackwell Workstation Edition : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 89460.0
working around nvidia conv3d memory bug.
Found comfy_kitchen backend cuda: {'available': False, 'disabled': False, 'unavailable_reason': 'libcublasLt.so.13: cannot open shared object file: No such file or directory', 'capabilities': []}
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend triton: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Using sage attention
Python version: 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 20:16:04) [GCC 11.2.0]
ComfyUI version: 0.7.0
ComfyUI frontend version: 1.35.9
[Prompt Server] web root: /home/ace/miniconda3/envs/comfyenv_LTX-2/lib/python3.12/site-packages/comfyui_frontend_package/static
Total VRAM 97248 MB, total RAM 94169 MB
pytorch version: 2.9.0+cu129
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA RTX PRO 6000 Blackwell Workstation Edition : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 89460.0
Unable to parse pyproject.toml due to lack dependency pydantic-settings, please run 'pip install -r requirements.txt': Expected '=' after a key in a key/value pair (at line 39, column 10)

----------------------------------------------------------------------
[ComfyUI-Manager] NOTICE: Legacy backup exists
  - Your old Manager data was backed up to:
      /home/ace/comfy-dev/ltx-2-test/user/__manager/.legacy-manager-backup
  - Please verify and remove it when no longer needed.
----------------------------------------------------------------------

### Loading: ComfyUI-Manager (V3.39)
[ComfyUI-Manager] network_mode: private
[ComfyUI-Manager] ComfyUI per-queue preview override detected (PR #11261). Manager's preview method feature is disabled. Use ComfyUI's --preview-method CLI option or 'Settings > Execution > Live preview method'.
### ComfyUI Version: v0.7.0-22-ge14f3b66 | Released on '2026-01-05'
Using sage attention
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] The private comfyregistry is not yet supported in `network_mode=private`.
[ComfyUI-Manager] All startup tasks have been completed.
(RES4LYF) Init
(RES4LYF) Importing beta samplers.
(RES4LYF) Importing legacy samplers.

Import times for custom nodes:
   0.0 seconds: /home/ace/comfy-dev/ltx-2-test/custom_nodes/websocket_image_save.py
   0.0 seconds: /home/ace/comfy-dev/ltx-2-test/custom_nodes/ComfyMath
   0.0 seconds: /home/ace/comfy-dev/ltx-2-test/custom_nodes/comfyui_essentials
   0.0 seconds: /home/ace/comfy-dev/ltx-2-test/custom_nodes/comfyui-videohelpersuite
   0.0 seconds: /home/ace/comfy-dev/ltx-2-test/custom_nodes/comfyui-kjnodes
   0.0 seconds: /home/ace/comfy-dev/ltx-2-test/custom_nodes/ComfyUI-Manager
   0.1 seconds: /home/ace/comfy-dev/ltx-2-test/custom_nodes/ComfyUI-LTXVideo
   0.2 seconds: /home/ace/comfy-dev/ltx-2-test/custom_nodes/RES4LYF

Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server

To see the GUI go to: http://0.0.0.0:9000
got prompt
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.50, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loaded processor from /home/ace/comfy-dev/_shared_models/text_encoders - enhancement enabled
Some weights of Gemma3ForConditionalGeneration were not initialized from the model checkpoint at /home/ace/comfy-dev/_shared_models/text_encoders and are newly initialized: ['language_model.lm_head.weight', 'language_model.model.embed_tokens.weight', 'language_model.model.layers.0.input_layernorm.weight', 'language_model.model.layers.0.mlp.down_proj.weight', 'language_model.model.layers.0.mlp.gate_proj.weight', 'language_model.model.layers.0.mlp.up_proj.weight', 'language_model.model.layers.0.post_attention_layernorm.weight', 'language_model.model.layers.0.post_feedforward_layernorm.weight', 'language_model.model.layers.0.pre_feedforward_layernorm.weight', 'language_model.model.layers.0.self_attn.k_norm.weight', 'language_model.model.layers.0.self_attn.k_proj.weight', 'language_model.model.layers.0.self_attn.o_proj.weight', 'language_model.model.layers.0.self_attn.q_norm.weight', 'language_model.model.layers.0.self_attn.q_proj.weight', 'language_model.model.layers.0.self_attn.v_proj.weight', 'language_model.model.layers.1.input_layernorm.weight', 'language_model.model.layers.1.mlp.down_proj.weight', 'language_model.model.layers.1.mlp.gate_proj.weight', 'language_model.model.layers.1.mlp.up_proj.weight', 'language_model.model.layers.1.post_attention_layernorm.weight', 'language_model.model.layers.1.post_feedforward_layernorm.weight', 'language_model.model.layers.1.pre_feedforward_layernorm.weight', 'language_model.model.layers.1.self_attn.k_norm.weight', 'language_model.model.layers.1.self_attn.k_proj.weight', 'language_model.model.layers.1.self_attn.o_proj.weight', 'language_model.model.layers.1.self_attn.q_norm.weight', 'language_model.model.layers.1.self_attn.q_proj.weight', 'language_model.model.layers.1.self_attn.v_proj.weight', 'language_model.model.layers.10.input_layernorm.weight', 'language_model.model.layers.10.mlp.down_proj.weight', 'language_model.model.layers.10.mlp.gate_proj.weight', 'language_model.model.layers.10.mlp.up_proj.weight', 'language_model.model.layers.10.post_attention_layernorm.weight', 'language_model.model.layers.10.post_feedforward_layernorm.weight', 'language_model.model.layers.10.pre_feedforward_layernorm.weight', 'language_model.model.layers.10.self_attn.k_norm.weight', 'language_model.model.layers.10.self_attn.k_proj.weight', 'language_model.model.layers.10.self_attn.o_proj.weight', 'language_model.model.layers.10.self_attn.q_norm.weight', 'language_model.model.layers.10.self_attn.q_proj.weight', 'language_model.model.layers.10.self_attn.v_proj.weight', 'language_model.model.layers.11.input_layernorm.weight', 'language_model.model.layers.11.mlp.down_proj.weight', 'language_model.model.layers.11.mlp.gate_proj.weight', 'language_model.model.layers.11.mlp.up_proj.weight', 'language_model.model.layers.11.post_attention_layernorm.weight', 'language_model.model.layers.11.post_feedforward_layernorm.weight', 'language_model.model.layers.11.pre_feedforward_layernorm.weight', 'language_model.model.layers.11.self_attn.k_norm.weight', 'language_model.model.layers.11.self_attn.k_proj.weight', 'language_model.model.layers.11.self_attn.o_proj.weight', 'language_model.model.layers.11.self_attn.q_norm.weight', 'language_model.model.layers.11.self_attn.q_proj.weight', 'language_model.model.layers.11.self_attn.v_proj.weight', 'language_model.model.layers.12.input_layernorm.weight', 'language_model.model.layers.12.mlp.down_proj.weight', 'language_model.model.layers.12.mlp.gate_proj.weight', 'language_model.model.layers.12.mlp.up_proj.weight', 
'language_model.model.layers.12.post_attention_layernorm.weight', 'language_model.model.layers.12.post_feedforward_layernorm.weight', 'language_model.model.layers.12.pre_feedforward_layernorm.weight', 'language_model.model.layers.12.self_attn.k_norm.weight', 'language_model.model.layers.12.self_attn.k_proj.weight', 'language_model.model.layers.12.self_attn.o_proj.weight', 'language_model.model.layers.12.self_attn.q_norm.weight', 'language_model.model.layers.12.self_attn.q_proj.weight', 'language_model.model.layers.12.self_attn.v_proj.weight', 'language_model.model.layers.13.input_layernorm.weight', 'language_model.model.layers.13.mlp.down_proj.weight', 'language_model.model.layers.13.mlp.gate_proj.weight', 'language_model.model.layers.13.mlp.up_proj.weight', 'language_model.model.layers.13.post_attention_layernorm.weight', 'language_model.model.layers.13.post_feedforward_layernorm.weight', 'language_model.model.layers.13.pre_feedforward_layernorm.weight', 'language_model.model.layers.13.self_attn.k_norm.weight', 'language_model.model.layers.13.self_attn.k_proj.weight', 'language_model.model.layers.13.self_attn.o_proj.weight', 'language_model.model.layers.13.self_attn.q_norm.weight', 'language_model.model.layers.13.self_attn.q_proj.weight', 'language_model.model.layers.13.self_attn.v_proj.weight', 'language_model.model.layers.14.input_layernorm.weight', 'language_model.model.layers.14.mlp.down_proj.weight', 'language_model.model.layers.14.mlp.gate_proj.weight', 'language_model.model.layers.14.mlp.up_proj.weight', 'language_model.model.layers.14.post_attention_layernorm.weight', 'language_model.model.layers.14.post_feedforward_layernorm.weight', 'language_model.model.layers.14.pre_feedforward_layernorm.weight', 'language_model.model.layers.14.self_attn.k_norm.weight', 'language_model.model.layers.14.self_attn.k_proj.weight', 'language_model.model.layers.14.self_attn.o_proj.weight', 'language_model.model.layers.14.self_attn.q_norm.weight', 'language_model.model.layers.14.self_attn.q_proj.weight', 'language_model.model.layers.14.self_attn.v_proj.weight', 'language_model.model.layers.15.input_layernorm.weight', 'language_model.model.layers.15.mlp.down_proj.weight', 'language_model.model.layers.15.mlp.gate_proj.weight', 'language_model.model.layers.15.mlp.up_proj.weight', 'language_model.model.layers.15.post_attention_layernorm.weight', 'language_model.model.layers.15.post_feedforward_layernorm.weight', 'language_model.model.layers.15.pre_feedforward_layernorm.weight', 'language_model.model.layers.15.self_attn.k_norm.weight', 'language_model.model.layers.15.self_attn.k_proj.weight', 'language_model.model.layers.15.self_attn.o_proj.weight', 'language_model.model.layers.15.self_attn.q_norm.weight', 'language_model.model.layers.15.self_attn.q_proj.weight', 'language_model.model.layers.15.self_attn.v_proj.weight', 'language_model.model.layers.16.input_layernorm.weight', 'language_model.model.layers.16.mlp.down_proj.weight', 'language_model.model.layers.16.mlp.gate_proj.weight', 'language_model.model.layers.16.mlp.up_proj.weight', 'language_model.model.layers.16.post_attention_layernorm.weight', 'language_model.model.layers.16.post_feedforward_layernorm.weight', 'language_model.model.layers.16.pre_feedforward_layernorm.weight', 'language_model.model.layers.16.self_attn.k_norm.weight', 'language_model.model.layers.16.self_attn.k_proj.weight', 'language_model.model.layers.16.self_attn.o_proj.weight', 'language_model.model.layers.16.self_attn.q_norm.weight', 
'language_model.model.layers.16.self_attn.q_proj.weight', 'language_model.model.layers.16.self_attn.v_proj.weight', 'language_model.model.layers.17.input_layernorm.weight', 'language_model.model.layers.17.mlp.down_proj.weight', 'language_model.model.layers.17.mlp.gate_proj.weight', 'language_model.model.layers.17.mlp.up_proj.weight', 'language_model.model.layers.17.post_attention_layernorm.weight', 'language_model.model.layers.17.post_feedforward_layernorm.weight', 'language_model.model.layers.17.pre_feedforward_layernorm.weight', 'language_model.model.layers.17.self_attn.k_norm.weight', 'language_model.model.layers.17.self_attn.k_proj.weight', 'language_model.model.layers.17.self_attn.o_proj.weight', 'language_model.model.layers.17.self_attn.q_norm.weight', 'language_model.model.layers.17.self_attn.q_proj.weight', 'language_model.model.layers.17.self_attn.v_proj.weight', 'language_model.model.layers.18.input_layernorm.weight', 'language_model.model.layers.18.mlp.down_proj.weight', 'language_model.model.layers.18.mlp.gate_proj.weight', 'language_model.model.layers.18.mlp.up_proj.weight', 'language_model.model.layers.18.post_attention_layernorm.weight', 'language_model.model.layers.18.post_feedforward_layernorm.weight', 'language_model.model.layers.18.pre_feedforward_layernorm.weight', 'language_model.model.layers.18.self_attn.k_norm.weight', 'language_model.model.layers.18.self_attn.k_proj.weight', 'language_model.model.layers.18.self_attn.o_proj.weight', 'language_model.model.layers.18.self_attn.q_norm.weight', 'language_model.model.layers.18.self_attn.q_proj.weight', 'language_model.model.layers.18.self_attn.v_proj.weight', 'language_model.model.layers.19.input_layernorm.weight', 'language_model.model.layers.19.mlp.down_proj.weight', 'language_model.model.layers.19.mlp.gate_proj.weight', 'language_model.model.layers.19.mlp.up_proj.weight', 'language_model.model.layers.19.post_attention_layernorm.weight', 'language_model.model.layers.19.post_feedforward_layernorm.weight', 'language_model.model.layers.19.pre_feedforward_layernorm.weight', 'language_model.model.layers.19.self_attn.k_norm.weight', 'language_model.model.layers.19.self_attn.k_proj.weight', 'language_model.model.layers.19.self_attn.o_proj.weight', 'language_model.model.layers.19.self_attn.q_norm.weight', 'language_model.model.layers.19.self_attn.q_proj.weight', 'language_model.model.layers.19.self_attn.v_proj.weight', 'language_model.model.layers.2.input_layernorm.weight', 'language_model.model.layers.2.mlp.down_proj.weight', 'language_model.model.layers.2.mlp.gate_proj.weight', 'language_model.model.layers.2.mlp.up_proj.weight', 'language_model.model.layers.2.post_attention_layernorm.weight', 'language_model.model.layers.2.post_feedforward_layernorm.weight', 'language_model.model.layers.2.pre_feedforward_layernorm.weight', 'language_model.model.layers.2.self_attn.k_norm.weight', 'language_model.model.layers.2.self_attn.k_proj.weight', 'language_model.model.layers.2.self_attn.o_proj.weight', 'language_model.model.layers.2.self_attn.q_norm.weight', 'language_model.model.layers.2.self_attn.q_proj.weight', 'language_model.model.layers.2.self_attn.v_proj.weight', 'language_model.model.layers.20.input_layernorm.weight', 'language_model.model.layers.20.mlp.down_proj.weight', 'language_model.model.layers.20.mlp.gate_proj.weight', 'language_model.model.layers.20.mlp.up_proj.weight', 'language_model.model.layers.20.post_attention_layernorm.weight', 'language_model.model.layers.20.post_feedforward_layernorm.weight', 
'language_model.model.layers.20.pre_feedforward_layernorm.weight', 'language_model.model.layers.20.self_attn.k_norm.weight', 'language_model.model.layers.20.self_attn.k_proj.weight', 'language_model.model.layers.20.self_attn.o_proj.weight', 'language_model.model.layers.20.self_attn.q_norm.weight', 'language_model.model.layers.20.self_attn.q_proj.weight', 'language_model.model.layers.20.self_attn.v_proj.weight', 'language_model.model.layers.21.input_layernorm.weight', 'language_model.model.layers.21.mlp.down_proj.weight', 'language_model.model.layers.21.mlp.gate_proj.weight', 'language_model.model.layers.21.mlp.up_proj.weight', 'language_model.model.layers.21.post_attention_layernorm.weight', 'language_model.model.layers.21.post_feedforward_layernorm.weight', 'language_model.model.layers.21.pre_feedforward_layernorm.weight', 'language_model.model.layers.21.self_attn.k_norm.weight', 'language_model.model.layers.21.self_attn.k_proj.weight', 'language_model.model.layers.21.self_attn.o_proj.weight', 'language_model.model.layers.21.self_attn.q_norm.weight', 'language_model.model.layers.21.self_attn.q_proj.weight', 'language_model.model.layers.21.self_attn.v_proj.weight', 'language_model.model.layers.22.input_layernorm.weight', 'language_model.model.layers.22.mlp.down_proj.weight', 'language_model.model.layers.22.mlp.gate_proj.weight', 'language_model.model.layers.22.mlp.up_proj.weight', 'language_model.model.layers.22.post_attention_layernorm.weight', 'language_model.model.layers.22.post_feedforward_layernorm.weight', 'language_model.model.layers.22.pre_feedforward_layernorm.weight', 'language_model.model.layers.22.self_attn.k_norm.weight', 'language_model.model.layers.22.self_attn.k_proj.weight', 'language_model.model.layers.22.self_attn.o_proj.weight', 'language_model.model.layers.22.self_attn.q_norm.weight', 'language_model.model.layers.22.self_attn.q_proj.weight', 'language_model.model.layers.22.self_attn.v_proj.weight', 'language_model.model.layers.23.input_layernorm.weight', 'language_model.model.layers.23.mlp.down_proj.weight', 'language_model.model.layers.23.mlp.gate_proj.weight', 'language_model.model.layers.23.mlp.up_proj.weight', 'language_model.model.layers.23.post_attention_layernorm.weight', 'language_model.model.layers.23.post_feedforward_layernorm.weight', 'language_model.model.layers.23.pre_feedforward_layernorm.weight', 'language_model.model.layers.23.self_attn.k_norm.weight', 'language_model.model.layers.23.self_attn.k_proj.weight', 'language_model.model.layers.23.self_attn.o_proj.weight', 'language_model.model.layers.23.self_attn.q_norm.weight', 'language_model.model.layers.23.self_attn.q_proj.weight', 'language_model.model.layers.23.self_attn.v_proj.weight', 'language_model.model.layers.24.input_layernorm.weight', 'language_model.model.layers.24.mlp.down_proj.weight', 'language_model.model.layers.24.mlp.gate_proj.weight', 'language_model.model.layers.24.mlp.up_proj.weight', 'language_model.model.layers.24.post_attention_layernorm.weight', 'language_model.model.layers.24.post_feedforward_layernorm.weight', 'language_model.model.layers.24.pre_feedforward_layernorm.weight', 'language_model.model.layers.24.self_attn.k_norm.weight', 'language_model.model.layers.24.self_attn.k_proj.weight', 'language_model.model.layers.24.self_attn.o_proj.weight', 'language_model.model.layers.24.self_attn.q_norm.weight', 'language_model.model.layers.24.self_attn.q_proj.weight', 'language_model.model.layers.24.self_attn.v_proj.weight', 'language_model.model.layers.25.input_layernorm.weight', 
'language_model.model.layers.25.mlp.down_proj.weight', 'language_model.model.layers.25.mlp.gate_proj.weight', 'language_model.model.layers.25.mlp.up_proj.weight', 'language_model.model.layers.25.post_attention_layernorm.weight', 'language_model.model.layers.25.post_feedforward_layernorm.weight', 'language_model.model.layers.25.pre_feedforward_layernorm.weight', 'language_model.model.layers.25.self_attn.k_norm.weight', 'language_model.model.layers.25.self_attn.k_proj.weight', 'language_model.model.layers.25.self_attn.o_proj.weight', 'language_model.model.layers.25.self_attn.q_norm.weight', 'language_model.model.layers.25.self_attn.q_proj.weight', 'language_model.model.layers.25.self_attn.v_proj.weight', 'language_model.model.layers.3.input_layernorm.weight', 'language_model.model.layers.3.mlp.down_proj.weight', 'language_model.model.layers.3.mlp.gate_proj.weight', 'language_model.model.layers.3.mlp.up_proj.weight', 'language_model.model.layers.3.post_attention_layernorm.weight', 'language_model.model.layers.3.post_feedforward_layernorm.weight', 'language_model.model.layers.3.pre_feedforward_layernorm.weight', 'language_model.model.layers.3.self_attn.k_norm.weight', 'language_model.model.layers.3.self_attn.k_proj.weight', 'language_model.model.layers.3.self_attn.o_proj.weight', 'language_model.model.layers.3.self_attn.q_norm.weight', 'language_model.model.layers.3.self_attn.q_proj.weight', 'language_model.model.layers.3.self_attn.v_proj.weight', 'language_model.model.layers.4.input_layernorm.weight', 'language_model.model.layers.4.mlp.down_proj.weight', 'language_model.model.layers.4.mlp.gate_proj.weight', 'language_model.model.layers.4.mlp.up_proj.weight', 'language_model.model.layers.4.post_attention_layernorm.weight', 'language_model.model.layers.4.post_feedforward_layernorm.weight', 'language_model.model.layers.4.pre_feedforward_layernorm.weight', 'language_model.model.layers.4.self_attn.k_norm.weight', 'language_model.model.layers.4.self_attn.k_proj.weight', 'language_model.model.layers.4.self_attn.o_proj.weight', 'language_model.model.layers.4.self_attn.q_norm.weight', 'language_model.model.layers.4.self_attn.q_proj.weight', 'language_model.model.layers.4.self_attn.v_proj.weight', 'language_model.model.layers.5.input_layernorm.weight', 'language_model.model.layers.5.mlp.down_proj.weight', 'language_model.model.layers.5.mlp.gate_proj.weight', 'language_model.model.layers.5.mlp.up_proj.weight', 'language_model.model.layers.5.post_attention_layernorm.weight', 'language_model.model.layers.5.post_feedforward_layernorm.weight', 'language_model.model.layers.5.pre_feedforward_layernorm.weight', 'language_model.model.layers.5.self_attn.k_norm.weight', 'language_model.model.layers.5.self_attn.k_proj.weight', 'language_model.model.layers.5.self_attn.o_proj.weight', 'language_model.model.layers.5.self_attn.q_norm.weight', 'language_model.model.layers.5.self_attn.q_proj.weight', 'language_model.model.layers.5.self_attn.v_proj.weight', 'language_model.model.layers.6.input_layernorm.weight', 'language_model.model.layers.6.mlp.down_proj.weight', 'language_model.model.layers.6.mlp.gate_proj.weight', 'language_model.model.layers.6.mlp.up_proj.weight', 'language_model.model.layers.6.post_attention_layernorm.weight', 'language_model.model.layers.6.post_feedforward_layernorm.weight', 'language_model.model.layers.6.pre_feedforward_layernorm.weight', 'language_model.model.layers.6.self_attn.k_norm.weight', 'language_model.model.layers.6.self_attn.k_proj.weight', 
'language_model.model.layers.6.self_attn.o_proj.weight', 'language_model.model.layers.6.self_attn.q_norm.weight', 'language_model.model.layers.6.self_attn.q_proj.weight', 'language_model.model.layers.6.self_attn.v_proj.weight', 'language_model.model.layers.7.input_layernorm.weight', 'language_model.model.layers.7.mlp.down_proj.weight', 'language_model.model.layers.7.mlp.gate_proj.weight', 'language_model.model.layers.7.mlp.up_proj.weight', 'language_model.model.layers.7.post_attention_layernorm.weight', 'language_model.model.layers.7.post_feedforward_layernorm.weight', 'language_model.model.layers.7.pre_feedforward_layernorm.weight', 'language_model.model.layers.7.self_attn.k_norm.weight', 'language_model.model.layers.7.self_attn.k_proj.weight', 'language_model.model.layers.7.self_attn.o_proj.weight', 'language_model.model.layers.7.self_attn.q_norm.weight', 'language_model.model.layers.7.self_attn.q_proj.weight', 'language_model.model.layers.7.self_attn.v_proj.weight', 'language_model.model.layers.8.input_layernorm.weight', 'language_model.model.layers.8.mlp.down_proj.weight', 'language_model.model.layers.8.mlp.gate_proj.weight', 'language_model.model.layers.8.mlp.up_proj.weight', 'language_model.model.layers.8.post_attention_layernorm.weight', 'language_model.model.layers.8.post_feedforward_layernorm.weight', 'language_model.model.layers.8.pre_feedforward_layernorm.weight', 'language_model.model.layers.8.self_attn.k_norm.weight', 'language_model.model.layers.8.self_attn.k_proj.weight', 'language_model.model.layers.8.self_attn.o_proj.weight', 'language_model.model.layers.8.self_attn.q_norm.weight', 'language_model.model.layers.8.self_attn.q_proj.weight', 'language_model.model.layers.8.self_attn.v_proj.weight', 'language_model.model.layers.9.input_layernorm.weight', 'language_model.model.layers.9.mlp.down_proj.weight', 'language_model.model.layers.9.mlp.gate_proj.weight', 'language_model.model.layers.9.mlp.up_proj.weight', 'language_model.model.layers.9.post_attention_layernorm.weight', 'language_model.model.layers.9.post_feedforward_layernorm.weight', 'language_model.model.layers.9.pre_feedforward_layernorm.weight', 'language_model.model.layers.9.self_attn.k_norm.weight', 'language_model.model.layers.9.self_attn.k_proj.weight', 'language_model.model.layers.9.self_attn.o_proj.weight', 'language_model.model.layers.9.self_attn.q_norm.weight', 'language_model.model.layers.9.self_attn.q_proj.weight', 'language_model.model.layers.9.self_attn.v_proj.weight', 'language_model.model.norm.weight', 'multi_modal_projector.mm_input_projection_weight', 'multi_modal_projector.mm_soft_emb_norm.weight', 'vision_tower.vision_model.embeddings.patch_embedding.bias', 'vision_tower.vision_model.embeddings.patch_embedding.weight', 'vision_tower.vision_model.embeddings.position_embedding.weight', 'vision_tower.vision_model.encoder.layers.0.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.0.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.0.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.0.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.0.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.0.mlp.fc1.weight', 'vision_tower.vision_model.encoder.layers.0.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.0.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.0.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.0.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.0.self_attn.out_proj.bias', 
'vision_tower.vision_model.encoder.layers.0.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.0.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.0.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.0.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.0.self_attn.v_proj.weight', 'vision_tower.vision_model.encoder.layers.1.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.1.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.1.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.1.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.1.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.1.mlp.fc1.weight', 'vision_tower.vision_model.encoder.layers.1.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.1.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.1.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.1.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.1.self_attn.out_proj.bias', 'vision_tower.vision_model.encoder.layers.1.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.1.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.1.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.1.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.1.self_attn.v_proj.weight', 'vision_tower.vision_model.encoder.layers.10.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.10.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.10.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.10.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.10.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.10.mlp.fc1.weight', 'vision_tower.vision_model.encoder.layers.10.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.10.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.10.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.10.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.10.self_attn.out_proj.bias', 'vision_tower.vision_model.encoder.layers.10.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.10.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.10.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.10.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.10.self_attn.v_proj.weight', 'vision_tower.vision_model.encoder.layers.11.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.11.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.11.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.11.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.11.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.11.mlp.fc1.weight', 'vision_tower.vision_model.encoder.layers.11.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.11.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.11.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.11.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.11.self_attn.out_proj.bias', 'vision_tower.vision_model.encoder.layers.11.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.11.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.11.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.11.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.11.self_attn.v_proj.weight', 
'vision_tower.vision_model.encoder.layers.2.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.2.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.2.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.2.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.2.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.2.mlp.fc1.weight', 'vision_tower.vision_model.encoder.layers.2.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.2.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.2.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.2.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.2.self_attn.out_proj.bias', 'vision_tower.vision_model.encoder.layers.2.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.2.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.2.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.2.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.2.self_attn.v_proj.weight', 'vision_tower.vision_model.encoder.layers.3.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.3.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.3.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.3.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.3.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.3.mlp.fc1.weight', 'vision_tower.vision_model.encoder.layers.3.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.3.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.3.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.3.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.3.self_attn.out_proj.bias', 'vision_tower.vision_model.encoder.layers.3.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.3.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.3.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.3.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.3.self_attn.v_proj.weight', 'vision_tower.vision_model.encoder.layers.4.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.4.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.4.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.4.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.4.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.4.mlp.fc1.weight', 'vision_tower.vision_model.encoder.layers.4.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.4.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.4.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.4.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.4.self_attn.out_proj.bias', 'vision_tower.vision_model.encoder.layers.4.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.4.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.4.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.4.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.4.self_attn.v_proj.weight', 'vision_tower.vision_model.encoder.layers.5.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.5.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.5.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.5.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.5.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.5.mlp.fc1.weight', 
'vision_tower.vision_model.encoder.layers.5.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.5.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.5.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.5.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.5.self_attn.out_proj.bias', 'vision_tower.vision_model.encoder.layers.5.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.5.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.5.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.5.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.5.self_attn.v_proj.weight', 'vision_tower.vision_model.encoder.layers.6.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.6.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.6.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.6.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.6.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.6.mlp.fc1.weight', 'vision_tower.vision_model.encoder.layers.6.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.6.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.6.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.6.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.6.self_attn.out_proj.bias', 'vision_tower.vision_model.encoder.layers.6.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.6.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.6.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.6.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.6.self_attn.v_proj.weight', 'vision_tower.vision_model.encoder.layers.7.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.7.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.7.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.7.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.7.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.7.mlp.fc1.weight', 'vision_tower.vision_model.encoder.layers.7.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.7.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.7.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.7.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.7.self_attn.out_proj.bias', 'vision_tower.vision_model.encoder.layers.7.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.7.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.7.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.7.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.7.self_attn.v_proj.weight', 'vision_tower.vision_model.encoder.layers.8.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.8.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.8.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.8.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.8.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.8.mlp.fc1.weight', 'vision_tower.vision_model.encoder.layers.8.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.8.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.8.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.8.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.8.self_attn.out_proj.bias', 
'vision_tower.vision_model.encoder.layers.8.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.8.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.8.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.8.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.8.self_attn.v_proj.weight', 'vision_tower.vision_model.encoder.layers.9.layer_norm1.bias', 'vision_tower.vision_model.encoder.layers.9.layer_norm1.weight', 'vision_tower.vision_model.encoder.layers.9.layer_norm2.bias', 'vision_tower.vision_model.encoder.layers.9.layer_norm2.weight', 'vision_tower.vision_model.encoder.layers.9.mlp.fc1.bias', 'vision_tower.vision_model.encoder.layers.9.mlp.fc1.weight', 'vision_tower.vision_model.encoder.layers.9.mlp.fc2.bias', 'vision_tower.vision_model.encoder.layers.9.mlp.fc2.weight', 'vision_tower.vision_model.encoder.layers.9.self_attn.k_proj.bias', 'vision_tower.vision_model.encoder.layers.9.self_attn.k_proj.weight', 'vision_tower.vision_model.encoder.layers.9.self_attn.out_proj.bias', 'vision_tower.vision_model.encoder.layers.9.self_attn.out_proj.weight', 'vision_tower.vision_model.encoder.layers.9.self_attn.q_proj.bias', 'vision_tower.vision_model.encoder.layers.9.self_attn.q_proj.weight', 'vision_tower.vision_model.encoder.layers.9.self_attn.v_proj.bias', 'vision_tower.vision_model.encoder.layers.9.self_attn.v_proj.weight', 'vision_tower.vision_model.head.attention.in_proj_bias', 'vision_tower.vision_model.head.attention.in_proj_weight', 'vision_tower.vision_model.head.attention.out_proj.bias', 'vision_tower.vision_model.head.attention.out_proj.weight', 'vision_tower.vision_model.head.layernorm.bias', 'vision_tower.vision_model.head.layernorm.weight', 'vision_tower.vision_model.head.mlp.fc1.bias', 'vision_tower.vision_model.head.mlp.fc1.weight', 'vision_tower.vision_model.head.mlp.fc2.bias', 'vision_tower.vision_model.head.mlp.fc2.weight', 'vision_tower.vision_model.head.probe', 'vision_tower.vision_model.post_layernorm.bias', 'vision_tower.vision_model.post_layernorm.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load _LTXVGemmaTextEncoderModel
loaded completely; 95455.99 MB usable, 9068.28 MB loaded, full load: True
Token indices sequence length is longer than the specified maximum sequence length for this model (1077 > 1024). Running this sequence through the model will result in indexing errors
!!! Exception during processing !!! mat1 and mat2 shapes cannot be multiplied (1024x62208 and 188160x3840)
Traceback (most recent call last):
  File "/home/ace/comfy-dev/ltx-2-test/execution.py", line 518, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/comfy-dev/ltx-2-test/execution.py", line 329, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/comfy-dev/ltx-2-test/execution.py", line 303, in _async_map_node_over_list
    await process_inputs(input_dict, i)
  File "/home/ace/comfy-dev/ltx-2-test/execution.py", line 291, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^
  File "/home/ace/comfy-dev/ltx-2-test/nodes.py", line 77, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/comfy-dev/ltx-2-test/comfy/sd.py", line 207, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/comfy-dev/ltx-2-test/comfy/sd.py", line 271, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/comfy-dev/ltx-2-test/custom_nodes/ComfyUI-LTXVideo/gemma_encoder.py", line 321, in encode_token_weights
    encoded_input = self(input_ids, attention_mask, padding_side="left")
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/miniconda3/envs/comfyenv_LTX-2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/miniconda3/envs/comfyenv_LTX-2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/comfy-dev/ltx-2-test/custom_nodes/ComfyUI-LTXVideo/gemma_encoder.py", line 304, in forward
    projected = self.feature_extractor_linear(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/miniconda3/envs/comfyenv_LTX-2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/miniconda3/envs/comfyenv_LTX-2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/comfy-dev/ltx-2-test/custom_nodes/ComfyUI-LTXVideo/gemma_encoder.py", line 189, in forward
    return self.aggregate_embed(x)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/miniconda3/envs/comfyenv_LTX-2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/miniconda3/envs/comfyenv_LTX-2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ace/miniconda3/envs/comfyenv_LTX-2/lib/python3.12/site-packages/torch/nn/modules/linear.py", line 134, in forward
    return F.linear(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1024x62208 and 188160x3840)

Prompt executed in 42.54 seconds
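
For what it's worth, the failing call at the bottom of the trace is just a plain torch.nn.Linear whose expected input width doesn't match what the encoder produced. Here's a minimal sketch that reproduces the same message (the dimensions come from the error above; the layer is a hypothetical stand-in for feature_extractor_linear, not the actual node code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the projection inside gemma_encoder.py:
# the layer expects 188160-wide features, but the encoder only produced
# 62208-wide ones. Note the weight alone is roughly 2.9 GB in fp32.
proj = nn.Linear(in_features=188160, out_features=3840)
x = torch.randn(1024, 62208)

proj(x)
# RuntimeError: mat1 and mat2 shapes cannot be multiplied
# (1024x62208 and 188160x3840)
```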

Thanks in advance!

ThePowerOfMonkeys avatar Jan 07 '26 05:01 ThePowerOfMonkeys

I'm running into the same problem.

Namm-star avatar Jan 07 '26 13:01 Namm-star

I was only able to get it to work with the unmerged, unquantized version of the text encoder linked in the workflow.

I placed that inside my text_encoders folder, with a folder name that the Gemma 3 model loader expected.

~/git/comfy/ComfyUI/models/text_encoders$ ls gemma-3-12b-it-qat-q4_0-unquantized -l
total 23842012
-rw-rw-r-- 1 cruser cruser         35 Jan  6 12:09 added_tokens.json
-rw-rw-r-- 1 cruser cruser       1615 Jan  6 12:09 chat_template.json
-rw-rw-r-- 1 cruser cruser       1611 Jan  6 12:09 config.json
-rw-rw-r-- 1 cruser cruser        173 Jan  6 12:09 generation_config.json
-rw-rw-r-- 1 cruser cruser 4979902192 Jan  6 12:09 model-00001-of-00005.safetensors
-rw-rw-r-- 1 cruser cruser 4931296592 Jan  6 12:09 model-00002-of-00005.safetensors
-rw-rw-r-- 1 cruser cruser 4931296656 Jan  6 12:09 model-00003-of-00005.safetensors
-rw-rw-r-- 1 cruser cruser 4931296656 Jan  6 12:09 model-00004-of-00005.safetensors
-rw-rw-r-- 1 cruser cruser 4601000928 Jan  6 12:09 model-00005-of-00005.safetensors
-rw-rw-r-- 1 cruser cruser     108605 Jan  6 12:09 model.safetensors.index.json
-rw-rw-r-- 1 cruser cruser        570 Jan  6 12:09 preprocessor_config.json
-rw-rw-r-- 1 cruser cruser         70 Jan  6 12:09 processor_config.json
-rw-rw-r-- 1 cruser cruser      22784 Jan  6 12:09 README.md
-rw-rw-r-- 1 cruser cruser        662 Jan  6 12:09 special_tokens_map.json
-rw-rw-r-- 1 cruser cruser    1157001 Jan  6 12:09 tokenizer_config.json
-rw-rw-r-- 1 cruser cruser   33384570 Jan  6 12:09 tokenizer.json
-rw-rw-r-- 1 cruser cruser    4689074 Jan  6 12:09 tokenizer.model

If I tried to merge the shards together or use another text encoder, I got the same mat shape errors you got.
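
As a rough way to confirm the folder itself is complete before pointing ComfyUI at it, you can try loading it directly with transformers outside ComfyUI (a sketch only; the folder name matches my listing above, and the class name comes from the warning in the startup log):

```python
from transformers import Gemma3ForConditionalGeneration

model = Gemma3ForConditionalGeneration.from_pretrained(
    "models/text_encoders/gemma-3-12b-it-qat-q4_0-unquantized",
    torch_dtype="auto",  # keep the checkpoint's native dtype instead of fp32
)
# A healthy load shows "Loading checkpoint shards: ... 5/5" and no
# "newly initialized" warning; if that warning appears, only part of the
# checkpoint was found.
```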

(screenshot attached)

crosson avatar Jan 07 '26 19:01 crosson

I also get the same error!

ComfyUI Error Report

Error Details

  • Node ID: 5226
  • Node Type: CLIPTextEncode
  • Exception Type: RuntimeError
  • Exception Message: mat1 and mat2 shapes cannot be multiplied (1024x62208 and 188160x3840)

Stack Trace

  File "C:\Users\User\Documents\comfy\ComfyUI\execution.py", line 518, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                                                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\Users\User\Documents\comfy\ComfyUI\execution.py", line 329, in get_output_data
    return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\Users\User\Documents\comfy\ComfyUI\execution.py", line 303, in _async_map_node_over_list
    await process_inputs(input_dict, i)

  File "C:\Users\User\Documents\comfy\ComfyUI\execution.py", line 291, in process_inputs
    result = f(**inputs)
             ^^^^^^^^^^^

  File "C:\Users\User\Documents\comfy\ComfyUI\nodes.py", line 77, in encode

honeynadger007 avatar Jan 07 '26 20:01 honeynadger007

You should use the Gemma model from here with our flows. We are working on allowing the prompt enhancer to work with Comfy's Gemma loader so it won't be needed.

michaellightricks avatar Jan 08 '26 17:01 michaellightricks

> You should use the Gemma model from here with our flows. We are working on allowing the prompt enhancer to work with Comfy's Gemma loader so it won't be needed.

That's the model I used (I cloned the entire directory and loaded 'model-00001-of-00005.safetensors' in ComfyUI), and that's the one that gives the error! Btw, the workflow I used is the one LTX provided.

honeynadger007 avatar Jan 08 '26 18:01 honeynadger007

Use this instead:

https://huggingface.co/GitMylo/LTX-2-comfy_gemma_fp8_e4m3fn/blob/main/gemma_3_12B_it_fp8_e4m3fn.safetensors

Also, instead of the LTX Gemma 3 Model Loader, use the LTX Audio Text Encoder Loader node to load the model, and bypass/disable the enhancer node; that will fix the issue.

The downside is that you cannot use the enhancer.

extra2AB avatar Jan 09 '26 16:01 extra2AB

Is there any way to run this WITH the enhancer? It's one of the main reasons I wanted to use the LTX workflow vs the ComfyUI-supplied workflow.

For reference:

  1. I'm using the sharded Gemma models linked above (I just did a git pull on the HF repo and it's up to date)
  2. I'm using the LTX-Video workflow (again, I just refreshed the repo via ComfyUI-Manager)

Happy to provide screenshots etc. if that helps.

The core issue, I think, is that it states this:

Some weights of Gemma3ForConditionalGeneration were not initialized from the model checkpoint at /home/ace/comfy-dev/_shared_models/text_encoders and are newly initialized:

This is followed by a dump of a lot of layer names, and then the mat1/mat2 error.

It's like the linked model isn't being fully read in... The ONLY thing I can think of is that I've got the sharded model as a subfolder in my text_encoders folder, which is mapped via the extra_model_paths.yaml file?
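
In case it helps with debugging, one quick check is whether every shard referenced by the index file is actually visible from the folder the loader resolves. A hypothetical sketch (the path is my shared-models location; adjust as needed):

```python
import json
import os

# Path under my extra_model_paths.yaml mapping; adjust for your setup.
model_dir = "/home/ace/comfy-dev/_shared_models/text_encoders/gemma-3-12b-it-qat-q4_0-unquantized"

with open(os.path.join(model_dir, "model.safetensors.index.json")) as f:
    index = json.load(f)

# The index maps each tensor name to the shard file that contains it.
shards = sorted(set(index["weight_map"].values()))
missing = [s for s in shards if not os.path.exists(os.path.join(model_dir, s))]
print(f"{len(shards)} shards referenced; missing: {missing or 'none'}")
```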

ThePowerOfMonkeys avatar Jan 10 '26 05:01 ThePowerOfMonkeys

Right, well F*** My Life... For the LTX-Video workflow, it comes down to the fact that in my configuration the Gemma 3 model is being read via the extra_model_paths.yaml file, i.e. a shared models path. Basically, it doesn't load all the model shards.

I've just moved the gemma-3-12b-it-qat-q4_0-unquantized folder into my ComfyUI models/text_encoders folder LOCALLY, instead of using the _shared_models folder structure I have for everything else.

Straight away it said "Loading checkpoint shards" and took all 5 files in happily.

Not sure where this bug sits, but it would appear that the Gemma 3 model loader doesn't like having sharded files accessed via an extra_model_paths.yaml file.

ThePowerOfMonkeys avatar Jan 10 '26 05:01 ThePowerOfMonkeys