
[Bug]: RuntimeError in CLIPTextEncode

Open Gamebropro opened this issue 1 year ago • 1 comment

ComfyUI Error Report

Error Details

  • Node Type: CLIPTextEncode
  • Exception Type: RuntimeError
  • Exception Message: size mismatch, got input (77), mat (77x768), vec (1)
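This error format comes from torch's matrix-vector product check: the 77×768 token matrix is being multiplied by a weight that effectively has one element instead of 768, which can happen when the CLIP-L slot is filled with weights from a different encoder (see the discussion below). A minimal sketch that reproduces the same message; the shapes are taken from the report, and the scalar weight is an artificial stand-in for whatever failed to load:

```python
import torch

# 77 tokens x 768 hidden dims: the standard CLIP-L prompt embedding shape.
x = torch.randn(77, 768)
# Stand-in for a projection weight that did not load or dequantize correctly.
w = torch.randn(1)
# A 2-D @ 1-D matmul dispatches to a matrix-vector product and raises:
# RuntimeError: size mismatch, got input (77), mat (77x768), vec (1)
torch.matmul(x, w)
```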

Stack Trace

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\nodes.py", line 65, in encode
    output = clip.encode_from_tokens(tokens, return_pooled=True, return_dict=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 126, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\flux.py", line 58, in encode_token_weights
    l_out, l_pooled = self.clip_l.encode_token_weights(token_weight_pairs_l)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 41, in encode_token_weights
    o = self.encode(to_encode)
        ^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 229, in encode
    return self(tokens)
           ^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 201, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 136, in forward
    x = self.text_model(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 112, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 69, in forward
    x = l(x, mask, optimized_attention)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 50, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 17, in forward
    q = self.q_proj(x)
        ^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\SD WEB UI\api\NEW COMFY\ComfyUI_windows_portable_nvidia0.1.2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\ops.py", line 79, in forward
    x = self._forward_operation(x, weight, bias)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

System Information

  • OS: nt
  • Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
  • Embedded Python: true

Devices

  • Name: cuda:0 NVIDIA GeForce GTX 1650 SUPER : native
    • Type: cuda
    • VRAM Total: 4294508544
    • VRAM Free: 3439735296
    • Torch VRAM Total: 2097152
    • Torch VRAM Free: 1978880

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":12,"last_link_id":13,"nodes":[{"id":8,"type":"VAEDecode","pos":[1209,188],"size":{"0":210,"1":46},"flags":{},"order":7,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":7},{"name":"vae","type":"VAE","link":13}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[9],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"}},{"id":9,"type":"SaveImage","pos":[1451,189],"size":{"0":390.59307861328125,"1":295.9659729003906},"flags":{},"order":8,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9}],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["ComfyUI"]},{"id":7,"type":"CLIPTextEncode","pos":[340,395],"size":{"0":427.3594970703125,"1":76},"flags":{},"order":5,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":12}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["text, watermark"]},{"id":10,"type":"UnetLoaderGGUF","pos":[450,60],"size":{"0":315,"1":58},"flags":{},"order":0,"mode":0,"outputs":[{"name":"MODEL","type":"MODEL","links":[10],"shape":3}],"properties":{"Node name for S&R":"UnetLoaderGGUF"},"widgets_values":["flux1-dev-Q2_K.gguf"]},{"id":3,"type":"KSampler","pos":[863,186],"size":{"0":315,"1":262},"flags":{},"order":6,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":10},{"name":"positive","type":"CONDITIONING","link":4},{"name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","type":"LATENT","link":2}],"outputs":[{"name":"LATENT","type":"LATENT","links":[7],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[303629768447431,"randomize",10,1,"euler","normal",1]},{"id":12,"type":"VAELoader","pos":[862,504],"size":{"0":315,"1":58},"flags":{},"order":1,"mode":0,"outputs":[{"name":"VAE","type":"VAE","links":[13],"shape":3}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["taef1"]},{"id":11,"type":"DualCLIPLoaderGGUF","pos":[-40,296],"size":{"0":315,"1":106},"flags":{},"order":2,"mode":0,"outputs":[{"name":"CLIP","type":"CLIP","links":[11,12],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"DualCLIPLoaderGGUF"},"widgets_values":["t5-v1_1-xxl-encoder-Q3_K_S.gguf","t5-v1_1-xxl-encoder-Q3_K_S.gguf","flux"]},{"id":6,"type":"CLIPTextEncode","pos":[342.15496826171875,190],"size":{"0":421.8337707519531,"1":133.60507202148438},"flags":{},"order":4,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":11}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[4],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["full body of a happy woman "]},{"id":5,"type":"EmptyLatentImage","pos":[449,543],"size":{"0":315,"1":106},"flags":{},"order":3,"mode":0,"outputs":[{"name":"LATENT","type":"LATENT","links":[2],"slot_index":0}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[512,744,1]}],"links":[[2,5,0,3,3,"LATENT"],[4,6,0,3,1,"CONDITIONING"],[6,7,0,3,2,"CONDITIONING"],[7,3,0,8,0,"LATENT"],[9,8,0,9,0,"IMAGE"],[10,10,0,3,0,"MODEL"],[11,11,0,6,0,"CLIP"],[12,11,0,7,0,"CLIP"],[13,12,0,8,1,"VAE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.8954302432552514,"offset":[-34.0074933892377,131.6207596021677]}},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)

Gamebropro avatar Aug 26 '24 06:08 Gamebropro

Facing the same issue while using FLUX with a LoRA.

ohheyitskartik avatar Aug 26 '24 20:08 ohheyitskartik

I had the same issue while using Flux with a LoRA and fixed it by loading "clip-vit-large-patch14".


Here is the link: https://huggingface.co/openai/clip-vit-large-patch14/tree/main

Download the "model.safetensors" file, save it in the "clip" folder of your ComfyUI models directory, and load it with the CLIPLoader node.

It worked for me, but I'm not sure whether our cases are the same.
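If you prefer to script the download, here is a minimal sketch using huggingface_hub; the target path assumes the default ComfyUI layout, so adjust it to your install:

```python
from huggingface_hub import hf_hub_download

# Fetch the CLIP-L text encoder weights into ComfyUI's clip model folder.
# "ComfyUI/models/clip" is the default location; change it if yours differs.
hf_hub_download(
    repo_id="openai/clip-vit-large-patch14",
    filename="model.safetensors",
    local_dir="ComfyUI/models/clip",
)
```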

2002yoons avatar Aug 27 '24 07:08 2002yoons

You should not use two T5-XXL models in DualCLIPLoader. FLUX requires one T5-XXL and one CLIP-L model.
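The attached workflow indeed loads "t5-v1_1-xxl-encoder-Q3_K_S.gguf" in both slots of DualCLIPLoaderGGUF. A sketch of the corrected node, as an excerpt in the attached workflow's format; "clip_l.safetensors" is a placeholder for whatever CLIP-L file is in your models/clip folder:

```json
{
  "type": "DualCLIPLoaderGGUF",
  "widgets_values": [
    "t5-v1_1-xxl-encoder-Q3_K_S.gguf",
    "clip_l.safetensors",
    "flux"
  ]
}
```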

ltdrdata avatar Aug 27 '24 09:08 ltdrdata

> You should not use two T5-XXL models in DualCLIPLoader. FLUX requires one T5-XXL and one CLIP-L model.

Well, that's what I know for FLUX too, but I got the setup from a Korean YouTuber who uses the FLUX GGUF variant instead of regular FLUX, which supposedly requires a different CLIP model.

https://youtu.be/DqmkyX-GXWk?si=rZFWJ7nAcEkOVeU-

And this issue occurs when I use the two original CLIP models you mentioned with FLUX GGUF; it began to work as soon as I changed them.

In conclusion, I'm using FLUX GGUF, and the error occurred when I didn't switch the CLIP model.

2002yoons avatar Aug 27 '24 12:08 2002yoons