missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['conditioner.embedders.1.model.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
To see the GUI go to: https://127.0.0.1:8189
FETCH DATA from: E:\ComfyUI_Pro\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
got prompt
model_type EPS
adm 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['conditioner.embedders.1.model.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Requested to load SDXLClipModel
Loading 1 new model
Requested to load SDXL
Loading 1 new model
60%|█████████████████████████████████████████████████▏ | 12/20 [00:54<00:36, 4.53s/it]
I'm not sure whether this makes sense to you, but I get this error message every time I run after changing the model. What should I do?
> I'm not sure whether this makes sense to you, but I get this error message every time I run after changing the model. What should I do?

Same here, exactly the same situation. And sometimes after starting up, I click Queue Prompt on the right and it errors out, then it just keeps showing "Reconnecting".
I also get the same error, but the image is still generated.
got prompt
model_type EPS
adm 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Requested to load SDXLClipModel
Loading 1 new model
Requested to load SDXL
Loading 1 new model
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00, 3.39it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 15.50 seconds
got prompt
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
Leftover VAE keys ['model_ema.decay', 'model_ema.num_updates']
model_type EPS
adm 0
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['model_ema.decay', 'model_ema.num_updates', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
I have the same problem
So, no one? Hmmm.
missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'} is debug info. I started seeing it with the Jan 22 update of ComfyUI; AFAIK it can be safely ignored. Only putting this here because it's #1 on Google search results ;)
Using pytorch attention in VAE
got prompt
model_type EPS
adm 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Yes, I'm getting the same thing. Also, ComfyUI's speed varies wildly: sometimes it's at 8 it/s and sometimes it drops as low as 5 seconds per iteration on my 3060. Does that have something to do with this?
@sanjuhs that message is not an error, it's a warning, and it doesn't impact Comfy's speed. I'm not sure why this issue isn't closed yet; it's a non-issue.
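For anyone curious where these messages come from: this is a minimal PyTorch sketch (not ComfyUI's actual loader) of non-strict state-dict loading. When a checkpoint is loaded with `strict=False`, PyTorch reports keys the model expects but the file lacks ("missing") and keys the file contains but the model doesn't use ("left over"). Harmless buffers like `position_ids` fall into the second category, which is why generation still works.

```python
import torch
import torch.nn as nn

# A toy model that expects exactly two parameters: 'weight' and 'bias'.
model = nn.Linear(4, 2)

# A hypothetical checkpoint: 'bias' is absent (will be reported as missing),
# and 'position_ids' is an extra buffer the model ignores (left over).
checkpoint = {
    "weight": torch.zeros(2, 4),
    "position_ids": torch.arange(4),
}

# strict=False loads what matches and returns the mismatches
# instead of raising an error.
result = model.load_state_dict(checkpoint, strict=False)
print("missing", set(result.missing_keys))
print("left over keys:", result.unexpected_keys)
```

The mismatched keys are only reported, not fatal, which matches the behavior people describe above: the warning prints, but the run completes and the image is produced.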