I'm just getting messy images.
I don't know what I'm doing wrong. I'm using ProtoGen fp16 with the fp16 yaml. Then I tried img2img, but all I got is this. The prompt was "Pretty girl" or something similar. The most coherent thing I got was a very deformed version of the input picture at 0.1 denoising, which doesn't make sense.

Can I see your workflow?
I copied it verbatim from the img2img example, with only the yaml and Euler A changes. I checked a few times, but there is nothing different. https://comfyanonymous.github.io/ComfyUI_examples/img2img/
I got the CUDA error and then fixed it as explained. Not much else. That's my first try. This is at denoising 0.1:

I got your workflow from the images you posted. The issue is that your ProtoGen_X3.4 does not contain any CLIP model weights.
Use another checkpoint, or download the SD1.5 CLIP model weights from here: https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/text_encoder/model.safetensors
Put them in the models/clip folder, load them with the CLIPLoader node, and connect its output to your CLIPTextEncode nodes. Then it should work.
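For reference, here is a minimal sketch of what that rewiring looks like in ComfyUI's API ("prompt") JSON format, written as a Python dict. The node ids, prompt text, and the filename model.safetensors are placeholders, and depending on your ComfyUI version the CLIPLoader node may also expect a type field:

```python
# Sketch of the relevant workflow fragment in ComfyUI's API "prompt" format.
# Assumes the downloaded CLIP weights were saved as models/clip/model.safetensors.
prompt_fragment = {
    # The checkpoint still provides the UNet (output slot 0) and VAE (slot 2)...
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "ProtoGen_X3.4.safetensors"},
    },
    # ...but the CLIP now comes from the standalone loader.
    "2": {
        "class_type": "CLIPLoader",
        "inputs": {"clip_name": "model.safetensors"},
    },
    # Both text encoders take their CLIP input from node "2" (output slot 0)
    # instead of the checkpoint loader's CLIP output.
    "3": {
        "class_type": "CLIPTextEncode",
        "inputs": {"text": "pretty girl", "clip": ["2", 0]},
    },
    "4": {
        "class_type": "CLIPTextEncode",
        "inputs": {"text": "", "clip": ["2", 0]},
    },
}
```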
Thank you, I'll try it later. My interest is in experimenting with multiple controlnets that each fulfill a specific function, as well as the possibility of using multiple img2img passes. I don't know if it's possible, but your GUI seems flexible enough to try. When there is native batch support I will surely experiment with it.
Yes, you can use multiple controlnets and run as many img2img passes as you want with different settings/models; see the sketch below.
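A minimal sketch of the stacking idea: since ControlNetApply takes conditioning in and puts conditioning out, chaining two controlnets is just wiring one apply node into the next. Node ids, model filenames, and strengths here are placeholders; ["3", 0] is the positive CLIPTextEncode from the earlier sketch, and ["20", 0] / ["21", 0] stand in for preprocessed hint images (e.g. from LoadImage nodes, not shown):

```python
# Sketch: stacking two controlnets by chaining ControlNetApply nodes.
two_controlnets = {
    "10": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_canny.safetensors"}},
    "11": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_depth.safetensors"}},
    # First apply: takes the text conditioning from the CLIPTextEncode node "3".
    "12": {"class_type": "ControlNetApply",
           "inputs": {"conditioning": ["3", 0], "control_net": ["10", 0],
                      "image": ["20", 0], "strength": 0.8}},
    # Second apply: chained onto the output of the first one.
    "13": {"class_type": "ControlNetApply",
           "inputs": {"conditioning": ["12", 0], "control_net": ["11", 0],
                      "image": ["21", 0], "strength": 0.5}},
    # The KSampler's positive input would then point at ["13", 0].
}
```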
It got better with your advice. Regarding multiple simultaneous img2img, I haven't figured out the mechanics. My interest is to achieve the same thing as controlnet + img2img, where one influences the other during the same inference. Maybe it would be useful to know how to achieve that first, since it's a process that actually works in the GUI.
Take the controlnet example here: https://comfyanonymous.github.io/ComfyUI_examples/controlnet/
And instead of feeding the sampler an EmptyLatentImage, feed it an image like in this example: https://comfyanonymous.github.io/ComfyUI_examples/img2img/
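Roughly, the combined graph looks like this. This is a sketch reusing the placeholder node ids from the fragments above ("1" is the checkpoint loader, "4" the negative prompt, "12" the controlnet-applied conditioning); the sampler settings are arbitrary examples:

```python
# Sketch: controlnet + img2img combined. Instead of an EmptyLatentImage,
# the sampler's latent_image comes from an encoded input photo.
img2img_part = {
    "30": {"class_type": "LoadImage",
           "inputs": {"image": "input.png"}},
    "31": {"class_type": "VAEEncode",
           # pixels from LoadImage, VAE from the checkpoint loader (slot 2)
           "inputs": {"pixels": ["30", 0], "vae": ["1", 2]}},
    "32": {"class_type": "KSampler",
           "inputs": {"model": ["1", 0],
                      "positive": ["12", 0],      # controlnet-applied conditioning
                      "negative": ["4", 0],
                      "latent_image": ["31", 0],  # img2img instead of empty latent
                      "seed": 0, "steps": 20, "cfg": 7.0,
                      "sampler_name": "euler_ancestral", "scheduler": "normal",
                      "denoise": 0.6}},
}
```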
Is this what you want to do?
I think that's the mechanics; I'll confirm it later. For now I was trying a way of doing an "almost" double img2img, and got an image of what I wanted while respecting the light and color composition of the other image. I'll keep trying.