garyhxfang
@haofanwang Master, you could also have a check if you are interested in this topic haha.
@sayakpaul Wow, thanks a lot, let me have a try. Can it also support img2img inference? I have checked the docs and it seems not to have the...
Yes, I have two use cases to implement: txt2img & img2img.
For img2img inference I'm currently using **StableDiffusionImg2ImgPipeline**.
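For context, my call looks roughly like this (a minimal sketch; the model id, image size and parameters are just placeholders, not my actual setup):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Placeholder checkpoint; swap in whichever model you actually use.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="a fantasy landscape, trending on artstation",
    image=init_image,
    strength=0.75,           # how strongly the init image is altered
    guidance_scale=7.5,
    num_inference_steps=50,
).images[0]
result.save("output.png")
```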
Noted, thanks a lot! Let me have a try.
@Skquark I have tried the Long Prompt Weighting (LPW) community pipeline. The results look good, but it's too unstable to be used in a live environment; it often gets stuck when I call...
> Well, SEGA natively supports the first one i.e., text2image. For image2image, I believe you could: > > * first obtain an inverted noise using the newly introduced [DDIMInverseScheduler](https://huggingface.co/docs/diffusers/main/en/api/schedulers/ddim_inverse). An...
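For anyone else trying this, here is a rough sketch of that inversion idea, assuming `SemanticStableDiffusionPipeline` for SEGA and a reasonably recent diffusers version that ships `DDIMInverseScheduler`; the model id, prompts, step count and the `invert` helper itself are all just illustrative, not something from the library:

```python
import numpy as np
import torch
from PIL import Image
from diffusers import SemanticStableDiffusionPipeline, DDIMInverseScheduler

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Build the inverse scheduler from the pipeline's own scheduler config.
inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

@torch.no_grad()
def invert(image: Image.Image, prompt: str, num_steps: int = 50) -> torch.Tensor:
    # 1. Encode the input image into VAE latents.
    img = image.convert("RGB").resize((512, 512))
    img_t = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0
    img_t = img_t.permute(2, 0, 1).unsqueeze(0).to("cuda", dtype=torch.float16)
    latents = pipe.vae.encode(img_t).latent_dist.sample() * pipe.vae.config.scaling_factor

    # 2. Encode the source prompt with the pipeline's text encoder.
    ids = pipe.tokenizer(
        prompt, padding="max_length", truncation=True,
        max_length=pipe.tokenizer.model_max_length, return_tensors="pt",
    ).input_ids.to("cuda")
    text_emb = pipe.text_encoder(ids)[0]

    # 3. Walk the DDIM trajectory in reverse to obtain the inverted noise.
    inverse_scheduler.set_timesteps(num_steps, device="cuda")
    for t in inverse_scheduler.timesteps:
        noise_pred = pipe.unet(latents, t, encoder_hidden_states=text_emb).sample
        latents = inverse_scheduler.step(noise_pred, t, latents).prev_sample
    return latents

inverted = invert(Image.open("input.png"), "a photo of a dog")
# Feed the inverted noise back in as the starting latents for SEGA editing.
edited = pipe(
    prompt="a photo of a dog",
    latents=inverted,
    editing_prompt=["smiling"],
    num_inference_steps=50,
).images[0]
```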
> @garyhxfang It's been stable for me, I haven't noticed it getting stuck and I've been using it as my primary for months. I have made minor mods to it,...
> The way I did mine is to copy it as pipeline.py in my HuggingFace models, then while calling pretrained I set custom_pipeline="AlanB/lpw_stable_diffusion_mod" and it'll come from there instead. That...
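For anyone finding this later, loading a community pipeline straight from a Hub repo looks roughly like this (a sketch; the base model id is a placeholder and the exact keyword arguments can vary a bit between diffusers versions):

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model, but pull the pipeline code from the Hub repo
# mentioned above instead of the built-in community pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",              # placeholder base model
    custom_pipeline="AlanB/lpw_stable_diffusion_mod",
    torch_dtype=torch.float16,
).to("cuda")

# LPW-style call: long prompts with weights such as "(masterpiece:1.2)" are parsed.
image = pipe(
    prompt="(masterpiece:1.2), best quality, a very long prompt",
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=30,
).images[0]
```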
@haofanwang Hi, master. If I would like to bake 2 LoRAs into a model, can I just run the script twice? First time, bake LoRA X into model A...
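In case it helps, the same two-pass idea can be sketched with diffusers' own LoRA utilities rather than the script above (assuming a diffusers version that has `load_lora_weights`, `fuse_lora` and `unload_lora_weights`; the paths and the `bake_lora` helper are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

def bake_lora(base_id: str, lora_path: str, out_dir: str, scale: float = 1.0):
    """Merge a single LoRA into a base model and save the merged weights."""
    pipe = StableDiffusionPipeline.from_pretrained(base_id, torch_dtype=torch.float16)
    pipe.load_lora_weights(lora_path)
    pipe.fuse_lora(lora_scale=scale)   # fold the LoRA into the base weights
    pipe.unload_lora_weights()         # drop the adapter; the fused weights remain
    pipe.save_pretrained(out_dir)

# Pass 1: model A + LoRA X -> model B
bake_lora("path/to/model_A", "path/to/lora_X", "model_B")
# Pass 2: model B + LoRA Y -> model C (both LoRAs now baked in)
bake_lora("model_B", "path/to/lora_Y", "model_C")
```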