Need Info/Advice On Using InPaint ControlNet
The group that I am working with has most of the new ControlNet pipeline working nicely in a Swift GUI app test build. A big thank you to everyone involved in making this possible. We are going in circles with the InPaint ControlNet, however. We don't understand how to get InPaint to recognize the mask (the area to be inpainted), or perhaps that functionality is beyond the present capabilities of the CN pipeline (i.e., there is no --mask-image or --input-mask type argument). Any info or advice would be greatly appreciated. TIA.
Me too! Would love any information on this.
Same! Would also love any kind of insight into what needs to happen to get this to work.
One of the developers in the Mochi Diffusion app group has gotten the InPaint ControlNet to work with an (alpha) masked starting image. It is just half a dozen lines of code, but the mod is made directly to a .swift file in ml-stable-diffusion. We are still on v0.4.0, but it looks like the edit will carry right through to v1.0.0. He needs some more testing time before I can link to his code, or before he does a PR here, but it should be ready pretty soon one way or the other. He is the same developer who did the fix for the "img2img starting image shape" issue here a few months ago.
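For anyone curious what a mod like that typically amounts to: the usual inpainting trick is that at each denoising step, the latents are blended so the masked region comes from the freshly denoised latents and the unmasked region comes from the (re-noised) starting image's latents. I have not seen his actual code, so the function below is only a hypothetical sketch of that per-element blend (names like `blendLatents` are made up, and it assumes the alpha mask has already been downsampled to latent resolution with values in 0...1):

```swift
import Foundation

// Hypothetical sketch: blend denoised latents with the original image's
// noised latents using an alpha mask. mask == 1 means "inpaint here"
// (take the denoised value); mask == 0 means "keep the original".
func blendLatents(denoised: [Float], originalNoised: [Float], mask: [Float]) -> [Float] {
    precondition(denoised.count == originalNoised.count && denoised.count == mask.count,
                 "latent buffers and mask must have the same element count")
    var out = [Float](repeating: 0, count: denoised.count)
    for i in 0..<denoised.count {
        // Per-element linear blend: m * new + (1 - m) * original
        out[i] = mask[i] * denoised[i] + (1 - mask[i]) * originalNoised[i]
    }
    return out
}
```

In a real pipeline this blend would run inside the scheduler loop, right after each denoising step, so the unmasked pixels stay locked to the starting image throughout sampling.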
https://github.com/apple/ml-stable-diffusion/pull/205
Closed by https://github.com/apple/ml-stable-diffusion/pull/205