Any plans on releasing layout-to-image inference code and weights?
Hi,
First of all, great work! And thanks for releasing a codebase that is intuitive to follow and easy to set up. I was browsing through the paper and saw interesting results on layout-to-image synthesis (Fig 16 / Table 9). Do you plan to release the code and weights for it?
Thanks!
I would also like to reproduce the layout-to-image model. I left a similar comment on the original latent-diffusion repository (https://github.com/CompVis/latent-diffusion/issues/120#issuecomment-1228162537) and am eagerly awaiting the reproduction code. Thank you in advance for your consideration.
Also waiting for the release of the pretrained layout-to-image model trained from scratch on COCO, and the dataset code. Thanks!
Has anybody trained a model for the layout2image task yet? I'm not quite sure what my bounding-box input is supposed to look like, or what a proper configuration would be. Thank you for any input. I know the layout2img-openimages256 config exists, but I'm not sure what input format it expects.
Hi, I am also waiting for the layout2image model. Are you still planning to release it?
Regarding the bounding-box input question above, this script may help: https://github.com/CreamyLong/stable-diffusion/blob/master/scripts/layout2img.py
Following up on my earlier comment: I found a pretrained layout2img model trained on OpenImages at 256×256:
wget -O models/ldm/layout2img-openimages256/model.zip https://ommer-lab.com/files/latent-diffusion/layout2img_model.zip
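In case it helps others: after unzipping, I load it with the repo's usual pattern (the same one `scripts/txt2img.py` uses). The config and checkpoint paths below are assumptions on my part; point them at wherever your layout2img config and unpacked model actually live. A rough sketch:

```python
import torch
from omegaconf import OmegaConf
from ldm.util import instantiate_from_config

# Paths are assumptions: adjust to wherever model.zip was unpacked
# and wherever your layout2img-openimages256 config lives.
config = OmegaConf.load("configs/latent-diffusion/layout2img-openimages256.yaml")
ckpt = "models/ldm/layout2img-openimages256/model.ckpt"

model = instantiate_from_config(config.model)
sd = torch.load(ckpt, map_location="cpu")["state_dict"]
# strict=False swallows mismatches, so print them -- silently missing
# keys are a classic cause of pure-noise samples.
missing, unexpected = model.load_state_dict(sd, strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
model = model.cuda().eval()
```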
Is the idea in that script to generate using batches from a training dataset as input? What if you wanted to generate images with bounding boxes / classes that are not in the dataset class you set in the script?
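Not the author, but from reading the paper and the taming-transformers conditional builders (which latent-diffusion depends on), you shouldn't need dataset batches: the conditioning is just a token sequence of class ids and discretized box corners, so you can build it by hand for arbitrary boxes. Here's a hand-rolled sketch; the grid size, class count, token layout, and padding token are all assumptions, so check them against the conditional builder your config actually references:

```python
import torch

# Assumed token layout: per box, [class_id, top-left corner token,
# bottom-right corner token]; corners discretized onto a GRID x GRID lattice.
NUM_CLASSES = 600   # OpenImages has ~600 boxable classes (assumption)
GRID = 64           # coordinate discretization resolution (assumption)

def corner_token(x, y):
    """Map a normalized (x, y) corner in [0, 1] to a single grid token,
    offset past the class-token range."""
    xi = min(int(x * GRID), GRID - 1)
    yi = min(int(y * GRID), GRID - 1)
    return NUM_CLASSES + yi * GRID + xi

def boxes_to_tokens(boxes, max_objects=30):
    """boxes: list of (class_id, x0, y0, x1, y1), coordinates normalized
    to [0, 1]. Returns a (3 * max_objects,) LongTensor padded with an
    assumed 'no object' token."""
    none_token = NUM_CLASSES + GRID * GRID
    tokens = []
    for cls, x0, y0, x1, y1 in boxes[:max_objects]:
        tokens += [cls, corner_token(x0, y0), corner_token(x1, y1)]
    tokens += [none_token] * (3 * max_objects - len(tokens))
    return torch.tensor(tokens, dtype=torch.long)

# e.g. one box covering the left half of the image (class id 14 is made up)
cond_tokens = boxes_to_tokens([(14, 0.0, 0.1, 0.5, 0.9)]).unsqueeze(0)
```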
@CreamyLong have you successfully used this model? I used it with the layout2img.py script you provided, but the result is just white noise.
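Not sure about your exact setup, but two things that commonly produce white noise: the checkpoint not actually loading (`load_state_dict(..., strict=False)` hides missing keys, so print them as in the loading snippet above), and saving the raw samples without decoding them out of latent space. A rough sampling sketch following the usual LDM pattern, where `cond_tokens` is a layout-token batch like the one sketched above (its exact format is an assumption):

```python
import torch
from ldm.models.diffusion.ddim import DDIMSampler

sampler = DDIMSampler(model)  # model loaded as in the snippet above

with torch.no_grad():
    # cond_tokens: (batch, seq_len) LongTensor of layout tokens
    c = model.get_learned_conditioning(cond_tokens.cuda())
    shape = [model.channels, model.image_size, model.image_size]  # latent-space shape
    samples, _ = sampler.sample(S=50, conditioning=c,
                                batch_size=cond_tokens.shape[0],
                                shape=shape, verbose=False, eta=1.0)
    # decoding is the step that's easy to miss -- `samples` are latents,
    # and saving them directly looks like noise
    x = model.decode_first_stage(samples)
    x = torch.clamp((x + 1.0) / 2.0, min=0.0, max=1.0)  # map to [0, 1] for saving
```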