
How to improve text-conditioned generation?

Open Nikita-Sherstnev opened this issue 1 year ago • 1 comment

I see that the model is not very good at text-conditioned generation. How can this be improved? Should I train the CLIP model itself, or just train the LDM for longer?

Nikita-Sherstnev avatar Jun 30 '24 08:06 Nikita-Sherstnev

When I trained this on CelebA captions, I also found that the trained text-conditioned diffusion model performed very well for captions that are very common (like hair colour), but for words that weren't as frequent, the model wasn't honouring them at all. I suspect training the LDM for longer (or getting more images for the infrequent captions) should indeed improve the generation results for them. You can definitely try training CLIP as well, but unless you have very rare words in your captions (or words very different from what CLIP was trained on), training the LDM for longer should be more fruitful than training the CLIP model.
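
A cheap alternative to collecting more images is to oversample the existing images whose captions contain rare tokens, so the LDM sees them more often per epoch. Below is a minimal sketch of that idea (the captions and the `sample_weight` helper are hypothetical, not part of this repo); in a real pipeline you would pass the resulting weights to `torch.utils.data.WeightedRandomSampler`.

```python
from collections import Counter

# Hypothetical CelebA-style captions; the goal is to upweight samples whose
# captions contain infrequent tokens (e.g. "mustache", "glasses").
captions = [
    "black hair", "black hair", "brown hair",
    "black hair mustache", "brown hair glasses",
]

# Count how often each token appears across the whole caption set.
token_counts = Counter(tok for cap in captions for tok in cap.split())

def sample_weight(caption):
    # Weight a sample by the inverse frequency of its rarest token,
    # so captions with rare attributes are drawn more often.
    return 1.0 / min(token_counts[tok] for tok in caption.split())

weights = [sample_weight(c) for c in captions]
# Feed `weights` to torch.utils.data.WeightedRandomSampler(weights,
# num_samples=len(weights)) when building the training DataLoader.
```

This keeps the frozen CLIP text encoder untouched and only changes how often the LDM sees rare-caption images, which matches the suggestion above of getting more exposure to infrequent captions before resorting to fine-tuning CLIP.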

explainingai-code avatar Jul 01 '24 16:07 explainingai-code