Venkat Krishnan
I'm having the same issue - any updates?
Hi, I was wondering if you have an update on whether you were able to get better results?
Would using stable_txt2img.py instead of txt2img.py and SD embeddings give better results?
Yes, I believe so.
I also tried using seed_everything from PyTorch Lightning, but that still isn't producing reproducible models. Could there be variability between machines?
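For reference, this is roughly what I'm calling (a minimal sketch; note that seed_everything comes from PyTorch Lightning, and it only seeds the RNGs rather than forcing deterministic kernels):

```python
import torch
from pytorch_lightning import seed_everything

# Seeds Python's `random`, NumPy, and torch (CPU and all CUDA devices) in one call;
# workers=True also derives seeds for DataLoader worker processes.
seed_everything(42, workers=True)

# Seeding alone does not make CUDA kernels deterministic: some ops
# (e.g. certain cuDNN convolutions) can still vary from run to run.
print(torch.randn(4, 4).sum().item())
```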
Thank you for the fast response; can you elaborate on this? I am training both the text encoder and the UNet together and specifying a single --seed argument to train_dreambooth.py...
Looking at the Colab code, the seed value entered in the form is passed into the --seed parameter of train_dreambooth.py as $Seed. The only difference I see between the way I'm...
Could this be related to https://pytorch.org/docs/stable/notes/randomness.html? I'm running these across different A10Gs on AWS and seeing this issue. If I run repeatedly on a single A10G, it seems fine.
I think it is because PyTorch can be non-deterministic: https://pytorch.org/docs/stable/notes/randomness.html
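In case it helps, that page boils down to a few switches beyond plain seeding. A minimal sketch (the env var is required by cuBLAS on CUDA 10.2+, and use_deterministic_algorithms(True) will raise an error if an op has no deterministic implementation):

```python
import os
import random

import numpy as np
import torch

# Must be set before any cuBLAS work for deterministic matrix multiplies.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

def make_deterministic(seed: int) -> None:
    # Seed every RNG the training script might touch.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds all CUDA devices

    # Fail loudly on ops that only have nondeterministic CUDA kernels.
    torch.use_deterministic_algorithms(True)
    # Stop cuDNN from autotuning kernel choice, which can vary between runs.
    torch.backends.cudnn.benchmark = False

make_deterministic(42)
```

Even then, the docs only promise reproducibility on the same platform and PyTorch release, not across different machines, which would explain runs agreeing on a single A10G but not across instances.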
For some reason, when I run it on my own machine with the same flags, it takes 24 GB. Do you know of any settings or missing installations that could cause...