zhaobingbingbing

Results: 12 comments of zhaobingbingbing

I know how to fix it now. In main.py, line 89, keep FLAGS.crop_size = None if you want to run test.sh, and change it to FLAGS.crop_size = 24 if you want to run inference.sh...
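
For reference, a sketch of the two settings (the surrounding code in main.py is assumed; the comments are my reading of what each value does):

```python
# main.py, line 89 — pick one depending on which script you are about to run
FLAGS.crop_size = None   # test.sh (assumption: None disables cropping)
# FLAGS.crop_size = 24   # inference.sh: uncomment this instead
```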

git clone https://github.com/CompVis/latent-diffusion and install the dependencies. You also need to set the right absolute or relative path in sample.py.
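
A minimal sketch of what setting that path might look like, assuming the path in question is the location of the cloned repo that sample.py imports from (the exact placement in sample.py is hypothetical):

```python
import sys

# point Python at your latent-diffusion checkout before importing from it;
# use an absolute path, or a relative one from wherever sample.py is run
sys.path.insert(0, "/path/to/latent-diffusion")
```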

Do you know how to train Insloc c4 now?

> @MatthewChoiAL you can also use Tero Karras' elucidated variant, in which case the number of steps can be lowered down to 32 with good fidelity

Can I train the...
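
For context, a minimal sketch of that elucidated variant, following the ElucidatedImagen example in the lucidrains/imagen-pytorch README (hyperparameters abbreviated; num_sample_steps is the point here):

```python
from imagen_pytorch import Unet, ElucidatedImagen

unet1 = Unet(dim=32, cond_dim=512, dim_mults=(1, 2, 4, 8), num_resnet_blocks=3,
             layer_attns=(False, True, True, True))
unet2 = Unet(dim=32, cond_dim=512, dim_mults=(1, 2, 4, 8),
             num_resnet_blocks=(2, 4, 8, 8),
             layer_attns=(False, False, False, True))

# Tero Karras' elucidated sampler: the second unet can sample in as few as 32 steps
imagen = ElucidatedImagen(
    unets=(unet1, unet2),
    image_sizes=(64, 256),
    num_sample_steps=(64, 32),
    cond_drop_prob=0.1,
)
```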

Hi, I hope to train the 256×256 unet separately. For training, it's like:

`python imagen.py --train --source --tags_source --imagen yourmodel.pth --train_unet 2 --no_elu`

For sampling, it's like:

`python imagen.py --imagen yourmodel.pth --sample_unet...`

I did not train unet1, but training unet2 separately should be possible. I noticed some tips in lucidrains/imagen-pytorch:

![image](https://user-images.githubusercontent.com/32119356/190290202-0c1602a5-61a8-4092-b744-1391d637fed9.png)
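
Concretely, a minimal sketch of training one unet at a time with the lucidrains/imagen-pytorch API (the batch and embeddings below are dummy stand-ins; passing unet_number means only that unet's weights are updated):

```python
import torch
from imagen_pytorch import Unet, Imagen, ImagenTrainer

unet1 = Unet(dim=32, cond_dim=512, dim_mults=(1, 2, 4, 8), num_resnet_blocks=3,
             layer_attns=(False, True, True, True))
unet2 = Unet(dim=32, cond_dim=512, dim_mults=(1, 2, 4, 8),
             num_resnet_blocks=(2, 4, 8, 8),
             layer_attns=(False, False, False, True))

imagen = Imagen(unets=(unet1, unet2), image_sizes=(64, 256), timesteps=1000)
trainer = ImagenTrainer(imagen)

images = torch.randn(4, 3, 256, 256)      # dummy batch standing in for real data
text_embeds = torch.randn(4, 256, 768)    # dummy T5 text embeddings

loss = trainer(images, text_embeds=text_embeds, unet_number=2)  # only unet2 trains
trainer.update(unet_number=2)
```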

I found the transform in most DMs is like:

```python
self.transform = T.Compose([          # T is torchvision.transforms
    T.Resize(image_size),             # shorter side -> image_size
    T.RandomHorizontalFlip(),
    T.CenterCrop(image_size),
])
```

Is padding necessary?
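
One reason padding is usually skipped there: T.Resize with an int scales the shorter side to image_size while keeping the aspect ratio, so T.CenterCrop only has to trim the longer side and never has to fill. A small check (dummy image; sizes are illustrative):

```python
import torchvision.transforms as T
from PIL import Image

image_size = 256
tf = T.Compose([
    T.Resize(image_size),       # shorter side -> 256, aspect ratio preserved
    T.CenterCrop(image_size),   # trim the longer side; nothing to pad
])

img = Image.new("RGB", (640, 480))   # dummy non-square image
print(tf(img).size)                  # (256, 256)
```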

When I reduced the dataset from 7M to 100k images, training was fast, about 0.5 h per epoch; with the full 7M, however, it takes about 200 h.

The default way. The problem seems to be in the data processing: when the dataset is too large, the time for each batch goes into fetching the data rather than training...
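
A minimal sketch (not the author's code) for checking whether data loading dominates each step; the dataset below is a dummy stand-in:

```python
import time
import torch
from torch.utils.data import DataLoader, TensorDataset

# dummy stand-in; replace with the real 7M-image Dataset
dataset = TensorDataset(torch.randn(1_000, 3, 64, 64))

# more workers + pinned memory often help when fetching, not training, is the bottleneck
loader = DataLoader(dataset, batch_size=64, num_workers=8,
                    pin_memory=True, persistent_workers=True)

t0 = time.time()
for step, (batch,) in enumerate(loader):
    print(f"step {step}: waited {time.time() - t0:.3f}s for data")
    # the forward/backward pass would go here
    t0 = time.time()
    if step == 20:
        break
```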