Junlin Han

78 comments of Junlin Han

Hello Liuzhen, Thanks for your interest in our work! 1: 50ep/100ep If possible, would you mind sharing more details (training settings)? Especially the learning rate, batch_size, and total training epoch...

> Hello again, 1: Oh, most settings in this repo follow CycleGAN, so it's OK to follow CycleGAN. Batch_size = 1 would be better. Regarding FID: usually, the highest FID...

Hello, Yes, but you may need to add some paired supervision (L1/L2/VGG/SSIM losses).
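A minimal sketch of what "paired supervision" could look like, assuming `fake_B` is the generator output and `real_B` its paired ground-truth image (the function name, weights, and variable names here are illustrative, not the repo's actual code):

```python
import torch
import torch.nn as nn

# Hypothetical paired-supervision terms added on top of the adversarial loss.
l1_criterion = nn.L1Loss()
mse_criterion = nn.MSELoss()  # the "L2" loss

def paired_loss(fake_B, real_B, lambda_l1=10.0, lambda_l2=0.0):
    """Weighted sum of paired reconstruction losses (L1 + optional L2).

    VGG/SSIM terms would be added the same way, each with its own weight.
    """
    loss = lambda_l1 * l1_criterion(fake_B, real_B)
    loss = loss + lambda_l2 * mse_criterion(fake_B, real_B)
    return loss

# Identical images give exactly zero paired loss.
x = torch.rand(1, 3, 64, 64)
print(paired_loss(x, x).item())
```

The weights (e.g. `lambda_l1=10.0`, as in pix2pix-style setups) would need tuning per dataset.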

Hello Paul, Oh, I didn't check the case of batch_size > 1, since batch_size > 1 will degrade performance (not only for DCLGAN/SimDCL, but for a fairly large percentage of unsupervised I2I...

> Hi Junlin, oh, good to know! On CUT I observed fewer mode collapses with batch_size > 1, so I thought that might also benefit DCLGAN and SimDCL. :D The...

Hello Zhenyu, Many thanks for your kind words! For best results, you may record the FID score every epoch. But a better FID score does not always suggest a better translation. Hence...
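One hedged way to record FID per checkpoint, using the `pytorch-fid` package's CLI. The directory layout (`results/.../fake_B`) and dataset paths below are assumptions for illustration, not the repo's guaranteed output convention:

```shell
# Evaluate saved checkpoints every 5 epochs and log FID against the target set.
for epoch in $(seq 5 5 400); do
    python test.py --dataroot ./datasets/horse2zebra --name horse2zebra_DCL \
        --epoch "$epoch"
    # Assumed result path; adjust to wherever test.py writes translated images.
    python -m pytorch_fid "results/horse2zebra_DCL/test_${epoch}/fake_B" \
        "datasets/horse2zebra/testB" >> fid_log.txt
done
```

As the comment above notes, the lowest FID does not always pick the most visually convincing epoch, so a manual check of a few candidates is still worthwhile.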

> Thanks for your reply! And I tried to train the code on the winter2summer dataset and found that DCL's FID > SimDCL's FID. This doesn't seem consistent with the fact...

Hello, Yes, DCLGAN suffers from mode collapse on such datasets (see Section 5.3 of the paper for a discussion). CycleGAN/SimDCL might perform better.

@hd201708010401 Hello, Oh, I do not have a copy of this pre-trained model. Did you use the default training settings? If possible, could you share your training log? Be careful...

Hi Victor, Thanks for your nice words! To resume training, you might add --continue_train --epoch_count N, where N is the current epoch of the pretrained model. continue_train will load your...
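A hedged example of the resume command, assuming a run named `horse2zebra_DCL` that previously stopped after epoch 100 (the experiment name and dataset path are placeholders):

```shell
# --continue_train loads the latest saved networks for this experiment name;
# --epoch_count 100 makes epoch numbering (and the LR schedule) resume from 100.
python train.py --dataroot ./datasets/horse2zebra --name horse2zebra_DCL \
    --continue_train --epoch_count 100
```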