DCLGAN

About evaluation method

Open 33KU opened this issue 4 years ago • 12 comments

Hello, I'm very interested in your work, and I'm new to this field. How can I compute the FID value?

33KU avatar Dec 08 '21 14:12 33KU

Hi, you need to:

1. Install pytorch-fid: `pip install pytorch-fid`
2. Run `python -m pytorch_fid A B`, where A and B are the paths of the two folders to compare.
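
For example, to compare generated images against the real target-domain images, you might run something like the following; both paths are illustrative placeholders and depend on where your test script writes its outputs:

```
# Install the FID tool once
pip install pytorch-fid

# FID between generated zebras and real test zebras (hypothetical paths)
python -m pytorch_fid ./results/horse2zebra/fake_B ./datasets/horse2zebra/testB
```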

JunlinHan avatar Dec 09 '21 06:12 JunlinHan

Hi, can you provide me with the trained SimDCL models?

33KU avatar Dec 30 '21 13:12 33KU

Hi, unfortunately I don't currently have a copy of the pretrained SimDCL models (the HPC I'm using has a quota limit, so I have to regularly delete old files).

JunlinHan avatar Dec 30 '21 13:12 JunlinHan

May I ask whether you retrained the models for the comparison experiments yourself? That looks like a lot of work.

33KU avatar Jan 08 '22 13:01 33KU

Sure. We only retrained some of them (CUT, FastCUT, CycleGAN, MUNIT); the results of the other methods are copied directly from previous papers.

JunlinHan avatar Jan 08 '22 13:01 JunlinHan

Were the comparison results copied from previous papers (e.g. the horse-to-zebra images) obtained by running the pre-trained models?

33KU avatar Jan 08 '22 13:01 33KU

For the models we retrained, the reported result is the better of our reproduction and the official pre-trained model. (Our reported CycleGAN results are better than those previously published.)

JunlinHan avatar Jan 08 '22 14:01 JunlinHan

Thank you very much for replying to so many of my questions. I trained with the code you provided (on the horse-to-zebra dataset) and got an FID of about 52, but when I ran the pre-trained model you provided I got the same result as in the paper, an FID of about 43. Do you know why the gap is so wide? I believe my experimental setup matches what is described in your paper.

33KU avatar Jan 10 '22 09:01 33KU

No worries at all. I suspect your training settings differ slightly from ours. Did you run the training code with the default settings? The code provided here should use the default settings for horse <-> zebra translation.

JunlinHan avatar Jan 10 '22 12:01 JunlinHan

Sometimes the training environment (both hardware and software) can also degrade the results, but usually not by that much.

JunlinHan avatar Jan 10 '22 12:01 JunlinHan

Yes, I ran the training code with the default settings, and the dataset is horse <-> zebra, so I don't understand why the results are so different. The only thing I changed was the GPU setup: I used three GPUs and changed the batch size to 3. Could that have an effect? Thank you very much for your replies; they have been very helpful.

33KU avatar Jan 10 '22 12:01 33KU

The batch size should be 1 for unsupervised image-to-image translation methods (this has been studied in previous work such as CycleGAN). For almost all computer vision tasks, changing the total batch size or the batch size per GPU will affect the results. Changing the batch size also means you need to scale the learning rate (I'm not sure whether the Linear Scaling Rule applies here), so if you want a batch size of 3, you might need to multiply the learning rate by 3. Even after such scaling, the results can still differ.
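
As a rough sketch, the two setups might look like this. The flag names (`--dataroot`, `--name`, `--gpu_ids`, `--batch_size`, `--lr`) and the base learning rate of 0.0002 are assumptions based on CycleGAN/CUT-style codebases; check `options/train_options.py` in your copy for the exact names and defaults:

```
# Recommended: single GPU, batch size 1 (the setting the paper's numbers use)
python train.py --dataroot ./datasets/horse2zebra --name h2z_dcl

# Hypothetical Linear Scaling Rule variant: 3 GPUs, batch size 3, 3x lr
# (0.0002 x 3 = 0.0006). As noted above, results may still differ from
# the batch-size-1 run even with this scaling.
python train.py --dataroot ./datasets/horse2zebra --name h2z_dcl_bs3 \
  --gpu_ids 0,1,2 --batch_size 3 --lr 0.0006
```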

Cheers, it seems we've found the reason. Thank you for reporting the results for batch size 3.

JunlinHan avatar Jan 10 '22 12:01 JunlinHan