About evaluation method
Hello, I'm very interested in your work. I'm new to this field. How can I get the FID value?
Hi, you need to:
1. Install pytorch-fid: pip install pytorch-fid
2. Run: python -m pytorch_fid A B, where A and B are the paths of the two image folders.
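If you prefer to compute it from a script, here is a minimal Python sketch using pytorch-fid's programmatic interface (assuming a recent pytorch-fid release; the two folder paths below are placeholders for your generated and real images, not paths from this repo):

import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

# Placeholder folders: generated (fake) images and real target-domain images.
fake_dir = "results/fakeB"
real_dir = "datasets/horse2zebra/testB"

device = "cuda" if torch.cuda.is_available() else "cpu"
fid = calculate_fid_given_paths(
    [fake_dir, real_dir],
    batch_size=50,   # images per forward pass through InceptionV3
    device=device,
    dims=2048,       # default InceptionV3 pool3 feature dimension
)
print(f"FID: {fid:.2f}")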
Hi, can you provide me with the trained SimDCL models?
Hi, unfortunately I do not currently have a copy of the pretrained SimDCL models (the HPC I'm using has a quota limit, so I have to regularly delete some old files).
May I ask whether the comparison experiments were retrained by you? That looks like a lot of work.
Sure. We only retrained some of them (CUT, FastCUT, CycleGAN, MUNIT); the results of the other methods are copied directly from previous papers.
Were the comparison results copied from previous papers (e.g. the horse-to-zebra images) obtained by running the pre-trained models?
For the models we retrained, the reported result is the better of our reproduction and the official pre-trained model. (Our reported CycleGAN results are better than those reported before.)
Thank you very much for replying to so many of my questions. I trained the code you provided (on the horse-to-zebra dataset) and got an FID of about 52. But when I ran the pre-trained model you provided, I got the same result as in the paper, an FID of about 43. Do you know why the gap is so wide? I think my experimental setup is the same as described in your paper.
No worries at all.
I suppose the training setting could be slightly different.
Did you run the training code with the default settings? The code provided here should use the default settings for H <-> Z translation.
Sometimes the training environment (both hardware and software) can also degrade the results, but not by that much.
Yes, I ran the training code with the default settings, and the dataset is H <-> Z. I don't understand why the results are so different. The only thing I changed was the GPUs: I used three GPUs and changed the batch size to 3. Could this have an effect? Thank you very much for your replies during this time, they have been very helpful to me.
The batch size should be 1 for unsupervised image-to-image translation methods (this has been studied in previous work such as CycleGAN). For almost all computer vision tasks, changing the total batch size or the batch size per GPU will affect the results. Changing the batch size also means you need to scale the learning rate (I'm not sure whether the Linear Scaling Rule works here), so if you want to use a batch size of 3, you might need to multiply the lr by 3. But even after such scaling, the results can still differ.
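For illustration, here is a minimal sketch of the Linear Scaling Rule mentioned above (Goyal et al., 2017). The base_lr and base_batch_size values below are illustrative assumptions, not this repo's actual training options, and as noted there is no guarantee the rule transfers cleanly to GAN training:

# Linear Scaling Rule sketch: scale the learning rate proportionally to
# the total batch size relative to the batch size the base lr was tuned for.
base_lr = 2e-4          # assumed default lr for CycleGAN/CUT-style training
base_batch_size = 1     # batch size the default lr was tuned for

def scaled_lr(batch_size: int) -> float:
    """Return a learning rate scaled linearly with the total batch size."""
    return base_lr * batch_size / base_batch_size

print(scaled_lr(3))  # 6e-4, i.e. 3x the base lr for a total batch size of 3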
Cheers, it looks like we have found the reason. And thank you for reporting the results for batch size 3.