
Large differences in experimental results when BATCH_SIZE = 16 and EPOCH = 500

[Open] Xiyan-Xu opened this issue 2 years ago • 14 comments

Thanks for sharing your great work! I trained the model myself following your README guidelines, but set BATCH_SIZE = 16 and EPOCH = 500 due to limited computing resources. In this setting, my trained model performs much worse than the evaluation results presented in the paper. I am wondering whether the exact same training settings are essential for the model to reach performance similar to the paper's. Besides, could you kindly release a checkpoint trained exclusively on the training set? That would be really helpful for me! Thanks for your time and patience!

Xiyan-Xu avatar Oct 30 '23 17:10 Xiyan-Xu

Sorry about that. There were some typos in evaluator.py, which we have already fixed. Please make sure your code is up to date.

tr3e avatar Nov 02 '23 12:11 tr3e

Thanks for the reply. I am sure my code is up to date. Could you release a checkpoint trained exclusively on the training set? That would be really helpful.

Xiyan-Xu avatar Nov 02 '23 15:11 Xiyan-Xu

I have trained for 1500 epochs with a batch size of 16 and get an FID of 12.9409, compared to the 5.9 reported in the paper. Is there any reason for such a difference? Were all the other parameters in the config files the same ones used to train the model reported in the paper?

Thanks :)

pabloruizponce avatar Dec 04 '23 17:12 pabloruizponce

I am looking into it and will get back to you as soon as possible.

tr3e avatar Dec 05 '23 03:12 tr3e

@tr3e Any news on the issue? I have trained a model with the same configuration as the one in your repo (except the batch size):

GENERAL:
  EXP_NAME: IG-S-8
  CHECKPOINT: ./checkpoints
  LOG_DIR: ./log

TRAIN:
  LR: 1e-4
  WEIGHT_DECAY: 0.00002
  BATCH_SIZE: 16
  EPOCH: 2000
  STEP: 1000000
  LOG_STEPS: 10
  SAVE_STEPS: 20000
  SAVE_EPOCH: 100
  RESUME: #checkpoints/IG-S/8/model/epoch=99-step=17600.ckpt
  NUM_WORKERS: 2
  MODE: finetune
  LAST_EPOCH: 0
  LAST_ITER: 0

But these are my results using your evaluation script:

========== MM Distance Summary ==========
---> [ground truth] Mean: 3.7844 CInterval: 0.0012
---> [InterGen] Mean: 3.8818 CInterval: 0.0017
========== R_precision Summary ==========
---> [ground truth](top 1) Mean: 0.4306 CInt: 0.0070;(top 2) Mean: 0.6110 CInt: 0.0086;(top 3) Mean: 0.7092 CInt: 0.0060;
---> [InterGen](top 1) Mean: 0.2517 CInt: 0.0071;(top 2) Mean: 0.3818 CInt: 0.0048;(top 3) Mean: 0.4662 CInt: 0.0046;
========== FID Summary ==========
---> [ground truth] Mean: 0.2966 CInterval: 0.0085
---> [InterGen] Mean: 10.7803 CInterval: 0.1791
========== Diversity Summary ==========
---> [ground truth] Mean: 7.7673 CInterval: 0.0440
---> [InterGen] Mean: 7.8075 CInterval: 0.0274
========== MultiModality Summary ==========
---> [InterGen] Mean: 1.5340 CInterval: 0.0615

As you can see, the results are very far from the ones reported in the paper. I am conducting ongoing research using your dataset, but in order to make a fair comparison, we need to be able to replicate your results.

Hope you find out what's going on :)

pabloruizponce avatar Dec 21 '23 17:12 pabloruizponce
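
For reference on why this metric moves so much: FID in these summaries is presumably the standard Fréchet distance between Gaussians fitted to evaluator embeddings of real and generated motions, which makes it very sensitive to distribution shift from an under-trained model. Below is a minimal sketch of the standard formula, not the repo's exact evaluator code, assuming real_feats and gen_feats are (N, D) arrays of motion-feature embeddings:

import numpy as np
from scipy import linalg

def fid(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    # Fit a Gaussian (mean, covariance) to each feature set.
    mu1, mu2 = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    sigma1 = np.cov(real_feats, rowvar=False)
    sigma2 = np.cov(gen_feats, rowvar=False)
    diff = mu1 - mu2
    # Matrix square root of the covariance product; numerical noise can
    # introduce tiny imaginary parts, which we drop.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))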

Hello! I have run the newest training code from this repo exactly as released, with a batch size of 64 (32 on each of 2 GPUs), for 1500 epochs. The results are as follows:

========== MM Distance Summary ==========
---> [ground truth] Mean: 3.7847 CInterval: 0.0007
---> [InterGen] Mean: 4.1817 CInterval: 0.0009
========== R_precision Summary ==========
---> [ground truth](top 1) Mean: 0.4248 CInt: 0.0046;(top 2) Mean: 0.6036 CInt: 0.0044;(top 3) Mean: 0.7026 CInt: 0.0047;
---> [InterGen](top 1) Mean: 0.3785 CInt: 0.0052;(top 2) Mean: 0.5163 CInt: 0.0040;(top 3) Mean: 0.6350 CInt: 0.0032;
========== FID Summary ==========
---> [ground truth] Mean: 0.2981 CInterval: 0.0057
---> [InterGen] Mean: 5.8447 CInterval: 0.0735
========== Diversity Summary ==========
---> [ground truth] Mean: 7.7516 CInterval: 0.0163
---> [InterGen] Mean: 7.8750 CInterval: 0.0324
========== MultiModality Summary ==========
---> [InterGen] Mean: 1.5634 CInterval: 0.0334

We suggest updating to the newest code and, if you can, increasing the batch size.

tr3e avatar Dec 23 '23 09:12 tr3e
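
One possible contributing factor, offered as a heuristic rather than the authors' recipe: the released LR of 1e-4 was presumably tuned for the batch size above (64 in total), so training at BATCH_SIZE = 16 with the same LR changes the optimization dynamics. The linear scaling rule (Goyal et al., 2017) is a common way to adjust the LR when the batch size changes:

# Hypothetical helper illustrating the linear LR scaling rule; the base
# values mirror this thread (LR 1e-4 at an effective batch size of 64),
# and the heuristic itself is not confirmed by the authors.
def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    return base_lr * new_batch / base_batch

print(scaled_lr(1e-4, 64, 16))  # 2.5e-05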

@tr3e I am still unable to replicate the results. Could you provide a contact method so we can discuss this without filling up this issue?

pabloruizponce avatar Jan 08 '24 11:01 pabloruizponce

@tr3e I am still unable to replicate the results. Could you provide a contact method so we can discuss this without filling up this issue?

Me too.

Xiyan-Xu avatar Jan 08 '24 16:01 Xiyan-Xu

My email is [email protected] :)

tr3e avatar Jan 09 '24 02:01 tr3e

Hello! I have run the newest training code from this repo exactly as released, with a batch size of 64 (32 on each of 2 GPUs), for 1500 epochs. The results are as follows:

========== MM Distance Summary ==========
---> [ground truth] Mean: 3.7847 CInterval: 0.0007
---> [InterGen] Mean: 4.1817 CInterval: 0.0009
========== R_precision Summary ==========
---> [ground truth](top 1) Mean: 0.4248 CInt: 0.0046;(top 2) Mean: 0.6036 CInt: 0.0044;(top 3) Mean: 0.7026 CInt: 0.0047;
---> [InterGen](top 1) Mean: 0.3785 CInt: 0.0052;(top 2) Mean: 0.5163 CInt: 0.0040;(top 3) Mean: 0.6350 CInt: 0.0032;
========== FID Summary ==========
---> [ground truth] Mean: 0.2981 CInterval: 0.0057
---> [InterGen] Mean: 5.8447 CInterval: 0.0735
========== Diversity Summary ==========
---> [ground truth] Mean: 7.7516 CInterval: 0.0163
---> [InterGen] Mean: 7.8750 CInterval: 0.0324
========== MultiModality Summary ==========
---> [InterGen] Mean: 1.5634 CInterval: 0.0334

We suggest updating to the newest code and, if you can, increasing the batch size.

Hi, I found that the MM Dist here is worse than what is presented in the paper. When I reproduce your work, as well as with my own model, the MM Dist is always around 4. Is there any mistake in the calculation?

szqwu avatar May 06 '24 01:05 szqwu
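
For context on the MM Dist question: in the common text-to-motion evaluation protocol (Guo et al.), MM Dist is the mean Euclidean distance between each text embedding and the embedding of the motion generated for it, so its absolute value depends entirely on the scale of the evaluator's embedding space, and values around 4 are plausible rather than evidence of a bug. A minimal sketch, assuming paired (N, D) text and motion embeddings from the evaluator (this mirrors the common protocol, not necessarily this repo's exact implementation):

import numpy as np

def mm_dist(text_emb: np.ndarray, motion_emb: np.ndarray) -> float:
    # Mean Euclidean distance between matched text/motion embedding pairs.
    return float(np.linalg.norm(text_emb - motion_emb, axis=1).mean())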

[Screenshot 2024-06-21 110045] The R_precision of the InterGen model I reproduced is always higher than that of the GT. Does anyone know the reason for this? Thank you very much.

RunqiWang77 avatar Jun 21 '24 03:06 RunqiWang77
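
On the R_precision question: under the usual batch-of-32 retrieval protocol (assumed here, not verified against this repo), each text embedding is ranked against its matched motion plus 31 random mismatched ones. A generated motion that sits closer to the text embedding than the noisy ground-truth motion can therefore legitimately score higher than GT; GT is not an upper bound. A minimal sketch of that protocol:

import numpy as np

def r_precision(text_emb: np.ndarray, motion_emb: np.ndarray,
                top_k: int = 3, pool: int = 32, seed: int = 0) -> np.ndarray:
    # For each text, rank 1 matched + (pool - 1) mismatched motions by
    # Euclidean distance and count how often the match lands in the top k.
    rng = np.random.default_rng(seed)
    n = len(text_emb)
    hits = np.zeros(top_k)
    for i in range(n):
        distractors = rng.choice(np.delete(np.arange(n), i), pool - 1, replace=False)
        cand = np.concatenate(([i], distractors))
        d = np.linalg.norm(motion_emb[cand] - text_emb[i], axis=1)
        rank = int(np.argsort(d).tolist().index(0))  # position of the match
        hits += (rank <= np.arange(top_k))
    return hits / n  # top-1 .. top-k accuracies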

Hi all! I am wondering how you train the model. I trained for 2000 epochs with a batch size of 16 on 4 GPUs, but got bad results. Could you give me some advice? [screenshot of results]

nancy-ux avatar Aug 16 '24 06:08 nancy-ux

Hi all! I am wondering how you train the model. I trained for 2000 epochs with a batch size of 16 on 4 GPUs, but got bad results. Could you give me some advice? [screenshot of results]

I got similar results. Have you resolved this problem?

blue-blue272 avatar Oct 12 '24 09:10 blue-blue272

Hi all! I am wondering how you train the model. I trained for 2000 epochs with a batch size of 16 on 4 GPUs, but got bad results. Could you give me some advice? [screenshot of results]

I got similar results. Have you resolved this problem?

Maybe I had changed the code without noticing; I never found the bug. But I re-downloaded the code, retrained the model, and then everything was okay.

nancy-ux avatar Oct 15 '24 12:10 nancy-ux