
Inferior re-implementation results

Open TonyXuQAQ opened this issue 2 years ago • 3 comments

Hi, I ran the training script maptr_tiny_r50_24e from your code using 3 RTX 3090 GPUs. However, the final validation mAP is 44, which is much lower than the 50 mAP reported in your paper. Please find the log here: log_google_drive.

Is there anything that I missed? Or could the number of GPUs affect the final results (though the difference seems too large for that alone)? Could you please provide some suggestions? Thanks for your consideration.

TonyXuQAQ · Mar 09 '23

We have not checked 3-GPU training for MapTR; all our experiments are performed in the 8-GPU setting. The gap may be attributable to the smaller total batch size, so we recommend training with a larger batch size. You can also try a longer schedule, since MapTR has not fully converged under the 24-epoch schedule. We observed a fluctuation of about 1 mAP when checking the implementation in this repo, and we report the median result of 50.0 mAP with the corresponding log, which is slightly inferior to the 50.3 mAP in our paper.
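
For reference, a minimal sketch of the linear learning-rate scaling heuristic one could apply when the total batch shrinks. The 8 GPU × 4 samples/GPU reference and the base LR value below are assumptions for illustration; read the actual values from configs/maptr/maptr_tiny_r50_24e.py before using them:

```python
# Linear LR scaling sketch: scale the base learning rate by the ratio of
# your total batch size to the reference total batch size.
# All numbers are illustrative; take the real ones from the config file.

ref_gpus, ref_samples_per_gpu = 8, 4   # assumed reference setting (paper runs)
my_gpus, my_samples_per_gpu = 3, 4     # the 3x RTX 3090 run in question

base_lr = 6e-4                         # hypothetical base LR; check the config
scaled_lr = base_lr * (my_gpus * my_samples_per_gpu) / (ref_gpus * ref_samples_per_gpu)
print(f"scaled lr for a total batch of {my_gpus * my_samples_per_gpu}: {scaled_lr:.2e}")
```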

LegendBC · Mar 09 '23

Hi, may I ask how long it took to train the maptr_tiny_r50_24e model in the 8-GPU setting? And can I confirm that the batch size on each GPU is 4? Thank you very much.

Zhutianyi7230 · Mar 12 '23

Hi @Zhutianyi7230, it takes about 13 hours.

LegendBC · Mar 19 '23