
About reproducing the Faster R-CNN results on 10% labeled COCO data

Open tamama9018 opened this issue 1 year ago • 8 comments

Thanks for your great work! I reproduced the Faster R-CNN results on the 10% labeled COCO split, and got the following mAP:

2024/03/15 20:47:31 - mmengine - INFO - bbox_mAP_copypaste: 0.112 0.226 0.097 0.054 0.123 0.151
2024/03/15 20:47:31 - mmengine - INFO - Iter(val) [5000/5000]    teacher/coco/bbox_mAP: 0.1490  teacher/coco/bbox_mAP_50: 0.2720  teacher/coco/bbox_mAP_75: 0.1480  teacher/coco/bbox_mAP_s: 0.0790  teacher/coco/bbox_mAP_m: 0.1580  teacher/coco/bbox_mAP_l: 0.2010  student/coco/bbox_mAP: 0.1120  student/coco/bbox_mAP_50: 0.2260  student/coco/bbox_mAP_75: 0.0970  student/coco/bbox_mAP_s: 0.0540  student/coco/bbox_mAP_m: 0.1230  student/coco/bbox_mAP_l: 0.1510  data_time: 0.0071  time: 0.0399
2024/03/15 20:47:31 - mmengine - INFO - Saving checkpoint at 1 epochs

This is my train log: 20240314_032925.log

I don't seem to have reached the mAP described in the paper (37.16 ± 0.15). Am I doing something wrong? I would be happy to receive a reply.

tamama9018 avatar May 09 '24 03:05 tamama9018

https://huggingface.co/czm369/MixPL/tree/main/mixpl_faster-rcnn_r50-caffe_fpn_180k_coco-s1-p10.py

Czm369 avatar May 09 '24 12:05 Czm369

Which script did you use to split the COCO dataset? Also, which validation dataset did you use?

tamama9018 avatar May 13 '24 09:05 tamama9018
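For reference, here is a minimal sketch of the kind of 10%/90% labeled-unlabeled split commonly used for semi-supervised COCO experiments. This is an assumption about the general procedure, not the actual MixPL split script; the function name and file paths are hypothetical:

```python
import json
import random

def split_coco(ann_file, labeled_out, unlabeled_out, percent=10, seed=1):
    """Split a COCO annotation file into labeled/unlabeled subsets by image id."""
    with open(ann_file) as f:
        coco = json.load(f)
    image_ids = sorted(img["id"] for img in coco["images"])
    rng = random.Random(seed)  # the seed corresponds to a split like "s1"
    rng.shuffle(image_ids)
    n_labeled = int(len(image_ids) * percent / 100)
    labeled_ids = set(image_ids[:n_labeled])

    def subset(keep_labeled):
        # Keep images (and their annotations) on one side of the split.
        imgs = [i for i in coco["images"] if (i["id"] in labeled_ids) == keep_labeled]
        anns = [a for a in coco["annotations"] if (a["image_id"] in labeled_ids) == keep_labeled]
        return {**coco, "images": imgs, "annotations": anns}

    with open(labeled_out, "w") as f:
        json.dump(subset(True), f)
    with open(unlabeled_out, "w") as f:
        json.dump(subset(False), f)
```

Validation, by contrast, is normally done on the full, untouched val2017 annotations.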

(quoting tamama9018's original report above)

Hello,

Hope you are doing well! I am also getting the same results and I am not sure what to do next. Could you please help if you know anything about this?

Thanks, Bharani.

bharanibala avatar Jun 18 '24 20:06 bharanibala

https://huggingface.co/czm369/MixPL/tree/main/mixpl_faster-rcnn_r50-caffe_fpn_180k_coco-s1-p10.py

Hello,

Thanks for your great work on the algorithm! I am following your approach, but I am getting TypeError: MeanTeacherHook.__init__() got an unexpected keyword argument 'gamma'. Could you please suggest a workaround?

Thanks, Bharani.

bharanibala avatar Jun 20 '24 06:06 bharanibala
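A possible workaround (an assumption, since the fix is not confirmed in this thread): the gamma argument likely belongs to AnnealMeanTeacherHook, so when falling back to MeanTeacherHook the unsupported keys need to be dropped from the custom_hooks entry. A hypothetical sketch, where SUPPORTED_KEYS is an assumed approximation of MeanTeacherHook's signature and the gamma value is illustrative:

```python
# Assumed set of keyword arguments MeanTeacherHook accepts (besides 'type',
# which mmengine uses to look up the hook class in the registry).
SUPPORTED_KEYS = {"type", "momentum", "interval", "skip_buffer"}

def sanitize_hook_cfg(hook_cfg):
    """Drop keys (e.g. 'gamma') that MeanTeacherHook does not accept."""
    return {k: v for k, v in hook_cfg.items() if k in SUPPORTED_KEYS}

# Hypothetical custom_hooks entry carried over from the AnnealMeanTeacherHook config.
hook_cfg = dict(type="MeanTeacherHook", momentum=0.0002, gamma=4)
clean_cfg = sanitize_hook_cfg(hook_cfg)
```

Equivalently, one can simply delete the gamma line from the custom_hooks entry in the config file.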

https://huggingface.co/czm369/MixPL/tree/main/mixpl_faster-rcnn_r50-caffe_fpn_180k_coco-s1-p10.py

Hi,

I could not find the AnnealMeanTeacherHook module. Is it fine to use MeanTeacherHook instead? Could you please help me with this?

Thanks, Bharani.

bharanibala avatar Jul 01 '24 20:07 bharanibala

AnnealMeanTeacherHook just adds a linear warmup to MeanTeacher, so you can use MeanTeacherHook without affecting performance.

Czm369 avatar Jul 02 '24 03:07 Czm369
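For intuition, here is a minimal, framework-free sketch of a mean-teacher EMA update with a linear momentum warmup. The function names and the exact schedule are illustrative assumptions, not the actual AnnealMeanTeacherHook code:

```python
def ema_momentum(step, target=0.999, warmup_steps=1000):
    """Linearly ramp the EMA momentum from 0 to `target` over the warmup."""
    if step >= warmup_steps:
        return target
    return target * step / warmup_steps

def ema_update(teacher, student, step, target=0.999, warmup_steps=1000):
    """One mean-teacher update: teacher <- m * teacher + (1 - m) * student.

    During warmup m is small, so the teacher tracks the (still weak) student
    closely; afterwards m is close to 1 and the teacher averages slowly.
    """
    m = ema_momentum(step, target, warmup_steps)
    return [m * t + (1.0 - m) * s for t, s in zip(teacher, student)]
```

With this schedule, at step 0 the teacher is simply copied from the student, which is why dropping the warmup (plain MeanTeacherHook) mostly changes only the first few thousand iterations.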

Hi, I'm facing the same issue. I used the config from https://huggingface.co/czm369/MixPL/tree/main/mixpl_faster-rcnn_r50-caffe_fpn_180k_coco-s1-p10.py, replacing AnnealMeanTeacherHook with MeanTeacherHook but leaving the file otherwise untouched. I'm getting the following results:

Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ]  = 0.110
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.225
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.098
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.057
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.119
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.146
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ]  = 0.238
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300 ]  = 0.238
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.238
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.102
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.248
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.327
08/02 06:47:39 - mmengine - INFO - bbox_mAP_copypaste: 0.110 0.225 0.098 0.057 0.119 0.146
08/02 06:47:40 - mmengine - INFO - Iter(val) [5000/5000]    teacher/coco/bbox_mAP: 0.1470  teacher/coco/bbox_mAP_50: 0.2680  teacher/coco/bbox_mAP_75: 0.1480  teacher/coco/bbox_mAP_s: 0.0780  teacher/coco/bbox_mAP_m: 0.1550  teacher/coco/bbox_mAP_l: 0.1950  student/coco/bbox_mAP: 0.1100  student/coco/bbox_mAP_50: 0.2250  student/coco/bbox_mAP_75: 0.0980  student/coco/bbox_mAP_s: 0.0570  student/coco/bbox_mAP_m: 0.1190  student/coco/bbox_mAP_l: 0.1460  data_time: 0.0103  time: 0.0833

These seem to be in line with @tamama9018 's results, but are quite a bit lower than the numbers from the paper.

FreddiEichhorn avatar Aug 02 '24 09:08 FreddiEichhorn

I have yet to solve this problem. One question: COCO val2017 is supposed to contain 5000 images, but your log on Hugging Face seems to show a validation run of only 625 iterations. Why is that? I would appreciate a reply.

2023/04/21 17:55:39 - mmengine - INFO - Iter(val) [ 50/625]    eta: 0:00:32  time: 0.0562  data_time: 0.0058  memory: 1283

tamama9018 avatar Oct 02 '24 08:10 tamama9018
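One plausible explanation (an assumption, not confirmed in the thread): if the uploaded run used a validation batch size of 8 while the local runs used 1, the same 5000 val2017 images would be consumed in 625 iterations, so the full validation set is still evaluated:

```python
# Sanity check: 5000 val2017 images at an assumed val batch size of 8.
num_val_images = 5000       # size of COCO val2017
assumed_batch_size = 8      # assumption; not stated in the uploaded log
num_iters = num_val_images // assumed_batch_size
# This matches the "Iter(val) [ 50/625]" progress line, versus
# "[5000/5000]" when the val batch size is 1.
```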