
The test accuracy is lower than what README.md mentions

Open laeglaur opened this issue 5 years ago • 5 comments

Thanks for the reimplementation. I used the released weights to run the model, but the test results are lower than those mentioned in README.md.

text_threshold=0.7, low_text=0.4, link_threshold=0.4

Syndata+IC13+IC17, tested on ICDAR2013: {"precision": 0.8733264675592173, "recall": 0.7744292237442922, "hmean": 0.8209099709583736}
Syndata+IC15, tested on ICDAR2015: {"precision": 0.8037280701754386, "recall": 0.705825710158883, "hmean": 0.7516021532940271}
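(As a quick sanity check, the "hmean" reported by the ICDAR evaluation script is just the harmonic mean (F1) of precision and recall, so the numbers above can be verified to be internally consistent; a minimal check in Python:)

```python
def hmean(precision, recall):
    """Harmonic mean of precision and recall (the ICDAR 'hmean', i.e. F1)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The ICDAR2013 numbers reported above:
print(hmean(0.8733264675592173, 0.7744292237442922))  # ~0.8209
```

So the reported triples are consistent; whatever is wrong, it is upstream of the metric computation.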

Is there something wrong with the weights?

laeglaur avatar Dec 21 '20 09:12 laeglaur


I tested Syndata.pth on the ICDAR13 test set, and the score is lower than the README.md mentions: {"precision": 0.5976377952755906, "recall": 0.6931506849315069, "hmean": 0.641860465116279, "AP": 0}

I also tried to train Syndata.pth myself and tested it on the ICDAR13 test set; the result is {"precision": 0.6335740072202166, "recall": 0.6410958904109589, "hmean": 0.6373127553336361, "AP": 0}

PS: one epoch's log with 10 validation outputs is as follows:

2021/08/16 01:22:16 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5422477440525021, "recall": 0.6036529680365297, "hmean": 0.571305099394987, "AP": 0}
2021/08/16 07:16:50 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.6194690265486725, "recall": 0.5753424657534246, "hmean": 0.5965909090909091, "AP": 0}
2021/08/16 13:40:54 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5865470852017938, "recall": 0.5972602739726027, "hmean": 0.5918552036199095, "AP": 0}
2021/08/16 20:29:51 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.6377799415774099, "recall": 0.5981735159817352, "hmean": 0.6173421300659755, "AP": 0}
2021/08/17 03:35:45 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.6593291404612159, "recall": 0.5744292237442923, "hmean": 0.6139580283064909, "AP": 0}
2021/08/17 11:02:21 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.6335740072202166, "recall": 0.6410958904109589, "hmean": 0.6373127553336361, "AP": 0}
2021/08/17 18:36:44 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5971760797342193, "recall": 0.6566210045662101, "hmean": 0.6254893431926924, "AP": 0}
2021/08/18 02:12:47 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5579029733959311, "recall": 0.6511415525114155, "hmean": 0.6009270965023177, "AP": 0}
2021/08/18 09:55:10 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.594, "recall": 0.5424657534246575, "hmean": 0.5670644391408114, "AP": 0}
2021/08/18 17:39:25 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5416963649322879, "recall": 0.6940639269406392, "hmean": 0.6084867894315452, "AP": 0}
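(To pick the best checkpoint out of a log like this, one can parse the JSON payload at the end of each rrc_evaluation_funcs line; a small sketch, assuming only the log format shown above — `best_by_hmean` is an illustrative helper, not part of the repo:)

```python
import json
import re

def best_by_hmean(log_lines):
    """Extract the {"precision": ..., "hmean": ...} JSON from each
    evaluation log line and return the entry with the highest hmean."""
    results = []
    for line in log_lines:
        match = re.search(r"\{.*\}", line)
        if match:
            results.append(json.loads(match.group(0)))
    return max(results, key=lambda r: r["hmean"])

log = [
    '2021/08/16 01:22:16 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5422477440525021, "recall": 0.6036529680365297, "hmean": 0.571305099394987, "AP": 0}',
    '2021/08/17 11:02:21 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.6335740072202166, "recall": 0.6410958904109589, "hmean": 0.6373127553336361, "AP": 0}',
]
print(best_by_hmean(log)["hmean"])  # 0.6373127553336361
```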

madajie9 avatar Aug 19 '21 13:08 madajie9

Hi,

I think this is not the authors' fault. I used the pretrained SynthText model provided by the authors and continued training on ICDAR2015. I got:

{"precision": 0.8537688442211055, "recall": 0.8180067404910929, "hmean": 0.8355052864519302, "AP": 0}

Lovegood-1 avatar Nov 02 '21 02:11 Lovegood-1


Thank you very much for replying! Did you use the "new gaussian map method" option in the training script?

madajie9 avatar Nov 02 '21 06:11 madajie9


Can you show me the line in the code where "new gaussian map method" is used? I just ran the official training script trainic15data.py without any changes, so maybe I am using that option if it is enabled by default.

Lovegood-1 avatar Nov 04 '21 07:11 Lovegood-1


Hi, could you teach me how to evaluate my model? I'm quite new to this and can't understand eval/script.py. Thank you very much!
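(Judging by the rrc_evaluation_funcs lines in the logs above, eval/script.py appears to be the standard ICDAR Robust Reading (RRC) evaluation script: it compares a zip of per-image result files against a ground-truth zip. A minimal sketch of packing detections into an ICDAR2015-style submission zip — the `write_submission` helper and the file paths here are illustrative assumptions, not code from this repo:)

```python
import zipfile

def write_submission(detections, zip_path):
    """Pack detections into the zip layout the RRC evaluation script expects:
    one res_img_<id>.txt per image, each line 'x1,y1,x2,y2,x3,y3,x4,y4'
    (the common ICDAR2015 naming convention; adjust for other datasets)."""
    with zipfile.ZipFile(zip_path, "w") as zf:
        for image_id, boxes in detections.items():
            lines = [",".join(str(int(v)) for v in box) for box in boxes]
            zf.writestr(f"res_img_{image_id}.txt", "\n".join(lines))

# Hypothetical detections: one quadrilateral for image 1
dets = {1: [[38, 43, 920, 39, 923, 121, 41, 125]]}
write_submission(dets, "submit.zip")
# Then, typically: python script.py -g=gt.zip -s=submit.zip
```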

wangbi0912 avatar Apr 08 '22 01:04 wangbi0912