
Question about AP in Figures 2-4

Open jlim13 opened this issue 5 years ago • 2 comments

How is AP 0.5 for "chance"? I just ran the evaluation script on some of the datasets with a model that doesn't use any of the weights you trained, and the APs I get are not 0.5. Accuracy is 0.5, though.

jlim13 avatar Dec 01 '20 04:12 jlim13

To get the chance performance, first assume a model that outputs real or fake by flipping a fair coin, regardless of the input. Then, at any given recall, the expected precision is 0.5, since anything classified as real has an equal probability of being actually real or fake. Integrating precision over recall then gives an AP of 0.5 for chance.
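A quick numeric sanity check of this argument (a sketch, not the repo's evaluation code; the balanced label set and uniform random scores are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Balanced test set: half real (1), half fake (0).
y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)]).astype(int)
# A "chance" detector: scores are independent of the input.
scores = rng.random(n)

# Average precision computed directly from the ranked predictions:
# AP = mean of precision@k over the ranks k where a positive appears.
order = np.argsort(-scores)
y_sorted = y[order]
precision = np.cumsum(y_sorted) / np.arange(1, n + 1)
ap = precision[y_sorted == 1].sum() / y.sum()
print(round(ap, 2))  # close to 0.5, matching the integration argument
```

Note this only comes out to 0.5 because the positives are half the test set; with an imbalanced set, chance AP equals the positive fraction instead.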

Hope this helps!

PeterWang512 avatar Dec 02 '20 06:12 PeterWang512

Hey, thanks for the reply! The message does help, thanks. This may be a really simple question, but why not just use classification accuracy on the test set (assuming the numbers of real and fake samples in your test set are the same)? I skimmed through some of the related works that also use AP for real/fake detection, but couldn't find a straightforward answer. I might just be overlooking something.
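One way to see the distinction behind this question (a sketch with synthetic scores, not the repo's detector; the shifted-Gaussian scores are an assumption): accuracy depends on the threshold you pick, while AP only depends on how the scores rank the samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
y = rng.integers(0, 2, n)  # roughly balanced real/fake labels
# A detector whose raw scores are uncalibrated (shifted) but still rank well:
scores = y + rng.normal(0, 1, n) + 3.0

# Accuracy changes with the threshold...
acc_naive = ((scores > 0.5).astype(int) == y).mean()   # near-chance: shift fools it
acc_tuned = ((scores > 3.5).astype(int) == y).mean()   # better: threshold matches scores
print(f"accuracy @0.5: {acc_naive:.2f}, @3.5: {acc_tuned:.2f}")

# ...while AP uses only the ranking, so the additive shift is irrelevant:
order = np.argsort(-scores)
y_sorted = y[order]
precision = np.cumsum(y_sorted) / np.arange(1, n + 1)
ap = precision[y_sorted == 1].sum() / y_sorted.sum()
print(f"AP: {ap:.2f}")
```

So with the same model, accuracy at a fixed 0.5 threshold can sit near chance while AP stays well above it, which is one common reason detection papers report AP.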

jlim13 avatar Dec 02 '20 18:12 jlim13