Question about AP in Figures 2-4
How is AP 0.5 for "chance"? I just ran the evaluation script on some of the datasets with a model that doesn't load any of the weights you trained, and I get APs that are not 0.5. Accuracy is 0.5, though.
To get the chance-level number, first assume a model that labels each input real or fake by flipping a fair coin, regardless of the input (equivalently, one whose scores carry no information about the input). Then, at any recall, the expected precision is 0.5: since the test set is balanced, anything the model flags as fake (the positive class) has an equal probability of actually being real or fake. Integrating precision over recall then gives an AP of 0.5 for chance.
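As a quick sanity check of that argument (a minimal sketch, not the repo's evaluation script, and assuming "fake" is treated as the positive class), you can score a balanced set of labels with purely random scores and see AP come out near 0.5:

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n = 100_000  # balanced test set: half real, half fake

# 1 = fake (positive class), 0 = real -- an assumed convention for this sketch
y_true = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# "Coin-flip" model: scores carry no information about the input
y_score = rng.random(n)

print(average_precision_score(y_true, y_score))  # ~0.5
```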
Hope this helps!
Hey, thanks for the reply! That does help. This may be a very simple question, but why not just use classification accuracy on the test set (assuming the test set has the same number of real and fake samples)? I skimmed some of the related works that also use AP for real/fake detection, but couldn't find a straightforward answer. I might just be overlooking something.