The attack is performed on GT label and not on the label predicted by the model on the clean image
I really appreciate your nice work! But I think there is an error: the attack is performed on the GT label, not on the label the model predicts for the clean image.
Hope to get your answer, thank you!
We appreciate your interest in our work!
This is not an error; there are simply two possible settings. The comparison is valid as long as all methods use the same setting. In this paper, our method and all comparative methods use the GT label both to generate adversarial examples and to evaluate performance.
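To make the GT-label setting concrete, here is a minimal sketch of a one-step FGSM-style attack on a toy linear classifier, where the gradient is taken with respect to the ground-truth label rather than the model's clean prediction. This is only an illustration of the setting, not the paper's actual attack pipeline; the linear model, `fgsm_gt`, and all parameters are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_gt(x, y_gt, W, b, eps):
    """One-step FGSM using the ground-truth label y_gt (the setting
    described above), on a toy linear model logits = W @ x + b."""
    logits = W @ x + b
    p = softmax(logits)
    onehot = np.zeros_like(p)
    onehot[y_gt] = 1.0
    # d(cross-entropy)/dx for a linear model: W^T (softmax - onehot)
    grad_x = W.T @ (p - onehot)
    return x + eps * np.sign(grad_x)

# Hypothetical usage: a 2-class linear model where class 0 is correct.
W = np.eye(2)
b = np.zeros(2)
x = np.array([2.0, 0.0])          # clean input, predicted as class 0
x_adv = fgsm_gt(x, y_gt=0, W=W, b=b, eps=3.0)
```

The key point is that `y_gt` is supplied externally from the dataset labels; in the alternative setting one would instead pass `np.argmax(W @ x + b)` (the clean prediction) as the attack target.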
Ok, thank you. I have another question: how do you calculate the success-rate results for clean and adversarial inputs in Table 2 of the paper? In the clean configuration, is the success rate computed between the classifier's prediction on the clean image and the defense algorithm's prediction with the clean image as input? And in the adversarial configuration, is it between the classifier's prediction on the adversarial image and the defense algorithm's prediction with the adversarial image as input?
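In other words, the metric I have in mind is a simple agreement rate between two sets of predictions. A tiny sketch of what I mean (the function name and the prediction arrays are hypothetical, just to pin down the question):

```python
import numpy as np

def agreement_rate(preds_a, preds_b):
    """Fraction of samples on which two sets of predictions agree --
    one possible reading of the 'success rate' asked about above."""
    preds_a = np.asarray(preds_a)
    preds_b = np.asarray(preds_b)
    return float((preds_a == preds_b).mean())

# Hypothetical predictions on 4 samples:
clf_on_clean = [0, 1, 2, 3]       # classifier, clean images
defense_on_clean = [0, 1, 0, 3]   # defense algorithm, clean images
clean_rate = agreement_rate(clf_on_clean, defense_on_clean)
```

Is this the clean-configuration computation, with the analogous call on adversarial-image predictions for the adversarial configuration? Or is the success rate measured against the GT labels instead?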
Thank you!