DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
While the idea of adversarial training is straightforward (generate adversarial examples during training and train on those examples until the model learns to classify them correctly), in practice it is difficult to...
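For reference, the canonical loop looks something like this minimal PyTorch sketch (Goodfellow-style FGSM adversarial training; the model, loader, optimizer, and eps here are placeholders, not the DEEPSEC code):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """One-step FGSM perturbation used to craft training-time adversarial examples."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, loader, optimizer, eps=0.3):
    """Train on adversarial examples generated on the fly."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, eps)
        optimizer.zero_grad()  # discard gradients accumulated while crafting x_adv
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```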
Despite the simplicity of the Fast Gradient Sign Method, it is surprisingly effective at generating adversarial examples on unsecured models. However, Table XIV reports the misclassification rate of FGSM at...
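As a sanity check, FGSM and the misclassification rate it induces take only a few lines of PyTorch; this sketch assumes a model and test loader with inputs scaled to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # Single-step l_infinity attack: move each pixel by eps in the sign of the gradient.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def misclassification_rate(model, loader, attack, **kw):
    # Fraction of test inputs the model misclassifies after the attack perturbs them.
    model.eval()
    wrong = total = 0
    for x, y in loader:
        pred = model(attack(model, x, y, **kw)).argmax(dim=1)
        wrong += (pred != y).sum().item()
        total += y.numel()
    return wrong / total

# e.g. misclassification_rate(model, test_loader, fgsm, eps=0.1)
```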
Table XIII states that on CIFAR-10 the R+FGSM attack was executed with eps=0.05 and alpha=0.05, whereas the README in the Attack module of the open-source code gives eps=0.1 and...
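For context on why the discrepancy matters, R+FGSM first takes a random step of size alpha and then a gradient step of size eps - alpha, so the two parameters interact directly; a minimal sketch following Tramer et al., with the Table XIII values as illustrative defaults:

```python
import torch
import torch.nn.functional as F

def rfgsm_perturb(model, x, y, eps=0.05, alpha=0.05):
    """R+FGSM: random sign step of size alpha, then an FGSM step of size
    eps - alpha, clipped to the valid pixel range."""
    # Random initial step.
    x_rand = (x + alpha * torch.randn_like(x).sign()).clamp(0.0, 1.0)
    x_rand = x_rand.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    loss.backward()
    # Gradient step with the remaining budget; note eps == alpha leaves no budget
    # for the gradient step at all.
    return (x_rand + (eps - alpha) * x_rand.grad.sign()).clamp(0.0, 1.0).detach()
```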
The PGD (and BIM) implementation in this repository is significantly less effective than reported in prior work. In Table XIV, PGD (or BIM) appears to succeed 82.4% (or 75.6%)...
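For comparison, a textbook PGD loop is below; BIM is the same loop without the random start. The step size, iteration count, and eps defaults are illustrative, not the repository's settings:

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=0.3, step=0.01, iters=40, random_start=True):
    """Projected gradient descent in the l_infinity ball of radius eps.
    With random_start=False this reduces to BIM (Kurakin et al.)."""
    x_adv = x.clone().detach()
    if random_start:
        x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```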
The JSMA implementation in this repository is significantly less effective than reported in prior work. In Table XIV, JSMA appears to succeed 76% of the time. When I run...
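For reference, the core of JSMA is a saliency map over the Jacobian of the logits; this heavily simplified sketch updates one pixel per call (the real attack modifies pixel pairs and iterates until the target class is reached), with theta and the batch-of-one assumption as placeholders:

```python
import torch

def jsma_saliency_step(model, x, target, theta=1.0):
    """One simplified JSMA step: compute the Jacobian of the logits w.r.t. the
    input, score each pixel by target-gain vs. other-class gain, and bump the
    single highest-scoring pixel by theta. Assumes x has batch size 1."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)  # shape (1, num_classes)
    num_classes = logits.shape[1]
    jac = torch.stack([
        torch.autograd.grad(logits[0, c], x, retain_graph=True)[0].flatten()
        for c in range(num_classes)
    ])  # (num_classes, num_pixels)
    target_grad = jac[target]
    other_grad = jac.sum(dim=0) - target_grad
    # Saliency: favor pixels that raise the target logit and lower the rest.
    saliency = torch.where(
        (target_grad > 0) & (other_grad < 0),
        target_grad * other_grad.abs(),
        torch.zeros_like(target_grad),
    )
    idx = saliency.argmax()
    x_new = x.detach().clone().flatten()
    x_new[idx] = (x_new[idx] + theta).clamp(0.0, 1.0)
    return x_new.view_as(x)
```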
On at least two counts the paper chooses l_infinity distortion bounds that are not well motivated.
- Throughout, the report studies CIFAR-10 distortions of eps=0.1 and eps=0.2...
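For scale: with pixels in [0, 1], an l_infinity budget of eps lets every pixel move by eps * 255 levels in 8-bit terms, so eps=0.2 permits per-pixel changes of roughly 51/255, far larger than the 8/255 commonly used for CIFAR-10; a quick check:

```python
# Per-pixel budget in 8-bit levels for each eps (8/255 ~= 0.031 is the
# bound commonly used for CIFAR-10, e.g. by Madry et al.).
for eps in (8 / 255, 0.05, 0.1, 0.2):
    print(f"eps={eps:.3f}: up to {round(eps * 255)}/255 levels per pixel")
```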
When measuring how well targeted attacks work, the metric should be the targeted attack success rate. However, Table V measures model misclassification rate. This is not the right way to do...
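The two metrics diverge exactly when an attack lands on a wrong but non-target class; a minimal sketch of both, assuming arrays of true labels, attack targets, and post-attack predictions:

```python
import numpy as np

def attack_metrics(y_true, y_target, y_pred_adv):
    """Misclassification rate vs. targeted success rate for a targeted attack."""
    y_true, y_target, y_pred_adv = map(np.asarray, (y_true, y_target, y_pred_adv))
    misclassification_rate = np.mean(y_pred_adv != y_true)
    targeted_success_rate = np.mean(y_pred_adv == y_target)
    return misclassification_rate, targeted_success_rate

# An attack that lands on a wrong but non-target class inflates the first
# metric while leaving the second untouched:
print(attack_metrics(y_true=[3, 3], y_target=[5, 5], y_pred_adv=[7, 5]))
# -> (1.0, 0.5)
```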
Three of the attacks presented (EAD, CW2, and BLB) are *unbounded* attacks: rather than finding the “worst-case” (i.e., highest loss) example within some distortion bound, they seek to find the...
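Since these attacks essentially always succeed and instead minimize distortion, the meaningful statistics are the median distortion or, for comparison at a fixed bound, the fraction of per-example distortions under that bound; a small sketch assuming per-example L2 distortions (np.inf where the attack failed):

```python
import numpy as np

def success_rate_at_eps(distortions, eps):
    """Convert unbounded-attack results (per-example minimal distortions)
    into a success rate at a fixed distortion bound."""
    return np.mean(np.asarray(distortions) <= eps)

def median_distortion(distortions):
    """The usual summary statistic for minimum-distortion attacks."""
    return np.median(np.asarray(distortions))

dists = [0.8, 1.4, 2.1, np.inf]             # illustrative L2 distortions
print(success_rate_at_eps(dists, eps=1.5))  # -> 0.5
print(median_distortion(dists))             # -> 1.75
```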
It is a basic observation that, when given strictly more power, the adversary should never do worse. However, in Table VII the paper reports that MNIST adversarial examples with their...
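One way to see why this is a red flag: any perturbation valid at eps=0.1 is also valid at eps=0.3, so an evaluation can always carry small-budget wins forward, and reported success rates should be monotone in eps. A sketch of that correction, assuming per-example success flags at increasing budgets:

```python
import numpy as np

def monotone_success(success_by_eps):
    """Given per-example success flags at increasing eps budgets, enforce the
    fact that a perturbation valid at a small eps is also valid at any larger
    one, so success rates should never decrease with eps."""
    flags = np.asarray(success_by_eps, dtype=bool)        # (n_eps, n_examples)
    cumulative = np.logical_or.accumulate(flags, axis=0)  # carry wins forward
    return cumulative.mean(axis=1)                        # success rate per eps

# eps=0.1 row, then eps=0.3 row: the raw rates would be 0.67 then 0.33,
# but the corrected rates are monotone:
print(monotone_success([[1, 1, 0], [0, 1, 0]]))  # -> [0.667, 0.667]
```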