
Visual Perturbation Metrics

Open someshsingh22 opened this issue 4 years ago • 5 comments

For evasive whitebox or blackbox attacks, the objective of each attack is to fool the model into predicting a different class while keeping the perturbation deceptive by making only small changes. These changes are measured as distances, for example the L1/L2 norm of the difference between the original and perturbed inputs.

Implement these metrics

  • [x] L1, L2 ... Lk Norm
  • [ ] ISSM
  • [ ] PSNR
  • [x] SAM
  • [x] SRE

You can find numpy and cv2 implementation at https://github.com/up42/image-similarity-measures/blob/master/image_similarity_measures/quality_metrics.py
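As a starting point for the norm-based metrics, here is a minimal NumPy sketch (the function name `lp_distance` is illustrative, not from the linked repository): it flattens the perturbation and computes its Lp norm, with `np.inf` covering the L-infinity case.

```python
import numpy as np

def lp_distance(original, perturbed, p=2):
    """Lp norm of the perturbation between two arrays.

    p may be 1, 2, any positive integer, or np.inf for the
    L-infinity norm. Casting to float avoids integer overflow.
    """
    diff = np.asarray(original, dtype=float) - np.asarray(perturbed, dtype=float)
    # Flatten so the metric works for images and feature vectors alike.
    return np.linalg.norm(diff.ravel(), ord=p)

x = np.zeros((2, 2))
y = np.array([[3.0, 0.0], [0.0, 4.0]])
lp_distance(x, y, p=1)       # 7.0
lp_distance(x, y, p=2)       # 5.0
lp_distance(x, y, p=np.inf)  # 4.0
```

`np.linalg.norm` with `ord=0` counts nonzero entries, so the same helper would also cover an L0 "number of changed pixels" metric if that is wanted.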

someshsingh22 avatar Aug 22 '21 03:08 someshsingh22

I would like to take this up. I plan on creating a function that takes the original and the post-attack image/feature vector and outputs the L1, L2, and L(infinity) norms. Should I also implement the L0 norm and other less conventional norms like L3, L4, ...?

ShreeyashGo avatar Aug 23 '21 09:08 ShreeyashGo

I would like to take up this issue

devaletanmay avatar Aug 23 '21 14:08 devaletanmay

@ShreeyashGo @devaletanmay we are not using separate classes, since the code doesn't look very clean with that many metrics. I am also adding more metrics, so you two can split the implementation.

someshsingh22 avatar Aug 24 '21 08:08 someshsingh22

After implementing SRE and SAM, these are my observations:

  • Test SRE: 41.36633261587073; SRE from implementation: 39.9395
  • Test SAM: 89.34839413786915 (with the input numpy array's default dtype of integer)
  • Test SAM: 34.38530383960234 (with the input numpy array's dtype changed to float)
  • SAM from implementation: 34.385303497314453

ShreeyashGo avatar Oct 04 '21 19:10 ShreeyashGo

OK, we will change the test.

someshsingh22 avatar Oct 11 '21 13:10 someshsingh22