
[New Model] <Amini2024MeanSparse>

Open mrteymoorian opened this issue 1 year ago • 4 comments

Paper Information

  • Paper Title: MeanSparse: Post-Training Robustness Enhancement Through Mean-Centered Feature Sparsification
  • Paper URL: https://arxiv.org/pdf/2406.05927
  • Paper authors: Sajjad Amini, Mohammadreza Teymoorianfard, Shiqing Ma, Amir Houmansadr
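For context, the core idea in the paper is a post-training operator that sparsifies mean-centered feature variations: values that deviate from a feature's running mean by less than a threshold are snapped back to the mean, while larger deviations pass through. Below is a minimal, hedged sketch of that idea in plain Python; the `alpha * std` band and the function name `mean_sparse` are illustrative assumptions for this sketch, not the paper's exact parameterization.

```python
def mean_sparse(features, mean, std, alpha=0.25):
    """Snap small mean-centered deviations back to the mean.

    A toy per-element version of the mean-centered sparsification idea:
    values inside a band of width alpha * std around the mean are replaced
    by the mean; values outside the band are left unchanged. The band
    width parameterization is an assumption for illustration.
    """
    out = []
    for x, m, s in zip(features, mean, std):
        if abs(x - m) <= alpha * s:
            out.append(m)   # inside the band: small variation removed
        else:
            out.append(x)   # outside the band: passed through unchanged
    return out

# With alpha=0.25 and std=0.5, the band around each mean is +/- 0.125,
# so 1.1 is snapped to 1.0 while 2.5 survives.
print(mean_sparse([1.0, 1.1, 2.5], [1.0, 1.0, 1.0], [0.5, 0.5, 0.5]))
# → [1.0, 1.0, 2.5]
```

In the paper this operator is inserted into a pretrained robust model after training (hence "post-training"), using statistics gathered from the training data.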

Leaderboard Claim(s)

Models 1 through 4 are the new SOTA models for CIFAR-10 Linf, CIFAR-10 L2, CIFAR-100 Linf, and ImageNet Linf, respectively.

Model 5, which we previously submitted and which is currently ranked second on the CIFAR-10 Linf leaderboard, has been updated with improved results. We request that its leaderboard entry be updated to reflect these improvements.

We have also submitted a pull request.

Model 1

  • Architecture: Meansparse_WRN-94-16
  • Dataset: CIFAR-10
  • Threat Model: Linf
  • eps: 8/255
  • Clean accuracy: 93.63
  • Robust accuracy: 75.28
  • Additional data: false
  • Evaluation method: AutoAttack
  • Checkpoint and code: Checkpoint and code

Model 2

  • Architecture: Meansparse_WRN-70-16
  • Dataset: CIFAR-10
  • Threat Model: L2
  • eps: 0.5
  • Clean accuracy: 95.49
  • Robust accuracy: 87.28
  • Additional data: false
  • Evaluation method: AutoAttack
  • Checkpoint and code: Checkpoint and code

Model 3

  • Architecture: Meansparse_WRN-70-16
  • Dataset: CIFAR-100
  • Threat Model: Linf
  • eps: 8/255
  • Clean accuracy: 75.17
  • Robust accuracy: 44.78
  • Additional data: false
  • Evaluation method: AutoAttack
  • Checkpoint and code: Checkpoint and code

Model 4

  • Architecture: Meansparse_Swin_L
  • Dataset: ImageNet
  • Threat Model: Linf
  • eps: 4/255
  • Clean accuracy: 78.86
  • Robust accuracy: 62.12
  • Additional data: false
  • Evaluation method: AutoAttack
  • Checkpoint and code: Checkpoint and code

Model 5

  • Architecture: meansparse_ra_wrn70_16
  • Dataset: CIFAR-10
  • Threat Model: Linf
  • eps: 8/255
  • Clean accuracy: 93.27
  • Robust accuracy: 72.78
  • Additional data: false
  • Evaluation method: AutoAttack
  • Checkpoint and code: Checkpoint and code

Model Zoo:

  • [x] I want to add my models to the Model Zoo (check if true)
  • [x] I use an architecture that is included among those here or in timm. If not, I added the link to the architecture implementation so that it can be added.
  • [x] I agree to release my model(s) under MIT license (check if true) OR under a custom license, located here: (put the custom license URL here if a custom license is needed. If no URL is specified, we assume that you are fine with MIT)

mrteymoorian avatar Aug 16 '24 19:08 mrteymoorian

Hi,

thanks for the submission! I'll have a look in the next few days.

fra31 avatar Aug 19 '24 09:08 fra31

Hello,

I hope you are well. I wanted to kindly ask if you could review the new models when you have a chance. It's been about two months since we submitted them, and your feedback would be greatly appreciated.

Thank you!

mrteymoorian avatar Oct 04 '24 16:10 mrteymoorian

Hi,

sorry for the delay. Do you happen to have the logs generated by the evaluation for the new models?

fra31 avatar Oct 16 '24 12:10 fra31

Hi, unfortunately no, I no longer have them.

mrteymoorian avatar Oct 17 '24 19:10 mrteymoorian

Added the new models and evaluations with https://github.com/RobustBench/robustbench/pull/202, please let me know if it's fine for you.

fra31 avatar Dec 20 '24 15:12 fra31

Leaderboard updated with https://github.com/RobustBench/robustbench.github.io/pull/16.

fra31 avatar Feb 05 '25 17:02 fra31