Maura Pintor
**High Level Description** We want to test the RL agent from the model zoo on a scenario of choice by using one of the example scripts. **Desired SMARTS version** 0.5.1...
We can add https://github.com/jeromerony/adversarial-library as it contains efficient implementations of existing attacks. See the other wrappers for implementation examples, e.g., https://github.com/pralab/secml-torch/blob/295aa243628e808bfba92a0dd158d49339d3a68a/src/secmlt/adv/evasion/foolbox_attacks/foolbox_base.py#L14 References: * the main Foolbox wrapper, e.g., https://github.com/pralab/secml-torch/blob/295aa243628e808bfba92a0dd158d49339d3a68a/src/secmlt/adv/evasion/foolbox_attacks/foolbox_base.py *...
Other attacks can be implemented with the native backend. Ideally, the implementation should allow a custom choice of loss, optimizer, and other configurable components. Open an issue and a separate branch...
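A minimal sketch of what such a native, component-based attack could look like. The class name `ModularAttack` and its interface here are assumptions for illustration, not the library's actual API; the point is that the loss and optimizer are injected by the caller instead of being hard-coded.

```python
import torch


class ModularAttack:
    """Illustrative sketch: an attack whose loss function and optimizer
    are chosen by the caller rather than fixed inside the attack."""

    def __init__(self, loss_fn, optimizer_cls, num_steps=10, **optimizer_kwargs):
        self.loss_fn = loss_fn            # e.g., a (negated) margin loss for evasion
        self.optimizer_cls = optimizer_cls  # e.g., torch.optim.SGD or Adam
        self.num_steps = num_steps
        self.optimizer_kwargs = optimizer_kwargs

    def run(self, model, x, y):
        # optimize an additive perturbation with the injected components
        delta = torch.zeros_like(x, requires_grad=True)
        optimizer = self.optimizer_cls([delta], **self.optimizer_kwargs)
        for _ in range(self.num_steps):
            optimizer.zero_grad()
            loss = self.loss_fn(model(x + delta), y)
            loss.backward()
            optimizer.step()
        return (x + delta).detach()
```

With this structure, swapping the optimizer or loss is a constructor argument, not a code change inside the attack loop.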
We can add https://github.com/jeromerony/adversarial-library as it contains efficient implementations of existing attacks. References: * the main Foolbox wrapper https://github.com/pralab/secml-torch/blob/295aa243628e808bfba92a0dd158d49339d3a68a/src/secmlt/adv/evasion/foolbox_attacks/foolbox_base.py#L14 * the wrapped PGD attack https://github.com/pralab/secml-torch/blob/main/src/secmlt/adv/evasion/foolbox_attacks/foolbox_pgd.py * the implementation of PGD...
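A possible shape for such a wrapper, mirroring the structure of the existing Foolbox wrapper. This is a hedged sketch: the assumed attack signature `attack_fn(model, inputs, labels)` and the method name `_run` are illustrative assumptions, not the actual adv-lib or secml-torch API.

```python
from functools import partial

import torch


class AdvLibWrapper:
    """Hypothetical sketch of a wrapper around an adv-lib style attack
    callable, assumed to take (model, inputs, labels) and return the
    adversarial examples for that batch."""

    def __init__(self, attack_fn, **attack_kwargs):
        # bind attack-specific hyperparameters up front
        self.attack_fn = partial(attack_fn, **attack_kwargs)

    def _run(self, model, samples, labels):
        # delegate a single batch to the wrapped attack
        return self.attack_fn(model, samples, labels)
```

Keeping the per-batch logic in a single method mirrors how the Foolbox wrapper delegates to its backend, so the rest of the attack pipeline stays backend-agnostic.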
We should avoid creating lists inside the data loader. https://github.com/pralab/secml2/blob/db5d9c05250076a324d8493d8384d00d884c0b59/secml2/adv/evasion/base_evasion_attack.py#L50C9-L51C29
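One way to avoid the list is to preallocate the output tensor and fill it batch by batch. This is a sketch under the assumption that the dataset size and sample shape are known up front; the function and argument names are illustrative.

```python
import torch


def run_attack(attack_fn, model, data_loader, n_samples, sample_shape):
    """Sketch: write each adversarial batch into a preallocated tensor
    instead of appending batches to a Python list inside the loop."""
    adversarials = torch.empty(n_samples, *sample_shape)
    offset = 0
    for inputs, labels in data_loader:
        batch = attack_fn(model, inputs, labels)
        adversarials[offset:offset + batch.shape[0]] = batch
        offset += batch.shape[0]
    return adversarials
```

This avoids holding a growing Python list of tensors and a final concatenation, at the cost of requiring the total size in advance; a generator that yields batches would be an alternative when the size is unknown.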
## Changelog

Approve after #104

* Pass input learning rate to optimizer
* Refactor input optimizer kwargs for modular attack to be explicitly assigned to a dictionary rather than passed...
The parameters passed to the run should be explicitly provided as optimizer kwargs encapsulated in a dictionary, rather than forwarding arbitrary additional inputs to the optimizer. This also allows...
The step size exposed in https://github.com/pralab/secml-torch/blob/674b2bcda04f6d1365616bdb9e97eac3a06d21b1/src/secmlt/adv/evasion/modular_attack.py#L31-L45 is not used in https://github.com/pralab/secml-torch/blob/674b2bcda04f6d1365616bdb9e97eac3a06d21b1/src/secmlt/adv/evasion/modular_attack.py#L201
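A sketch of the intended fix, with illustrative names: the exposed step size is forwarded explicitly as the optimizer learning rate, and any remaining options travel in a dedicated kwargs dictionary instead of being passed loosely.

```python
import torch


def create_optimizer(optimizer_cls, params, step_size, optimizer_kwargs=None):
    """Sketch: make sure the step size exposed by the attack actually
    reaches the optimizer, as its learning rate."""
    optimizer_kwargs = dict(optimizer_kwargs or {})
    optimizer_kwargs["lr"] = step_size  # the exposed step size is now used
    return optimizer_cls(params, **optimizer_kwargs)
```

Encapsulating the extra options in a dictionary also makes it explicit which arguments belong to the optimizer and which belong to the attack.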
## Changelog

* New input-space constraint that reverts the preprocessing and can be applied in the input space
* Quantization constraint fixed
* Adversarial library is now installed from...
A new constraint class should be defined. Specifically, when the input undergoes some preprocessing, $x = f(z)$, the current constraints can only be applied to $x$. We need...
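A minimal sketch of such a constraint, assuming an invertible affine preprocessing $x = (z - \mu) / \sigma$ (e.g., image normalization). The class name and the choice of a $[0, 1]$ box constraint are illustrative assumptions: the idea is to revert the preprocessing, apply the constraint in the original input space on $z$, and then re-apply the preprocessing.

```python
import torch


class InputSpaceClampConstraint:
    """Sketch: enforce a [0, 1] box constraint in the original input
    space for points that live in the preprocessed space x = (z - mean) / std."""

    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def __call__(self, x):
        z = x * self.std + self.mean        # invert f: back to input space
        z = z.clamp(0.0, 1.0)               # constraint defined on z
        return (z - self.mean) / self.std   # re-apply f
```

For non-invertible preprocessings this composition is not available, so the constraint class would likely need the preprocessing object to expose an explicit inverse.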