Luca Demetrio

Results: 21 issues by Luca Demetrio

**Is your feature request related to a problem? Please describe.** I would need to obtain the results of the builder directly in memory, rather than storing them in a file...

Include wrappers for [QuoVadis](https://github.com/dtrizna/quo.vadis), leveraging the fusion models provided in the original repository by @dtrizna. It should be implemented as a `CQuoVadisClassifier`, along with a `CQuoVadisWrapperPhi` wrapper for black-box attacks...

help wanted
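A minimal sketch of how the two proposed classes could fit together; the class names come from the issue, while the method names and the way the QuoVadis fusion model is consumed are assumptions:

```python
import numpy as np


class CQuoVadisClassifier:
    """Wraps a QuoVadis fusion model (https://github.com/dtrizna/quo.vadis)
    behind a simple predict interface."""

    def __init__(self, fusion_model):
        # `fusion_model` is assumed to be a model object loaded from the
        # quo.vadis repository; its scoring method below is an assumption.
        self._model = fusion_model

    def predict(self, x: np.ndarray) -> np.ndarray:
        scores = self._model.predict_proba(x)  # assumed interface
        return (scores[:, 1] > 0.5).astype(int)


class CQuoVadisWrapperPhi:
    """Black-box wrapper: exposes only sample-in, score-out, which is all
    the feedback a black-box attack needs."""

    def __init__(self, classifier: CQuoVadisClassifier):
        self._clf = classifier

    def __call__(self, x: np.ndarray) -> np.ndarray:
        return self._clf.predict(x)
```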

Remove all the debug prints and use a standard logger that can be customized via a config file or at import time (a minimal sketch follows below).

enhancement
help wanted
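A minimal sketch of the requested setup using Python's standard `logging` module; the logger name `secml_malware` and the helper name are assumptions:

```python
import logging

# Library modules log through this instead of calling print().
logger = logging.getLogger("secml_malware")  # assumed logger name
logger.addHandler(logging.NullHandler())  # silent by default


def set_verbosity(level: int = logging.INFO) -> None:
    """Hypothetical helper to enable console logging at import time."""
    handler = logging.StreamHandler()
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
    )
    logger.addHandler(handler)
    logger.setLevel(level)
```

For config-file customization, the same named logger can be driven through `logging.config.fileConfig` or `logging.config.dictConfig` without touching library code.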

As a new feature, it would be interesting to port the perturbations proposed by [Lucas et al.](https://github.com/pwwl/enhanced-binary-diversification) in their [research paper](https://dl.acm.org/doi/10.1145/3433210.3453086). Perhaps SecML Malware could use this repository as a...

enhancement
help wanted

**Describe the bug** When applying `CContentShiftingEvasion` to a binary that has been compiled for debugging, the manipulation corrupts the file. **To Reproduce** Compile an executable on Windows, using...

bug
help wanted

To better automate testing, the library should provide mock classifiers for exercising the attack algorithms, rather than relying on real malware and real networks (see the sketch below).

enhancement
help wanted
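A sketch of what such a mock could look like; the class name and the byte-marker rule are made up for illustration:

```python
import numpy as np


class CMockByteClassifier:
    """Deterministic stand-in for a real detector: a sample is 'malicious'
    iff it contains a magic marker, so attack algorithms can be unit-tested
    without real malware or trained networks."""

    MARKER = b"\xde\xad\xbe\xef"

    def predict(self, samples) -> np.ndarray:
        return np.array([1 if self.MARKER in bytes(s) else 0 for s in samples])
```

An attack succeeds against this mock exactly when it removes or masks the marker, which gives the test suite a ground truth that is independent of any trained model.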

**High Level Description** We would like to retrieve information about every car (excluding ego cars) currently inside the simulation, in terms of its internal state (like the speed, the...

help wanted
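A hedged sketch of one way to get at this through per-agent observations, assuming a SMARTS version whose `AgentInterface` accepts a `neighborhood_vehicles` field and whose observations expose `neighborhood_vehicle_states` (these field names have changed across SMARTS releases):

```python
from smarts.core.agent_interface import AgentInterface, AgentType, NeighborhoodVehicles

# Ask SMARTS to include nearby non-ego vehicles in each observation.
interface = AgentInterface.from_type(
    AgentType.Laner,
    neighborhood_vehicles=NeighborhoodVehicles(radius=100.0),  # metres around the ego
)


def neighbor_states(observation):
    """Collect (id, speed, position) for every non-ego vehicle the ego sees."""
    return [
        (v.id, v.speed, v.position)
        for v in observation.neighborhood_vehicle_states
    ]
```

Note this is per-agent visibility rather than a truly global query: a vehicle outside every agent's radius would still be invisible.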

**High Level Description** Fine-tune the "RL-Agent" provided in the model zoo by specifying a new OpenAI Gym environment with custom rewards. We tried several configurations, from understanding how to load...

help wanted
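One low-risk way to attach custom rewards without touching the zoo agent is a Gym wrapper that reshapes the reward in `step`; the shaping term below is a placeholder:

```python
import gym


class CustomRewardWrapper(gym.Wrapper):
    """Reshapes the reward before it reaches the trainer; observations and
    environment dynamics are left untouched."""

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        shaped = reward - 0.01  # placeholder: small per-step penalty
        return obs, shaped, done, info
```

Fine-tuning then amounts to loading the zoo checkpoint and continuing training inside the wrapped environment.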

## **BUG REPORT** While training the RLlib examples, the simulation crashes after some time and cannot resume. The provided error log is not informative enough to determine...

bug

**High Level Description** We would like to set up the initial positions of reinforcement learning agents inside a chosen scenario, so that simulations always start in the same way from one...

help wanted
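A hedged sketch of a `scenario.py` using SMARTS' scenario studio, assuming the `gen_scenario`/`types` API; the edge names are placeholders for the chosen map. Pinning each mission's `Route` start fixes the spawn point, so every episode begins identically:

```python
from pathlib import Path

from smarts.sstudio import gen_scenario
from smarts.sstudio import types as t

ego_missions = [
    # begin=(edge_id, lane_index, offset) fixes the agent's spawn point.
    t.Mission(t.Route(begin=("edge-west", 0, 10), end=("edge-east", 0, "max"))),
]

gen_scenario(
    t.Scenario(ego_missions=ego_missions),
    output_dir=Path(__file__).parent,
)
```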