
Provide benchmarks for each available strategy given a dataset of libraries with vulnerabilities

Open antoine-coulon opened this issue 3 years ago • 0 comments

The main idea of @nodesecure/vuln is to expose a set of strategies to detect vulnerabilities within a given project.

In my opinion, it would be great to run benchmarks for each strategy against a dataset of open-source libraries, ranging from well-known to rare vulnerabilities. This would let consumers know the tradeoffs of each @nodesecure/vuln strategy given their project environment and constraints (e.g. the npm strategy requires a package-lock.json lockfile to be present).
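To make the "environment and constraints" point concrete, a benchmark harness could first check whether a strategy is even applicable to a given project. A minimal sketch, assuming a hypothetical `canUseNpmStrategy` helper (this is not part of the @nodesecure/vuln API):

```ts
import { existsSync } from "node:fs";
import { join } from "node:path";

// Hypothetical applicability check; the function name is illustrative only.
function canUseNpmStrategy(projectDir: string): boolean {
  // The npm strategy relies on `npm audit`, which needs a package-lock.json.
  return existsSync(join(projectDir, "package-lock.json"));
}

console.log(canUseNpmStrategy(process.cwd()));
```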

Now that the objective should be clear enough, we must determine three things:

  • [ ] how much data we must collect to build a representative dataset for each strategy
  • [ ] the criteria used to determine that a strategy has effectively caught a given vulnerability in a library (i.e. do we take the vulnerability severity ("medium", "high", etc.) into account, or do we count every vulnerability a strategy catches, regardless of severity?)
  • [ ] the output and format of each benchmark (a possible shape is sketched below this list)
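For the last point, here is one possible shape for a per-strategy benchmark entry. Every field name is only a proposal to anchor the discussion, not an existing format:

```ts
// Hypothetical per-strategy benchmark output; all field names are proposals.
interface VulnerabilityMatch {
  id: string;                                        // CVE/GHSA identifier from the dataset
  severity: "low" | "medium" | "high" | "critical";  // known severity of the vulnerability
  detected: boolean;                                  // whether the strategy reported it
}

interface StrategyBenchmark {
  strategy: string;               // e.g. "npm"
  library: string;                // dataset entry, "name@version"
  matches: VulnerabilityMatch[];  // one entry per expected vulnerability
  detectionRate: number;          // detected / expected, ignoring severity
  durationMs: number;             // wall-clock time of the scan
}
```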

@fraxken suggested that we could create a /benchmark root directory.
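If that directory is created, the runner it hosts could start as small as the following sketch. All names (`DatasetEntry`, `scan`, `runBenchmark`) are placeholders and not the actual @nodesecure/vuln API:

```ts
// Minimal sketch of a /benchmark runner, assuming a dataset of libraries with
// known vulnerabilities and a pluggable `scan(strategy, library)` function.
interface DatasetEntry {
  library: string;                            // "name@version"
  expected: { id: string; severity: string }[];
}

type Scan = (strategy: string, library: string) => Promise<{ id: string }[]>;

async function runBenchmark(
  strategies: string[],
  dataset: DatasetEntry[],
  scan: Scan
): Promise<Record<string, number>> {
  const detectionRate: Record<string, number> = {};

  for (const strategy of strategies) {
    let detected = 0;
    let expected = 0;

    for (const entry of dataset) {
      // Run the strategy against one library and compare with the expected set.
      const reported = await scan(strategy, entry.library);
      const reportedIds = new Set(reported.map((vuln) => vuln.id));

      expected += entry.expected.length;
      detected += entry.expected.filter((vuln) => reportedIds.has(vuln.id)).length;
    }

    detectionRate[strategy] = expected === 0 ? 0 : detected / expected;
  }

  return detectionRate;
}
```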

antoine-coulon · Apr 23 '22, 12:04