Create a benchmark workflow
We don't have any reliable mechanism to assess Kani's performance. This makes trade-off assessment very challenging, since the performance cost of any change is unknown.
Requirements:
- The benchmark suite should include non-trivial, real-world examples.
- The report should include at least the following stats:
  - Total runtime
  - Compilation runtime
  - Verification runtime
  - Peak memory consumption (per process)
- Stats should be reported per example as well as aggregated across the suite (see the measurement sketch after this list).
- The workflow should allow comparing two or more commit IDs (a comparison sketch follows the next-steps list below).
- If the workflow takes too long to run on every PR, implement a mechanism to trigger it on demand (e.g., via a `performance` label).
- The workflow should use the release build.
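
As a rough illustration of the per-example measurement, here is a minimal sketch, assuming a Linux runner with GNU time available at `/usr/bin/time` and a plain `kani <file>` invocation; the benchmark paths are placeholders, and splitting compilation from verification runtime would additionally need timing data emitted by Kani itself:

```rust
use std::process::Command;
use std::time::Instant;

/// Hypothetical per-example measurement: wraps a single `kani` invocation
/// in GNU time (`/usr/bin/time -v`) to capture peak RSS, and measures
/// total wall-clock runtime ourselves. Returns (seconds, peak kbytes).
fn measure(example: &str) -> std::io::Result<(f64, u64)> {
    let start = Instant::now();
    // `-v` makes GNU time print "Maximum resident set size (kbytes): N"
    // on stderr; `kani <file>` is the command under test.
    let output = Command::new("/usr/bin/time")
        .args(["-v", "kani", example])
        .output()?;
    let secs = start.elapsed().as_secs_f64();

    let stderr = String::from_utf8_lossy(&output.stderr);
    let peak_kb = stderr
        .lines()
        .find_map(|l| l.trim().strip_prefix("Maximum resident set size (kbytes):"))
        .and_then(|v| v.trim().parse::<u64>().ok())
        .unwrap_or(0);

    Ok((secs, peak_kb))
}

fn main() -> std::io::Result<()> {
    // Hypothetical benchmark list; the real suite would walk tests/perf.
    for example in ["tests/perf/example_a.rs", "tests/perf/example_b.rs"] {
        let (secs, peak_kb) = measure(example)?;
        println!("{example}: {secs:.1}s total, {peak_kb} KB peak RSS");
    }
    Ok(())
}
```

In CI this would run once per benchmark, feeding both the per-example table and the aggregated totals.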
Next steps:
- [x] Add more benchmarks under `tests/perf`.
- [x] Create a workflow that runs all benchmarks with the release build.
- [x] #2442
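
For the commit comparison mentioned above, one possible shape (not the actual workflow) is to run the suite once per commit ID, write the per-example stats to a file, and diff the two result sets. A sketch under the assumption of hypothetical `baseline.csv` / `candidate.csv` files containing `example,seconds` rows:

```rust
use std::collections::HashMap;
use std::fs;

/// Parse hypothetical "example,seconds" CSV rows produced by one
/// benchmark run into a map from example name to runtime.
fn load(path: &str) -> HashMap<String, f64> {
    fs::read_to_string(path)
        .unwrap_or_default()
        .lines()
        .filter_map(|l| {
            let (name, secs) = l.split_once(',')?;
            Some((name.to_string(), secs.trim().parse().ok()?))
        })
        .collect()
}

fn main() {
    // Hypothetical result files, one per commit ID under comparison.
    let base = load("baseline.csv");
    let cand = load("candidate.csv");
    for (name, old) in &base {
        if let Some(new) = cand.get(name) {
            // Report the relative change per example.
            println!("{name}: {old:.1}s -> {new:.1}s ({:+.1}%)", (new / old - 1.0) * 100.0);
        }
    }
}
```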
Follow-up work
Once we have a mechanism to measure performance, we should evaluate areas that are either low-hanging fruit or that we suspect can be improved.
This PR was an initial step in that direction.
Should we resolve this now that #2370 has been merged?
@tautschnig I would like to see some improvements in the report before we close this. I was just talking to @karkhaz about this today.