Support --benchmarks_filter in the compare.py 'benchmarks' command
Previously compare.py ignored the --benchmarks_filter argument when loading JSON. This defeated any workflow where the benchmark is run once and then several "subset reports" are generated from that run with the 'benchmarks' command.
Concretely this came up with the simple case:
compare.py benchmarks a.json b.json --benchmarks_filter=BM_Example
This has no practical impact on the 'filters' and 'benchmarksfiltered' commands, which apply their filtering at a later stage.
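For reference, here is a minimal sketch of the kind of filtering this change applies when the 'benchmarks' command loads its two JSON files; the helper name load_results is hypothetical and does not reflect the actual compare.py internals:

```python
import json
import re


def load_results(path, benchmarks_filter=None):
    """Load a Google Benchmark JSON output file and, if a regex is given,
    keep only the benchmarks whose names match it (hypothetical sketch)."""
    with open(path) as f:
        results = json.load(f)
    if benchmarks_filter:
        pattern = re.compile(benchmarks_filter)
        results["benchmarks"] = [
            b for b in results["benchmarks"] if pattern.search(b["name"])
        ]
    return results


# Usage mirroring the example above (file names and filter are illustrative):
#   lhs = load_results("a.json", benchmarks_filter="BM_Example")
#   rhs = load_results("b.json", benchmarks_filter="BM_Example")
```

With the filter applied at load time, the subsequent diff only compares the matching benchmarks from each file.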
Fixes #1484
This re-sends #1486. I deleted the branch in my fork and github didn't allow me to reopen the original PR even after I had recreated it.
Oh neat, github noticed the PR was based off the same hash and merged them. Still weird that I had to create a new branch name and resend the PR before it realized that I wanted to reopen the original PR.
Looks reasonable. Test?
@matta reverse ping :) I'd prefer to have a test here, but a vague dismissive "nah, take it or leave it" could work too.
Sorry Roman, this slipped my mind. I'm not actively working on the project I was using this for so it is unlikely I'll get to adding a test or significantly tweaking this. Is that vague/dismissive enough? ;-)
@dominichamon do we need a test here?
need is a strong word. i'd always prefer tests to avoid regressions, but i'd also rather have the feature than not. your call, i trust your judgement.
thanks!