Benchmarking incremental builds
Description of the problem / feature request:
Currently bazel-bench isn't able to benchmark incremental builds, for the following reasons:
- It does not allow running `--warm_up=n` warm-up builds prior to the benchmarked `--runs=n` builds.
- It does not allow toggling `--shutdown` and `--clean` to control whether `blaze shutdown` / `blaze clean` are run between builds (currently it runs both by default).
- It does not allow patching of code; this should be possible by specifying a `--patch_file`.
Feature requests: what underlying problem are you trying to solve with this feature?
Allow users to benchmark the incremental case by:
- Specifying `--warm_up` runs to warm up the blaze cache.
- Not running `shutdown` or `clean` in between runs, so that builds run with a warm blaze cache.
- Applying the patch file in between runs to simulate code changes (see the sketch after this list).
@zhengwei143 having a patch file will work on the first run, and the result will be cached. Subsequent runs will return the cached result.
That implies either having multiple patch files (one per run, based on the value of `--runs=N`), or, instead of `--patch_file`, a `--patch_cmd` that changes some file dynamically (based on the current timestamp or something similar); a sketch follows below.
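A minimal sketch of that `--patch_cmd` idea (a hypothetical flag, not something bazel-bench supports today), assuming GNU sed and a source file whose first line is a comment reserved for the benchmark:

```bash
# Hypothetical --patch_cmd body: rewrite a reserved marker line with the
# current timestamp so the file's content digest changes on every run.
# Assumes the first line is a //-style comment kept free for this purpose;
# the path is illustrative.
sed -i "1s|.*|// bench marker $(date +%s%N)|" path/to/source_file
```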
EDIT: Sorry, I was confused. We can specify patch_file for each unit (and not at the global level). Ignore this comment.
I'm also working on improving incremental build times and am trying to benchmark the impact of a change on a single target / action. That target has a bunch of dependencies, which bazel-bench will recompile every time, and I don't want to measure those.
This is a documented problem, and the workarounds are not pretty.
That article suggests using `--subcommands` to get at the underlying command and benchmark that manually, but setting up the environment can be a blocker. For example, I'm trying to benchmark a `scalac` command, and I can't figure out what `JAVA_RUNFILES` should be (and the command doesn't print it).
I'm currently using hyperfine to run the benchmark, with a command that randomly modifies a source file before each run to bust the cache, but needing to construct that command for each benchmark adds friction.
```bash
hyperfine \
  --shell=bash \
  --prepare='sed -i "381s/.*/$RANDOM/" path/to/source_file' \
  --cleanup='sed -i "381s/.*/original contents/" path/to/source_file' \
  --warmup 2 \
  'bazel build //path/to/my:target'
```
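This works because Bazel invalidates actions based on the content digests of their inputs: rewriting line 381 (presumably a comment line) changes the file's digest, so only the actions consuming that file and their reverse dependencies re-run, while the target's dependencies stay cached.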
The `--patch_file` solution discussed above would have this same friction.
> We can specify `patch_file` for each unit (and not at the global level). Ignore this comment.
@snazhmudinov I don't think this works because we want to build each unit multiple times, so subsequent builds per unit would be cached even with a per-unit patch. Is that correct?
Ideally I'd like bazel / bazel-bench to be able to invalidate the cache for a single target only.