Document how to run benchmarks
Hi!
I'm wondering how the performance and stack usage benchmark output seen in some PR comments can be generated. I noticed https://github.com/jerryscript-project/jerryscript/blob/8edf8d6eea4327dd83b7fabddcae4ea23bf98fb9/tools/runners/run-benchmarks.sh which seems related, but I'm not sure where the referenced files with the test cases are.
I'm interested in measuring the collective performance improvement between an older commit and the current master commit to understand the improvements made across time.
Thanks! Liam
Hi @Hexxeh!
The bench results you see in several PRs are created with an internal benchmark system that runs on an RPi2. TBH I don't know much about run-benchmarks.sh, but let me cc @galpeter or @bzsolt.
Hi!
Currently I can't put any effort into this issue.
In the tools directory there are a few outdated tools:

- `run-perf-test.sh`
- `run-mem-stats-test.sh`

As far as I know, they require some patches to run again.

...or maybe we could replace all of them with a Python variant (e.g. `benchmark.py`).
What would it really need?

- one (or two) runnable binaries (built with the desired features: mem-stats, stack info)
- the folder of the existing testcases (e.g. the SunSpider, Kraken, Octane benchmarks, etc.)
- a way to specify what to measure (perf, stack, heap)
- the format of the output (JSON, markdown table)
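For the output-format point, a minimal sketch of a `benchmark.py`-style renderer could look like the following. The `format_results` helper and the result field names (`benchmark`, `time_s`, `peak_heap_bytes`) are hypothetical, not part of any existing tool:

```python
import json

def format_results(results, fmt="markdown"):
    """Render benchmark results as JSON or as a markdown table.

    `results` is a list of dicts with hypothetical keys:
    'benchmark', 'time_s', 'peak_heap_bytes'.
    """
    if fmt == "json":
        return json.dumps(results, indent=2)
    # markdown table, suitable for pasting into a PR comment
    header = "| benchmark | time (s) | peak heap (bytes) |"
    sep = "|---|---|---|"
    rows = [
        "| {benchmark} | {time_s:.3f} | {peak_heap_bytes} |".format(**r)
        for r in results
    ]
    return "\n".join([header, sep] + rows)

# Example usage with made-up numbers:
demo = [
    {"benchmark": "sunspider/3d-cube.js", "time_s": 0.123, "peak_heap_bytes": 65536},
]
print(format_results(demo))
```

Supporting both formats from the same result dicts keeps the measurement code independent of how the numbers are presented.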
How?

- perf: e.g. with the `timeit` package
- heap: by processing the output of the built-in mem-stats feature
- stack: in `main-unix-test.c` there is an example for that; then process the output
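The perf step above could be sketched with `timeit` roughly like this. The engine path and benchmark file in the comment are placeholders; so that the snippet runs anywhere, the demo times the Python interpreter itself instead of a JerryScript build:

```python
import subprocess
import sys
import timeit

def time_engine(cmd, repeat=5):
    """Run `cmd` (engine binary + JS file) `repeat` times and
    return the best wall-clock time in seconds."""
    runner = lambda: subprocess.run(cmd, check=True,
                                    stdout=subprocess.DEVNULL,
                                    stderr=subprocess.DEVNULL)
    # best-of-N is less noisy than the mean on a busy machine
    return min(timeit.repeat(runner, number=1, repeat=repeat))

# Real usage would be something like:
#   time_engine(["./build/bin/jerry", "sunspider/3d-cube.js"])
# Demo with a command that exists wherever Python does:
best = time_engine([sys.executable, "-c", "pass"], repeat=3)
print(f"best of 3: {best:.4f} s")
```

Timing a whole process run includes engine startup cost, which is fine for comparing two commits of the same engine as long as both are measured the same way.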
Mostly this is a text processing task.
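Since this is mostly text processing, the heap part could be a small parser over the mem-stats output. The line format below (`Peak allocated = 12345 bytes`) is an assumption for illustration only; the real mem-stats output may differ, so the regex would need adjusting:

```python
import re

def parse_peak_heap(output):
    """Pull the peak heap value out of (hypothetical) mem-stats output.

    Returns the number of bytes, or None if no matching line is found.
    """
    m = re.search(r"Peak allocated\s*=\s*(\d+)\s*bytes", output)
    return int(m.group(1)) if m else None

sample = "Heap stats:\n  Peak allocated = 12345 bytes\n"
print(parse_peak_heap(sample))  # → 12345
```

Returning `None` for missing lines lets the caller distinguish "feature not compiled in" from a measured zero.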
I hope it helps!
Are those testcase JS files you mentioned available somewhere, so that somebody could replicate the timing scripts, etc.?