runwasi
Benchmarking
This issue serves as a place for discussing various ways / ideas we can benchmark runwasi's wasm shims, as proposed by @ipuustin.
- One idea is to write a simple wasm program (e.g. Fibonacci) and execute it in `runwasi`, alongside a native build of the same program executing in `runc`. This provides a base benchmark comparing the performance of a WASI program vs. a native runc process. It is not meant to benchmark the performance of WASI in general.
- With the base benchmark in place, we can observe the performance difference across version increments. For example, we can see how much speed increases / decreases between versions 0.2 and 0.3.
- Another benchmarking idea is to test how "densely" we can pack wasm pods on a node. It is often advertised that wasm modules improve CPU utilization and thus increase the density of running pods per node. We can verify this claim by pushing the containerd runtime to the extreme, running thousands of pods at the same time.
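For the base benchmark above, a minimal sketch of the Fibonacci workload might look like the following. This is not the actual benchmark code; the workload size `n = 30` and the idea of compiling the same source once natively (for runc) and once for a wasm target such as `wasm32-wasip1` (for runwasi) are assumptions for illustration.

```rust
// Sketch of a Fibonacci benchmark workload. The same source can be
// compiled natively (run under runc) or to a wasm/WASI target
// (run under runwasi), so both runtimes execute identical logic.
fn fib(n: u64) -> u64 {
    // Deliberately naive recursion: a CPU-bound workload whose cost
    // grows quickly with n, making runtime differences measurable.
    match n {
        0 => 0,
        1 => 1,
        _ => fib(n - 1) + fib(n - 2),
    }
}

fn main() {
    // A fixed workload size keeps runs comparable across runtimes.
    let n = 30;
    println!("fib({}) = {}", n, fib(n));
}
```

Keeping the workload purely CPU-bound avoids measuring I/O differences between the runtimes, so the comparison isolates execution overhead.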
Feel free to add ideas and thoughts on this topic! Any suggestion is welcome 🙏
- [ ] #612
- [ ] #620
- [ ] #621
- [x] review the benches used in the bench project (https://github.com/containerd/runwasi/pull/612#pullrequestreview-2108501090)
- [ ] #626
- [ ] #613
- [ ] #614
- [ ] #615
- [ ] reach out to the CNCF containerd maintainers and ask which managed services we can use for monitoring
- [ ] #616
Closing this one, as #126 adds benchmarking support.
Wanted to reopen this issue because I think #126 does not fully address the scope described above. A few things that would make the runwasi benchmarking story better:
- add #126 to the CI
- run runc and crun in CI to provide comparison results
- add the density test
Check out the SpinKube perf test: https://github.com/fermyon/spinkube-performance
TODO: look into Grafana and Prometheus options
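If Prometheus ends up being the monitoring option, a minimal scrape config sketch could look like the following. The job name, scrape interval, and target address are all assumptions; they would need to match however the benchmark node actually exposes its metrics.

```yaml
# Sketch of a Prometheus scrape config (all values are placeholders).
scrape_configs:
  - job_name: runwasi-bench   # hypothetical job name
    scrape_interval: 15s
    static_configs:
      # Assumed metrics endpoint on the benchmark node.
      - targets: ['localhost:9090']
```

Grafana could then use this Prometheus instance as a data source to chart benchmark trends across runwasi versions.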