Performance testing regime
As <model-viewer> has evolved, we have made a number of changes that significantly impact performance. We are always striving to improve performance, but it can be difficult to measure on any given device. In our case, things are further complicated by the fact that performance can regress on devices we don't have in front of us even as it improves on the ones we do (yay GPUs).
This issue proposes that we establish a performance testing regime that can cover the following categories:
- Micro-benchmarks for specific/narrow algorithms and rendering techniques (see the timing sketch after this list)
- Macro-benchmarks for common <model-viewer> configurations
- Multi-device coverage using real hardware
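As a rough illustration of the micro-benchmark case, here is a minimal TypeScript sketch that times a narrow operation with `performance.now()` and reports the median across runs. The `measureMedianMs` helper and the placeholder matrix-multiply workload are hypothetical stand-ins for whatever algorithm we actually want to isolate, not anything that exists in <model-viewer> today:

```ts
// Minimal micro-benchmark harness sketch (hypothetical helper name).
// Runs a narrow operation many times and reports the median duration,
// which is less noisy than the mean on thermally-throttled devices.
function measureMedianMs(label: string, operation: () => void, runs = 100): number {
  const samples: number[] = [];
  for (let i = 0; i < runs; ++i) {
    const start = performance.now();
    operation();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  const median = samples[Math.floor(samples.length / 2)];
  console.log(`${label}: median ${median.toFixed(3)}ms over ${runs} runs`);
  return median;
}

// Placeholder workload; replace with the real algorithm under test
// (e.g. a narrow piece of matrix math or texture preprocessing).
measureMedianMs('4x4 matrix multiply x1000', () => {
  const a = new Float32Array(16).fill(Math.random());
  const b = new Float32Array(16).fill(Math.random());
  const out = new Float32Array(16);
  for (let n = 0; n < 1000; ++n) {
    for (let i = 0; i < 4; ++i) {
      for (let j = 0; j < 4; ++j) {
        let sum = 0;
        for (let k = 0; k < 4; ++k) {
          sum += a[i * 4 + k] * b[k * 4 + j];
        }
        out[i * 4 + j] = sum;
      }
    }
  }
});
```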
I want to mention that some of our colleagues have been working on a really nice benchmark analysis tool for the web: https://github.com/Polymer/tachometer
We should consider taking advantage of their hard work as we explore this topic.
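To make that concrete, here is a rough sketch of what a tachometer config for a couple of macro-benchmark pages might look like. The file layout, benchmark page URLs, and field values here are assumptions for illustration only; we should check the exact schema and measurement options against the tachometer docs before committing to anything:

```json
{
  "sampleSize": 50,
  "benchmarks": [
    {
      "name": "simple-scene",
      "url": "benchmarks/macro/simple-scene.html",
      "browser": "chrome",
      "measurement": "fcp"
    },
    {
      "name": "complex-scene",
      "url": "benchmarks/macro/complex-scene.html",
      "browser": "chrome",
      "measurement": "fcp"
    }
  ]
}
```

Something along these lines could presumably be run locally and in CI with `npx tachometer --config tachometer.json`, giving us repeatable comparisons across commits.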