Feature Request: Ping measurements also during load
Maybe never stop the ping measurement and let it run continuously through the upload and download phases as well. Then show a simple graph, or just the average during the different load scenarios. With this, one could investigate bufferbloat issues very easily!
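To make the idea concrete, here is a hedged sketch (not from any existing codebase; names and phases are hypothetical) of the reporting side: RTT samples tagged with the current test phase, averaged per phase, so a load-induced RTT increase (bufferbloat) becomes visible at a glance.

```javascript
// Hypothetical sketch of per-phase latency tracking during a speed test.
// Samples are { phase, rttMs }; phases might be "idle", "download", "upload".
function averageRttByPhase(samples) {
  const sums = new Map();
  for (const { phase, rttMs } of samples) {
    const s = sums.get(phase) ?? { total: 0, count: 0 };
    s.total += rttMs;
    s.count += 1;
    sums.set(phase, s);
  }
  const averages = {};
  for (const [phase, { total, count }] of sums) {
    averages[phase] = total / count;
  }
  return averages;
}

// Hypothetical numbers: bufferbloat shows up as RTT growth under load.
const samples = [
  { phase: "idle", rttMs: 12 },
  { phase: "idle", rttMs: 14 },
  { phase: "download", rttMs: 95 },
  { phase: "download", rttMs: 105 },
];
console.log(averageRttByPhase(samples)); // { idle: 13, download: 100 }
```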
Even Ookla's speedtest.net has had this feature for quite some time now:
I'll address this in a future version, but there are known limitations as outlined in https://github.com/openspeedtest/Speed-Test/issues/33. Otherwise, we'd need to overhaul the entire setup and potentially use something like WebTransport. I'll seriously consider this for the next major rewrite, which will support multiple protocols.
When looking at what is transferred during the speedtest.net test, I noticed that the downloads and uploads are also XHR, while the ping measurements are done via a WebSocket, so I think you are right.
So I did a bit more testing and made a small PoC with a simple Node.js echo WebSocket server, using performance.now() on the client side. It turns out Firefox by default also provides only millisecond accuracy, even with performance.now(); Chromium-based browsers at least offer 100 μs. I think there is still a bit too much overhead for extremely accurate results.
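To illustrate why the clamped timer matters (a sketch, not part of the PoC itself): if performance.now() is quantized to the browser's resolution, a sub-millisecond RTT is distorted or lost entirely at 1 ms resolution, while 0.1 ms resolution keeps it within one tick.

```javascript
// Illustrative sketch: how timer resolution caps the precision of a
// WebSocket ping measurement. quantize() mimics a clamped performance.now():
// Firefox defaults to 1 ms resolution, Chromium-based browsers to 0.1 ms.
function quantize(t, resolutionMs) {
  return Math.round(t / resolutionMs) * resolutionMs;
}

// A true RTT observed through a clamped clock, starting at startMs.
function measuredRtt(trueRttMs, resolutionMs, startMs) {
  const t0 = quantize(startMs, resolutionMs);
  const t1 = quantize(startMs + trueRttMs, resolutionMs);
  return t1 - t0;
}

// At 1 ms resolution, a 0.3 ms RTT reads as 0 or 1 ms depending on phase:
console.log(measuredRtt(0.3, 1, 100.0)); // 0
console.log(measuredRtt(0.3, 1, 100.4)); // 1
// At 0.1 ms resolution, the reading stays close to the true value:
console.log(measuredRtt(0.3, 0.1, 100.0)); // ~0.3
```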
So here is what I found:
- The browser WebSocket implementation adds around 0.1 ms of latency compared to a native Rust WebSocket client when running on the same machine.
- Using a Rust WebSocket server instead of a Node.js server only saves about 0.02 ms.
- Between two machines on the same network, with native Rust <-> Rust WebSockets, I get around 0.3 ms (between 0.24 and 0.4).
- With the same two-machine setup, but a browser WebSocket client and a Rust WebSocket server, I get around 0.9 ms.
- Real latency via ICMP in this scenario is about 0.2 ms.
- So this is not ideal, but I am not sure any other protocol can achieve anything better than that.
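One way to reduce the impact of the overhead measured above (a hedged sketch with hypothetical numbers): since browser and event-loop overhead only ever adds to the true RTT, reporting the minimum over many samples, alongside the median, gives a more robust estimate than a single reading.

```javascript
// Sketch: summarize many noisy browser-side RTT samples.
// The minimum approaches the true RTT; the median shows typical overhead.
function rttStats(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  const median =
    sorted.length % 2 === 0 ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid];
  return { min: sorted[0], median };
}

// Hypothetical browser-measured WebSocket RTTs in ms:
const observed = [0.9, 1.4, 0.8, 2.1, 0.85];
console.log(rttStats(observed)); // { min: 0.8, median: 0.9 }
```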
When you add browser extensions, web filters, antivirus software, and other busy tabs to the mix, you'll see a much higher RTT.
Yes, I think a WebSocket approach to measuring latency, similar to:
is probably still the best compromise.