Jitter always shows zero, plus a request for an option to set the number of decimal places for Jitter and Latency
Hi, I have noticed that the jitter value always shows as zero. This is over an Internet connection with lots of jitter, and other speed test websites show high amounts of jitter on the same link, so I figure there is an issue with it. Also, is it possible to add an option in index.html for configuring the number of displayed digits after the decimal place for Latency (between 0 and 2 digits), and the same for Jitter?
Zero Jitter = Very good connection.
If you get a 7 ms ping two times, that means 7 - 7 = 0 ms jitter.
Jitter shows the stability of your connection: if you get pings of 7 ms and 6.9 ms, the jitter will be 0.1 ms, and we will report 0.1 as the jitter. If you can add artificial jitter to a connection, you can test this. If you run this on a congested 4G or 3G connection, you will see jitter values of more than 5 ms most of the time.
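In other words, each jitter sample is just the difference between consecutive ping times. A rough sketch of that idea (illustrative names only, not the exact code we ship):

// Illustrative sketch only: jitter samples are the absolute differences
// between consecutive ping times, in ms.
function jitterSamples(pings) {
    var samples = [];
    for (var i = 1; i < pings.length; i++) {
        samples.push(Math.abs(pings[i] - pings[i - 1]));
    }
    return samples; // e.g. jitterSamples([20, 20, 27]) -> [0, 7]
}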
Thanks for your response. I really don't think you have taken the time to understand my issue above. I know what jitter is [sigh]. The problem is that, using your software, the JITTER ALWAYS SHOWS ZERO. ALWAYS.
Even over links with high jitter and a high variance in ping times, such as 4G, IT ALWAYS SHOWS ZERO. ALWAYS.
Hence I've raised the issue.
FYI, I had increased the number of pings at the start of the test to 100 in the index.html file. I suspect that your calculation of jitter might not be taking this into account correctly.
The increased sample size is not a problem. I think it's always comparing the difference against the last ping value, so if you got a 20 ms ping two times in a test, 20 - 20 = 0 will be the least value, and that is what gets reported. Anyway, I will check this behaviour soon and post an update.
"so if you got a 20 ms ping two times in a test, 20 - 20 = 0 will be the least value. That will be reported."
It would be more useful to see the jitter value calculated as an average over all the pings in the initial ping test. So, for example, if the number of pings is 100, the jitter between each pair of consecutive packets is summed up and then divided by 99 at the end to produce a single average jitter value.
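Something along these lines, as a rough sketch (the function name is just for illustration):

// Illustrative sketch: average jitter over all consecutive ping pairs.
// For 100 pings there are 99 differences, so divide by pings.length - 1.
function averageJitter(pings) {
    var sum = 0;
    for (var i = 1; i < pings.length; i++) {
        sum += Math.abs(pings[i] - pings[i - 1]);
    }
    return sum / (pings.length - 1);
}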
Yes, we need to find a better way to report jitter. https://www.youtube.com/watch?v=zUTVJLU6W0M As you can see from the video, when we send 100 pings the highest jitter is over 2400 ms and the lowest is 0, so a plain average across both extremes will be heavily skewed and highly inaccurate. Maybe we need to calculate an SMA (simple moving average) or something else.
(20 samples) [0, 0.1, 0.1, 0.2, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5, 0.7, 0.7, 1.3, 1.4, 2, 4.5, 5.8, 8.3, 8.9, 12.9]
(10 samples: the 50% with the lowest values) [0, 0.1, 0.1, 0.2, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5]
sum / length of the 10 samples: 2.5 / 10 = 0.25
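In code, that would look roughly like this (just an illustrative sketch, not the shipped implementation):

// Illustrative sketch matching the worked example above: keep the lowest
// 50% of the sorted jitter samples and average only those.
var samples20 = [0, 0.1, 0.1, 0.2, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5,
                 0.7, 0.7, 1.3, 1.4, 2, 4.5, 5.8, 8.3, 8.9, 12.9];
var lowestHalf = samples20.slice().sort(function (a, b) { return a - b; })
                          .slice(0, samples20.length / 2);
var total = lowestHalf.reduce(function (acc, v) { return acc + v; }, 0);
var jitter = total / lowestHalf.length; // 2.5 / 10 = 0.25 (give or take float rounding)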

As far as I am concerned, the idea of an average jitter measurement is to average across all values, not a percentile of them. But if you wanted to include a percentile variable in the code so the user can choose the x% of lowest jitter values used in the calculation, that's probably not a bad idea, although if you ask me it is an unnecessary complication for your code. I figured the only delay-variation values that should be excluded are those around a missed ping response. For example, say you are halfway through the pings and one drops; in that case you would need to exclude two jitter values from the calculation and reduce the total sample count by two. But this means each ping has to be identifiable with a sequence number so you know which packet was dropped. I guess that does start to complicate things, since you are probably not tracking packet sequence numbers, or are you?
We cannot identify a drop, because TCP will hide that info. We can track errors, but that isn't useful here. I can provide a variable in index.html where you can adjust the sample size.
var jitterFinalSample = 0.5; // 1 = 100%, 0.5 = 50%, 0.1 = 10%, and so on
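Roughly, that fraction would be applied to the sorted jitter samples like this (illustrative sketch only, not the actual repo code):

// Illustrative sketch: report the average of the lowest portion of the
// sorted jitter samples, as selected by jitterFinalSample above.
function reportedJitter(samples) {
    var sorted = samples.slice().sort(function (a, b) { return a - b; });
    var keep = Math.max(1, Math.round(sorted.length * jitterFinalSample));
    var sum = 0;
    for (var i = 0; i < keep; i++) {
        sum += sorted[i];
    }
    return sum / keep;
}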
OK, that makes it difficult. A jitter sample-size percentage is a fair compromise then. Thanks for your work on this.
OpenSpeedTest updated to 2.5.3
- Improved jitter reporting.
- Run a speed test by hitting the Enter key on the keyboard.