<div dir="ltr">I am working on a multi-location jitter test (sorry PDV!) and it is showing a lot of promise.<div>For the purposes of reporting jitter, what kind of time measurement horizon is acceptable </div><div>and what is the +/- output actually based on, statistically ?</div><div><br></div><div>For example - is one minute or more of jitter measurements, with the +/- being</div><div>the 2rd std deviation, reasonable or is there some generally accepted definition ?<br><br>ping reports an "mdev" which is<br>SQRT(SUM(RTT*RTT) / N – (SUM(RTT)/N)^2)<br>but I've seen jitter defined as maximum and minimum RTT around the average<br>however that seems very influenced by one outlier measurement.<br><br>thanks<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, May 7, 2015 at 2:29 PM, Mikael Abrahamsson <span dir="ltr"><<a href="mailto:swmike@swm.pp.se" target="_blank">swmike@swm.pp.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Wed, 6 May 2015, Jonathan Morton wrote:<br>
On Thu, May 7, 2015 at 2:29 PM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:

> On Wed, 6 May 2015, Jonathan Morton wrote:
>
>> So, as a proposed methodology, how does this sound:
>>
>> Determine a reasonable ballpark figure for typical codec and jitter-buffer
>> delay (one way). Fix this as a constant value for the benchmark.
>
> Commercial grade VoIP systems running in a controlled environment typically
> (in my experience) come with a 40 ms PDV (Packet Delay Variation; let's not
> call it jitter, the timing people get upset if you call it jitter) buffer.
> These systems typically do not work well over the Internet, as we here all
> know: 40 ms is quite a low PDV for a FIFO-based Internet access. Applications
> actually designed to work on the Internet have PDV buffers that adapt to the
> PDV they see, so they can both grow and shrink over the course of a call.
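As an aside, my mental model of that adaptive behaviour is something like the
sketch below. The 1/16 smoothing is the RFC 3550 interarrival-jitter estimator;
the multiplier and bounds are made-up illustration values, not taken from any
real implementation:

class AdaptivePdvBuffer:
    """Sketch of an adaptive de-jitter buffer target, sized from observed PDV."""

    def __init__(self, multiplier=3.0, floor_ms=10.0, ceil_ms=200.0):
        self.jitter_ms = 0.0          # smoothed estimate of delay variation
        self.prev_transit_ms = None   # transit time of the previous packet
        self.multiplier = multiplier
        self.floor_ms = floor_ms
        self.ceil_ms = ceil_ms

    def on_packet(self, transit_ms):
        """Feed one packet's transit time (arrival minus send timestamp), in
        milliseconds; returns the buffer target, which tracks observed PDV."""
        if self.prev_transit_ms is not None:
            d = abs(transit_ms - self.prev_transit_ms)
            self.jitter_ms += (d - self.jitter_ms) / 16.0  # RFC 3550 smoothing
        self.prev_transit_ms = transit_ms
        return min(self.ceil_ms, max(self.floor_ms, self.multiplier * self.jitter_ms))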
> I'd say a reasonable ballpark figure for PDV for VoIP and video conferencing
> is in the 50-100 ms range or so, where lower is of course better. It's
> basically impossible to have really low PDV on a 1 megabit/s link, because a
> full-size 1500 byte packet will take close to 10 ms to transmit, but it's
> perfectly feasible to keep it under 10-20 ms as the link speed increases. If
> we say that 1 megabit/s (typical ADSL upstream speed) is the lower bound of
> speed at which one can expect VoIP to work together with other Internet
> traffic, then 50-100 ms should be technically attainable if the
> vendor/operator actually tries to reduce bufferbloat/PDV.
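(For reference, the arithmetic behind that figure: 1500 bytes * 8 = 12,000
bits, and 12,000 bits / 1,000,000 bit/s = 12 ms of serialization delay per
full-size packet at 1 megabit/s; at 10 megabit/s the same packet takes 1.2 ms.)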
>
>> Measure the maximum induced delays in each direction.
>
> Depending on the length of the test, it might make sense to aim for the 95th
> or 99th percentile, i.e. throw away the one or few worst values, as these
> might be outliers. But generally I agree with your proposed terminology.
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se