<div dir="ltr">I've made some changes and now this test displays the "PDV" column as<div>simply the recent average increase on the best latency seen, as usually the</div><div>best latency seen is pretty stable. (It also should work in firefox too now).</div><div><br></div><div>In addition, every 30 seconds, a grade is printed next to a timestamp.</div><div>I know how we all like grades :) the grade is based on the average of all</div><div>the PDVs, and ranges from A+ (5 milliseconds or less) down to F for fail.</div><div><br></div><div>I'm not 100% happy with this PDV figure, a stellar connection - and no internet</div><div>congestion - will show a low number that is stable and an A+ grade. A connection</div><div>with jitter will show a PDV that is half the average jitter amplitude. So far so good.</div><div><br></div><div>But a connection with almost no jitter, but that has visibly higher than minimal</div><div>latency, will show a failing grade. And if this is a jitter / packet delay variation </div><div>type test, I'm not sure about this situation. One could say it is a very good </div><div>connection but because it is 30ms higher than just one revealed optimal</div><div>ping, yet it might get a "D". Not sure how common this state of things could</div><div>be though.</div><div><br></div><div>Also since it is a global test a component of the grade is also internet</div><div>backbone congestion, and not necessarily an ISP or equipment issue.</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, May 8, 2015 at 9:09 AM, Dave Taht <span dir="ltr"><<a href="mailto:dave.taht@gmail.com" target="_blank">dave.taht@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Thu, May 7, 2015 at 3:27 PM, Dave Taht <<a href="mailto:dave.taht@gmail.com">dave.taht@gmail.com</a>> wrote:<br>
</span><div><div class="h5">> On Thu, May 7, 2015 at 7:45 AM, Simon Barber <<a href="mailto:simon@superduper.net">simon@superduper.net</a>> wrote:<br>
>> The key figure for VoIP is maximum latency, or perhaps somewhere around 99th<br>
>> percentile. Voice packets cannot be played out if they are late, so how late<br>
>> they are is the only thing that matters. If many packets are early but more<br>
>> than a very small number are late, then the jitter buffer has to adjust to<br>
>> handle the late packets. Adjusting the jitter buffer disrupts the<br>
>> conversation, so ideally adjustments are infrequent. When maximum latency<br>
>> suddenly increases it becomes necessary to increase the buffer fairly<br>
>> quickly to avoid a dropout in the conversation. Buffer reductions can be<br>
>> hidden by waiting for gaps in conversation. People get used to the acoustic<br>
>> round trip latency and learn how quickly to expect a reply from the other<br>
>> person (unless latency is really too high), but adjustments interfere with<br>
>> this learned expectation, making it hard to interpret why the other person<br>
>> has paused. Thus adjustments to the buffering should be as infrequent as<br>
>> possible.<br>
>><br>
>> Codel measures and tracks minimum latency in its inner 'interval' loop. For<br>
>> VoIP the maximum is what counts. You can call it minimum + jitter, but the<br>
>> maximum is the important thing (not the absolute maximum, since a very small<br>
>> number of late packets are tolerable, but perhaps the 99th percentile).<br>
>><br>
>> During a conversation it will take some time at the start to learn the<br>
>> characteristics of the link, but ideally the jitter buffer algorithm will<br>
>> quickly get to a place where few adjustments are made. The more conservative<br>
>> the buffer (higher delay above minimum) the less likely a future adjustment<br>
>> will be needed, hence a tendency towards larger buffers (and more delay).<br>
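<br>
A rough sketch of that kind of playout-delay tracking (illustrative only; the window size, percentile, and names are assumptions, not from any real VoIP stack):<br>
<pre>
// Target roughly the 99th percentile of recent one-way delays: grow the
// playout delay immediately when packets would arrive late, shrink it
// only during gaps in the conversation where it won't be noticed.
class PlayoutBuffer {
  private delays: number[] = [];        // recent one-way delays (ms)
  private targetMs = 0;                 // current playout delay

  onPacket(delayMs: number, inTalkSpurt: boolean): void {
    this.delays.push(delayMs);
    if (this.delays.length > 500) this.delays.shift();   // rolling window

    const sorted = [...this.delays].sort((a, b) => a - b);
    const p99 = sorted[Math.floor(0.99 * (sorted.length - 1))];

    if (p99 > this.targetMs) {
      this.targetMs = p99;              // grow quickly to avoid a dropout
    } else if (!inTalkSpurt) {
      this.targetMs = p99;              // shrink only where it won't be heard
    }
  }
}
</pre>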
>><br>
>> Priority queueing is perfect for VoIP, since it can keep the jitter at a<br>
>> single hop down to the transmission time for a single maximum size packet.<br>
>> Fair Queueing will often achieve the same thing, since VoIP streams are<br>
>> often the lowest bandwidth ongoing stream on the link.<br>
><br>
> Unfortunately it is more nuanced than this. Not for the first time<br>
> do I wish that email contained math, and/or we had put together a paper<br>
> for this containing the relevant math. I do have a spreadsheet lying<br>
> around here somewhere...<br>
><br>
> In the case of a drop tail queue, jitter is a function of the total<br>
> amount of data outstanding on the link by all the flows. A single<br>
> big fat flow experiencing a drop will drop its buffer occupancy<br>
> (and thus its effect on other flows) by a lot on the next RTT. However<br>
> a lot of fat flows will drop by less if drops are few. Total delay<br>
> is the sum of all packets outstanding on the link.<br>
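<br>
As a back-of-the-envelope illustration of that (numbers picked arbitrarily, not measured):<br>
<pre>
// Drop-tail: every packet, voip included, waits behind everything queued.
const linkBps = 10e6;                    // assumed 10 Mbit/s uplink
const bytesOutstanding = 256 * 1024;     // assumed total bytes queued, all flows
const delayMs = (bytesOutstanding * 8 / linkBps) * 1000;
console.log(delayMs.toFixed(0));         // ~210 ms of queue delay for that load
</pre>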
><br>
> In the case of stochastic packet-fair queuing, jitter is a function<br>
> of the total number of bytes in each packet outstanding on the sum<br>
> of the total number of flows. The total delay is the sum of the<br>
> bytes delivered per packet per flow.<br>
><br>
> In the case of DRR, jitter is a function of the total number of bytes<br>
> allowed by the quantum per flow outstanding on the link. The total<br>
> delay experienced by the flow is a function of the amounts of<br>
> bytes delivered with the number of flows.<br>
><br>
> In the case of fq_codel, jitter is a function of the total number<br>
> of bytes allowed by the quantum per flow outstanding on the link,<br>
> with the sparse optimization pushing flows with no queue<br>
> in the available window to the front. Furthermore<br>
> codel acts to shorten the lengths of the queues overall.<br>
><br>
> fq_codel's delay, when the arriving packet on a new flow can be serviced<br>
> in less time than the total of the flows' quantums, is a function<br>
> of the total number of flows that are not also building queues. When<br>
> the total service time for all flows exceeds the interval the voip<br>
> packet is delivered in, AND the quantum under which the algorithm<br>
> is delivering, fq_codel degrades to DRR behavior. (In other words,<br>
> given enough queuing flows or enough new flows, you can steadily<br>
> accrue delay on a voip flow under fq_codel). Predicting jitter is<br>
> really hard to do here, but still pretty minimal compared to the<br>
> alternatives above.<br>
<br>
</div></div>And to complexify it further, if the total flows' service time exceeds<br>
the interval on which the voip flow is being delivered, the voip flow<br>
can deliver a fq_codel quantum's worth of packets back to back.<br>
<br>
Boy I wish I could explain all this better, and/or observe the results<br>
on real jitter buffers in real apps.<br>
<span class=""><br>
><br>
> in the above 3 cases, hash collisions permute the result. Cake and<br>
> fq_pie have a lot less collisions.<br>
<br>
</span>Which is not necessarily a panacea either. Perfect flow isolation<br>
(cake) to hundreds of flows might in some cases be worse than<br>
suffering hash collisions (fq_codel) for some workloads. sch_fq and<br>
fq_pie have *perfect* flow isolation, and I worry about the effects of<br>
tons and tons of short flows (think ddos attacks) - I am comforted by<br>
collisions! I tend to think there is an ideal ratio of flows<br>
allowed without queue management versus available bandwidth that we<br>
don't know yet - as well as think that for larger numbers of flows we<br>
should be inheriting more global environmental state (the state of the link and<br>
all queues) than we currently do in initializing both cake and<br>
fq_codel queues.<br>
<br>
Recently I did some tests of 450+ flows (details on the cake mailing<br>
list) against sch_fq, which got hopelessly buried (10,000 packets in<br>
queue). cake and fq_pie did a lot better.<br>
<div class="HOEnZb"><div class="h5"><br>
> I am generally sanguine about this along the edge - from the internet,<br>
> packets cannot be easily classified, yet most edge networks have more<br>
> bandwidth from that direction, and are thus able to fit WAY more flows in<br>
> under 10ms. Outbound, from the home or small business, some<br>
> classification can be effectively used in an X-tier shaper (or cake) to<br>
> ensure better priority (still with fair) queuing for this special<br>
> class of application - not that this is an issue under most home<br>
> workloads. We think. We really need to do more benchmarking of web and<br>
> dash traffic loads.<br>
><br>
>> Simon<br>
>><br>
>> Sent with AquaMail for Android<br>
>> <a href="http://www.aqua-mail.com" target="_blank">http://www.aqua-mail.com</a><br>
>><br>
>> On May 7, 2015 6:16:00 AM jb <<a href="mailto:justin@dslr.net">justin@dslr.net</a>> wrote:<br>
>>><br>
>>> I thought that would be more sane too. I see it mentioned online that PDV is a<br>
>>> Gaussian distribution (around the mean), but it looks more like half a bell<br>
>>> curve, with most numbers near the lowest latency seen, and getting<br>
>>> progressively worse with less frequency.<br>
>>> At least for DSL connections on good ISPs that scenario seems more<br>
>>> frequent.<br>
>>> You "usually" get the best latency and "sometimes" get spikes or fuzz on<br>
>>> top of it.<br>
>>><br>
>>> By the way, after I posted I discovered Firefox has an issue with this test,<br>
>>> so I had to block it with a message; my apologies if anyone wasted time<br>
>>> trying it with FF. Hopefully I can figure out why.<br>
>>><br>
>>><br>
>>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <<a href="mailto:swmike@swm.pp.se">swmike@swm.pp.se</a>><br>
>>> wrote:<br>
>>>><br>
>>>> On Thu, 7 May 2015, jb wrote:<br>
>>>><br>
>>>>> There is a WebSocket-based jitter tester now. It is very early stage,<br>
>>>>> but it works ok.<br>
>>>>><br>
>>>>> <a href="http://www.dslreports.com/speedtest?radar=1" target="_blank">http://www.dslreports.com/speedtest?radar=1</a><br>
>>>>><br>
>>>>> So the latency displayed is the mean latency from a rolling 60-sample<br>
>>>>> buffer. Minimum latency is also displayed, and the +/- PDV value is the mean<br>
>>>>> difference between sequential pings in that same rolling buffer. It is quite<br>
>>>>> similar to the std. dev., actually (not shown).<br>
>>>><br>
>>>><br>
>>>> So I think there are two schools here: either you take the average and<br>
>>>> display +/- from that, or - as I prefer - you take the lowest of the last<br>
>>>> 100 samples (or something) and then display PDV from that "floor" value, i.e.<br>
>>>> PDV can't ever be negative, it can only be positive.<br>
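<br>
For concreteness, the two schools side by side (a sketch only; function names are made up):<br>
<pre>
// Two ways to summarise the same rolling buffer of ping samples (ms).
function pdvFromMean(samples: number[]): number {
  // "school 1": mean absolute difference between sequential pings
  if (samples.length < 2) return 0;
  let sum = 0;
  for (let i = 1; i < samples.length; i++) sum += Math.abs(samples[i] - samples[i - 1]);
  return sum / (samples.length - 1);
}

function pdvFromFloor(samples: number[]): number {
  // "school 2": variation above the lowest sample seen, never negative
  const floor = Math.min(...samples);
  return samples.reduce((acc, s) => acc + (s - floor), 0) / samples.length;
}
</pre>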
>>>><br>
>>>> Apart from that, the above multi-place RTT test is really really nice,<br>
>>>> thanks for doing this!<br>
>>>><br>
>>>><br>
>>>> --<br>
>>>> Mikael Abrahamsson email: <a href="mailto:swmike@swm.pp.se">swmike@swm.pp.se</a><br>
>>><br>
>>><br>
>>> _______________________________________________<br>
>>> Bloat mailing list<br>
>>> <a href="mailto:Bloat@lists.bufferbloat.net">Bloat@lists.bufferbloat.net</a><br>
>>> <a href="https://lists.bufferbloat.net/listinfo/bloat" target="_blank">https://lists.bufferbloat.net/listinfo/bloat</a><br>
>>><br>
>><br>
>> _______________________________________________<br>
>> Bloat mailing list<br>
>> <a href="mailto:Bloat@lists.bufferbloat.net">Bloat@lists.bufferbloat.net</a><br>
>> <a href="https://lists.bufferbloat.net/listinfo/bloat" target="_blank">https://lists.bufferbloat.net/listinfo/bloat</a><br>
>><br>
><br>
><br>
><br>
> --<br>
> Dave Täht<br>
> Open Networking needs **Open Source Hardware**<br>
><br>
> <a href="https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67" target="_blank">https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67</a><br>
<br>
<br>
<br>
--<br>
Dave Täht<br>
Open Networking needs **Open Source Hardware**<br>
<br>
<a href="https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67" target="_blank">https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67</a><br>
</div></div></blockquote></div><br></div>