[Bloat] DSLReports Speed Test has latency measurement built-in
Dave Taht
dave.taht at gmail.com
Thu May 7 18:27:28 EDT 2015
On Thu, May 7, 2015 at 7:45 AM, Simon Barber <simon at superduper.net> wrote:
> The key figure for VoIP is maximum latency, or perhaps somewhere around the
> 99th percentile. Voice packets cannot be played out if they are late, so how late
> they are is the only thing that matters. If many packets are early but more
> than a very small number are late, then the jitter buffer has to adjust to
> handle the late packets. Adjusting the jitter buffer disrupts the
> conversation, so ideally adjustments are infrequent. When maximum latency
> suddenly increases it becomes necessary to increase the buffer fairly
> quickly to avoid a dropout in the conversation. Buffer reductions can be
> hidden by waiting for gaps in conversation. People get used to the acoustic
> round trip latency and learn how quickly to expect a reply from the other
> person (unless latency is really too high), but adjustments interfere with
> this learned expectation, so make it hard to interpret why the other person
> has paused. Thus adjustments to the buffering should be as infrequent as
> possible.
>
> Codel measures and tracks minimum latency in its inner 'interval' loop. For
> VoIP the maximum is what counts. You can call it minimum + jitter, but the
> maximum is the important thing (not the absolute maximum, since a very small
> number of late packets are tolerable, but perhaps the 99th percentile).
>
> During a conversation it will take some time at the start to learn the
> characteristics of the link, but ideally the jitter buffer algorithm will
> quickly get to a place where few adjustments are made. The more conservative
> the buffer (higher delay above minimum) the less likely a future adjustment
> will be needed, hence a tendency towards larger buffers (and more delay).
>
> Priority queueing is perfect for VoIP, since it can keep the jitter at a
> single hop down to the transmission time for a single maximum size packet.
> Fair Queueing will often achieve the same thing, since a VoIP stream is
> often the lowest-bandwidth ongoing stream on the link.
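
A rough sketch of the adaptive jitter buffer Simon describes (python,
purely illustrative - the window size, target percentile, and slack are
assumptions of mine, not anything from a real codec):

def percentile(samples, p):
    # p-th percentile (0-100) of the delay samples, nearest-rank style
    s = sorted(samples)
    return s[min(len(s) - 1, int(round(p / 100.0 * (len(s) - 1))))]

class JitterBuffer:
    def __init__(self, window=500, target_pct=99.0, slack_ms=2.0):
        self.delays = []        # recent one-way delay samples (ms)
        self.window = window
        self.target_pct = target_pct
        self.slack_ms = slack_ms
        self.extra_ms = 0.0     # buffering held above the delay floor

    def on_packet(self, delay_ms, in_talkspurt):
        self.delays.append(delay_ms)
        if len(self.delays) > self.window:
            self.delays.pop(0)
        floor = min(self.delays)
        want = percentile(self.delays, self.target_pct) - floor + self.slack_ms
        if want > self.extra_ms:
            # late packets would be dropped: grow the buffer right away
            self.extra_ms = want
        elif not in_talkspurt:
            # shrink only during silence, so the adjustment is not heard
            self.extra_ms = want
        return floor + self.extra_ms    # target playout delay, ms
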
Unfortunately it is more nuanced than this. Not for the first time
do I wish that email contained math, and/or that we had put together
a paper containing the relevant math for this. I do have a spreadsheet
lying around here somewhere...
In the case of a drop tail queue, jitter is a function of the total
amount of data outstanding on the link from all the flows. A single
big fat flow experiencing a drop will reduce its buffer occupancy
(and thus its effect on other flows) by a lot on the next RTT; however,
a lot of fat flows will each back off by less when drops are few. Total
delay is the sum of all packets outstanding on the link, drained at the
link rate.
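
To put a rough number on it (python, back of the envelope; the link
rate and buffer occupancy are made-up figures, not measurements):

# drop-tail delay is just (bytes queued ahead of you) / (link rate),
# no matter which flows own those bytes
link_rate_bps     = 5_000_000    # assume a 5 Mbit/s uplink
outstanding_bytes = 256 * 1024   # assume 256 KB queued from all flows combined
delay_ms = outstanding_bytes * 8 / link_rate_bps * 1000
print(f"drop-tail queueing delay: {delay_ms:.0f} ms")   # ~419 ms
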
In the case of stochastic packet-fair queuing, jitter is a function
of one packet's worth of bytes outstanding for each flow, summed over
the total number of flows. The total delay is the sum of the bytes
delivered, one packet per flow, per round.
In the case of DRR, jitter is a function of the total number of bytes
allowed by the quantum, per flow, outstanding on the link. The total
delay experienced by a flow is a function of the number of bytes
delivered per quantum multiplied by the number of flows.
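
Roughly, for those last two cases (python, illustrative; the link rate,
flow count, and a quantum of one MTU are all assumptions):

# worst-case wait for a just-arrived voip packet, ignoring hash collisions:
# it sits behind about one service opportunity for every other active flow
link_rate_bps = 20_000_000   # assume a 20 Mbit/s link
n_flows       = 32           # assume 32 active bulk flows
mtu           = 1514         # bytes per packet under packet-fair (SFQ-style) queuing
quantum       = 1514         # bytes per round under DRR (default quantum of one MTU)

sfq_wait_ms = n_flows * mtu * 8 / link_rate_bps * 1000
drr_wait_ms = n_flows * quantum * 8 / link_rate_bps * 1000
print(f"SFQ: up to ~{sfq_wait_ms:.1f} ms behind one packet per flow")
print(f"DRR: up to ~{drr_wait_ms:.1f} ms behind one quantum per flow")
# with a quantum of one MTU the two bounds coincide; a larger quantum
# makes the DRR figure proportionally worse
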
In the case of fq_codel, jitter is a function of the total number
of bytes allowed by the quantum per flow outstanding on the link,
with the sparse-flow optimization pushing flows that have built no
queue in the available window to the front. Furthermore,
codel acts to shorten the lengths of the queues overall.
fq_codel's delay, when the arriving packet of a new flow can be serviced
in less time than the sum of all the flows' quantums, is a function
of the total number of flows that are not also building queues. When
the total service time for all flows, given the quantum under which the
algorithm is delivering, exceeds the interval the voip packets arrive
in, fq_codel degrades to DRR behavior. (In other words, given enough
queue-building flows or enough new flows, you can steadily accrue delay
on a voip flow under fq_codel.) Predicting jitter is really hard to do
here, but it is still pretty minimal compared to the alternatives above.
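
A crude way to see where that crossover sits (python, illustrative; the
link rate and voip framing interval are assumptions):

# a voip flow keeps getting the sparse/new-flow boost roughly as long as
# its own packet spacing exceeds the time to hand one quantum to every
# backlogged flow; past that point it rotates with the bulk flows
link_rate_bps   = 20_000_000   # assume a 20 Mbit/s link
quantum         = 1514         # fq_codel's default quantum, in bytes
voip_interval_s = 0.020        # assume one voip packet every 20 ms

for backlogged in (8, 16, 32, 64, 128):
    round_s = backlogged * quantum * 8 / link_rate_bps
    verdict = "stays sparse" if round_s < voip_interval_s else "degrades toward DRR rotation"
    print(f"{backlogged:4d} backlogged flows: one round = {round_s * 1000:5.1f} ms -> {verdict}")
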
In the above three cases, hash collisions perturb the result. Cake and
fq_pie have a lot fewer collisions.
I am generally sanguine about this along the edge. Inbound, from the
internet, packets cannot be easily classified, yet most edge networks
have more bandwidth in that direction and can thus fit WAY more flows
in under 10ms. Outbound, from the home or small business, some
classification can be effectively used in an X-tier shaper (or cake) to
ensure better priority (still with fair) queuing for this special
class of application - not that this is an issue under most home
workloads. We think. We really need to do more benchmarking of web and
DASH traffic loads.
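
A minimal sketch of that tiered idea (python; the tier names and the
strict-priority-plus-fair structure are illustrative assumptions, not
cake's actual algorithm):

from collections import OrderedDict, deque

TIERS = ("voice", "interactive", "bulk")   # invented tier names

class TieredFQ:
    """Strict priority between tiers, round-robin fairness inside each."""
    def __init__(self):
        self.tiers = {t: OrderedDict() for t in TIERS}  # tier -> flow -> packets

    def enqueue(self, tier, flow, pkt):
        self.tiers[tier].setdefault(flow, deque()).append(pkt)

    def dequeue(self):
        for tier in TIERS:                    # higher tiers always served first
            flows = self.tiers[tier]
            if flows:
                flow, q = next(iter(flows.items()))
                pkt = q.popleft()
                del flows[flow]
                if q:
                    flows[flow] = q           # rotate the flow to the back of its tier
                return tier, flow, pkt
        return None

# a voip flow classified into "voice" never waits behind bulk traffic,
# while two voip flows still share that tier fairly
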
> Simon
>
>
> On May 7, 2015 6:16:00 AM jb <justin at dslr.net> wrote:
>>
>> I thought that would be more sane too. I see mentioned online that PDV is a
>> gaussian distribution (around the mean) but it looks more like half a bell
>> curve, with most numbers near the lowest latency seen, and getting
>> progressively worse with less frequency.
>> At least for DSL connections on good ISPs that scenario seems more
>> frequent.
>> You "usually" get the best latency and "sometimes" get spikes or fuzz on
>> top of it.
>>
>> By the way, after I posted I discovered Firefox has an issue with this
>> test, so I had to block it with a message; my apologies if anyone wasted
>> time trying it with FF. Hopefully I can figure out why.
>>
>>
>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <swmike at swm.pp.se>
>> wrote:
>>>
>>> On Thu, 7 May 2015, jb wrote:
>>>
>>>> There is a web socket based jitter tester now. It is very early stage
>>>> but
>>>> works ok.
>>>>
>>>> http://www.dslreports.com/speedtest?radar=1
>>>>
>>>> So the latency displayed is the mean latency from a rolling 60 sample
>>>> buffer. Minimum latency is also displayed, and the +/- PDV value is the mean
>>>> difference between sequential pings in that same rolling buffer. It is quite
>>>> similar to the std. dev. actually (not shown).
>>>
>>>
>>> So I think there are two schools here: either you take the average and
>>> display +/- from that, or (what I think I prefer) take the lowest of the
>>> last 100 samples (or something) and then display PDV from that "floor"
>>> value, i.e. PDV can't ever be negative, it can only be positive.
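
A quick sketch of both approaches (python, illustrative; only ten
made-up RTT samples here, where the real test uses a rolling 60 or 100
sample window):

# "PDV" computed two ways over a window of RTT samples (ms):
#  - mean absolute difference between consecutive samples
#  - each sample's excess over the window minimum (the "floor" variant)
def pdv_sequential(samples):
    return sum(abs(b - a) for a, b in zip(samples, samples[1:])) / (len(samples) - 1)

def pdv_above_floor(samples):
    floor = min(samples)
    return [s - floor for s in samples]   # never negative, by construction

rtts = [22, 23, 22, 29, 22, 22, 41, 23, 22, 22]   # made-up samples
print("mean latency         :", sum(rtts) / len(rtts))
print("minimum latency      :", min(rtts))
print("+/- PDV (sequential) :", round(pdv_sequential(rtts), 1))
print("max PDV above floor  :", max(pdv_above_floor(rtts)))
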
>>>
>>> Apart from that, the above multi-place RTT test is really really nice,
>>> thanks for doing this!
>>>
>>>
>>> --
>>> Mikael Abrahamsson email: swmike at swm.pp.se
>>
>>
--
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67