From: jb <justinbeech@gmail.com>
To: bloat <bloat@lists.bufferbloat.net>
Date: Fri, 8 May 2015 12:05:13 +1000
Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in

I've made some changes, and now this test displays the "PDV" column as
simply the recent average increase over the best latency seen, since the
best latency seen is usually pretty stable. (It should also work in
Firefox now.)

In addition, every 30 seconds a grade is printed next to a timestamp.
I know how we all like grades :) The grade is based on the average of all
the PDVs, and ranges from A+ (5 milliseconds or less) down to F for fail.

I'm not 100% happy with this PDV figure. A stellar connection - and no
internet congestion - will show a low, stable number and an A+ grade. A
connection with jitter will show a PDV that is half the average jitter
amplitude. So far so good. But a connection with almost no jitter, yet
with visibly higher than minimal latency, will show a failing grade. And
if this is a jitter / packet delay variation type test, I'm not sure about
that situation: one could call it a very good connection, yet because it
sits 30ms above a single observed optimal ping it might get a "D". I'm not
sure how common that state of things could be, though.

Also, since it is a global test, a component of the grade is internet
backbone congestion, and not necessarily an ISP or equipment issue.
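(To make that concrete, here is a rough Python sketch of the revised PDV
column and grade logic described above. Only the A+ cutoff of 5 ms is
real; the other cutoffs, and all the names, are made up for illustration
and are not the test's actual code.)

    # Sketch of the revised PDV column: mean increase over the best
    # latency seen in a rolling buffer, so it can never go negative.
    def pdv_ms(samples_ms):
        best = min(samples_ms)          # best latency seen; usually stable
        return sum(s - best for s in samples_ms) / len(samples_ms)

    def grade(avg_pdv_ms):
        # A+ is 5 ms or less (as above); the lower cutoffs are guesses.
        for cutoff, g in [(5, "A+"), (10, "A"), (20, "B"), (40, "C"), (80, "D")]:
            if avg_pdv_ms <= cutoff:
                return g
        return "F"

    samples = [42, 45, 43, 60, 44]
    print(pdv_ms(samples), grade(pdv_ms(samples)))   # 4.8 A+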
On Fri, May 8, 2015 at 9:09 AM, Dave Taht wrote:
> On Thu, May 7, 2015 at 3:27 PM, Dave Taht wrote:
> > On Thu, May 7, 2015 at 7:45 AM, Simon Barber wrote:
> >> The key figure for VoIP is maximum latency, or perhaps somewhere
> >> around the 99th percentile. Voice packets cannot be played out if they
> >> are late, so how late they are is the only thing that matters. If many
> >> packets are early but more than a very small number are late, then the
> >> jitter buffer has to adjust to handle the late packets. Adjusting the
> >> jitter buffer disrupts the conversation, so ideally adjustments are
> >> infrequent. When maximum latency suddenly increases it becomes
> >> necessary to increase the buffer fairly quickly to avoid a dropout in
> >> the conversation. Buffer reductions can be hidden by waiting for gaps
> >> in conversation. People get used to the acoustic round trip latency
> >> and learn how quickly to expect a reply from the other person (unless
> >> latency is really too high), but adjustments interfere with this
> >> learned expectation, so they make it hard to interpret why the other
> >> person has paused. Thus adjustments to the buffering should be as
> >> infrequent as possible.
> >>
> >> Codel measures and tracks minimum latency in its inner 'interval'
> >> loop. For VoIP the maximum is what counts. You can call it minimum +
> >> jitter, but the maximum is the important thing (not the absolute
> >> maximum, since a very small number of late packets are tolerable, but
> >> perhaps the 99th percentile).
> >>
> >> During a conversation it will take some time at the start to learn the
> >> characteristics of the link, but ideally the jitter buffer algorithm
> >> will quickly get to a place where few adjustments are made. The more
> >> conservative the buffer (higher delay above minimum), the less likely
> >> a future adjustment will be needed, hence a tendency towards larger
> >> buffers (and more delay).
> >>
> >> Priority queueing is perfect for VoIP, since it can keep the jitter at
> >> a single hop down to the transmission time for a single maximum size
> >> packet. Fair queueing will often achieve the same thing, since VoIP
> >> streams are often the lowest bandwidth ongoing stream on the link.
> >
> > Unfortunately this is more nuanced than that. Not for the first time
> > do I wish that email contained math, and/or that we had put together
> > a paper containing the relevant math for this. I do have a spreadsheet
> > lying around here somewhere...
> >
> > In the case of a drop tail queue, jitter is a function of the total
> > amount of data outstanding on the link by all the flows. A single
> > big fat flow experiencing a drop will drop its buffer occupancy
> > (and thus its effect on other flows) by a lot on the next RTT. However
> > a lot of fat flows will drop by less if drops are few. Total delay
> > is the sum of all packets outstanding on the link.
> >
> > In the case of stochastic packet-fair queuing, jitter is a function
> > of the total number of bytes in each packet outstanding over the sum
> > of the total number of flows. The total delay is the sum of the
> > bytes delivered per packet per flow.
> >
> > In the case of DRR, jitter is a function of the total number of bytes
> > allowed by the quantum per flow outstanding on the link. The total
> > delay experienced by the flow is a function of the amount of
> > bytes delivered with the number of flows.
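(Simon's 99th-percentile point in code form, as a minimal sketch: pick a
playout delay that covers all but the latest ~1% of packets. The names and
the sliding-window approach are assumptions, not any real VoIP stack.)

    # Size a jitter buffer from a window of observed one-way delays so
    # that roughly 99% of packets arrive before their playout time.
    def target_playout_delay_ms(delays_ms, percentile=0.99):
        ordered = sorted(delays_ms)
        idx = min(int(len(ordered) * percentile), len(ordered) - 1)
        return ordered[idx]

    # Growing the buffer quickly on a late spike, and shrinking it only
    # during gaps in conversation, keeps adjustments infrequent.
    window = [20, 22, 21, 25, 23, 60, 22, 24, 21, 23]
    print(target_playout_delay_ms(window))   # 60 with this tiny window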
> >
> > In the case of fq_codel, jitter is a function of the total number
> > of bytes allowed by the quantum per flow outstanding on the link,
> > with the sparse optimization pushing flows with no queue in the
> > available window to the front. Furthermore codel acts to shorten
> > the lengths of the queues overall.
> >
> > fq_codel's delay, when the newly arriving flow's packet can be
> > serviced in less time than the total of all flows' quantums, is a
> > function of the total number of flows that are not also building
> > queues. When the total service time for all flows exceeds the interval
> > the voip packet is delivered in, AND the quantum under which the
> > algorithm is delivering, fq_codel degrades to DRR behavior. (In other
> > words, given enough queuing flows or enough new flows, you can
> > steadily accrue delay on a voip flow under fq_codel.) Predicting
> > jitter is really hard to do here, but it is still pretty minimal
> > compared to the alternatives above.
>
> And to complexify it further, if the total flows' service time exceeds
> the interval on which the voip flow is being delivered, the voip flow
> can deliver a fq_codel quantum's worth of packets back to back.
>
> Boy, I wish I could explain all this better, and/or observe the results
> on real jitter buffers in real apps.
>
> > In the above 3 cases, hash collisions permute the result. Cake and
> > fq_pie have a lot fewer collisions.
>
> Which is not necessarily a panacea either. Perfect flow isolation
> (cake) to hundreds of flows might in some cases be worse than
> suffering hash collisions (fq_codel) for some workloads. sch_fq and
> fq_pie have *perfect* flow isolation, and I worry about the effects of
> tons and tons of short flows (think ddos attacks) - I am comforted by
> collisions! I tend to think there is an ideal ratio of flows allowed
> without queue management versus available bandwidth that we don't know
> yet, and that for larger numbers of flows we should be inheriting more
> global environmental state (the state of the link and all queues) than
> we currently do when initializing both cake and fq_codel queues.
>
> Recently I did some tests of 450+ flows (details on the cake mailing
> list) against sch_fq, which got hopelessly buried (10000 packets in
> queue). cake and fq_pie did a lot better.
>
> > I am generally sanguine about this along the edge. From the internet,
> > packets cannot be easily classified, yet most edge networks have more
> > bandwidth from that direction and are thus able to fit WAY more flows
> > in under 10ms; and outbound, from the home or small business, some
> > classification can be effectively used in an X-tier shaper (or cake)
> > to ensure better priority (still with fair) queuing for this special
> > class of application - not that under most home workloads this is
> > an issue. We think. We really need to do more benchmarking of web and
> > dash traffic loads.
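(A toy illustration of that DRR degradation, with made-up numbers and
nothing like the real sch_fq_codel code: in the worst case a newly
arriving sparse voip packet waits behind roughly one quantum per
backlogged flow.)

    QUANTUM_BYTES = 1514        # typical fq_codel quantum: one full MTU
    LINK_BPS = 20_000_000       # assume a 20 Mbit/s uplink

    def worst_case_wait_ms(backlogged_flows):
        bits_ahead = backlogged_flows * QUANTUM_BYTES * 8
        return bits_ahead / LINK_BPS * 1000

    for flows in (5, 50, 450):
        print(flows, "flows ->", round(worst_case_wait_ms(flows), 1), "ms")
    # 5 -> 3.0 ms, 50 -> 30.3 ms, 450 -> 272.5 ms: with enough queuing
    # flows, even a sparse voip flow steadily accrues delay.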
> >> Simon
> >>
> >> Sent with AquaMail for Android
> >> http://www.aqua-mail.com
> >>
> >> On May 7, 2015 6:16:00 AM jb wrote:
> >>>
> >>> I thought that would be more sane too. I see it mentioned online
> >>> that PDV is a gaussian distribution (around the mean), but it looks
> >>> more like half a bell curve, with most numbers near the lowest
> >>> latency seen, and getting progressively worse with less frequency.
> >>> At least for DSL connections on good ISPs that scenario seems more
> >>> frequent. You "usually" get the best latency and "sometimes" get
> >>> spikes or fuzz on top of it.
> >>>
> >>> By the way, after I posted I discovered Firefox has an issue with
> >>> this test, so I had to block it with a message; my apologies if
> >>> anyone wasted time trying it with FF. Hopefully I can figure out why.
> >>>
> >>> On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson wrote:
> >>>>
> >>>> On Thu, 7 May 2015, jb wrote:
> >>>>
> >>>>> There is a web socket based jitter tester now. It is very early
> >>>>> stage but works ok.
> >>>>>
> >>>>> http://www.dslreports.com/speedtest?radar=1
> >>>>>
> >>>>> So the latency displayed is the mean latency from a rolling 60
> >>>>> sample buffer. Minimum latency is also displayed, and the +/- PDV
> >>>>> value is the mean difference between sequential pings in that same
> >>>>> rolling buffer. It is quite similar to the std. dev. actually
> >>>>> (not shown).
> >>>>
> >>>> So I think there are two schools here: either you take the average
> >>>> and display +/- from that, or (as I prefer) you take the lowest of
> >>>> the last 100 samples (or something) and then display PDV from that
> >>>> "floor" value, ie PDV can't ever be negative, it can only be
> >>>> positive.
> >>>>
> >>>> Apart from that, the above multi-place RTT test is really really
> >>>> nice, thanks for doing this!
> >>>>
> >>>> --
> >>>> Mikael Abrahamsson    email: swmike@swm.pp.se

--
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
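PS: for anyone wanting to experiment, here are the two PDV definitions
from the quoted discussion side by side, as a rough Python sketch (names
made up; this is not the test's actual code):

    # Original column: mean difference between sequential pings in a
    # rolling buffer (quite similar to the standard deviation).
    def pdv_sequential_ms(pings_ms):
        diffs = [abs(b - a) for a, b in zip(pings_ms, pings_ms[1:])]
        return sum(diffs) / len(diffs)

    # Mikael's suggestion (now used above): mean increase over the lowest
    # sample in the buffer, so the value can never be negative.
    def pdv_floor_ms(pings_ms):
        floor = min(pings_ms)
        return sum(p - floor for p in pings_ms) / len(pings_ms)

    pings = [42, 45, 43, 60, 44, 43, 42, 47]
    print(pdv_sequential_ms(pings), pdv_floor_ms(pings))  # ~6.43 and 3.75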