[Bloat] [aqm] [e2e] What is a good burst? -- AQM evaluation guidelines

Dave Taht dave.taht at gmail.com
Thu Jan 30 14:27:16 EST 2014


On Fri, Jan 3, 2014 at 10:17 AM,  <dpreed at reed.com> wrote:
> End-to-end queueing delay (aggregate of delays in all queues except for the
> queues in the endpoints themselves) should typically never (never means
> <99.9% of any hour-long period)  exceed 200 msec. in the worst case, and if
> at all possible never exceed 100 msec.

Latency requirements are a budget that can get spent at multiple levels in
a stack. If you were to say that total latency/propagation delay
should not exceed lightspeed requirements + a few ms, I'd agree with
you harder than I already do.

http://gettys.wordpress.com/2013/07/10/low-latency-requires-smart-queuing-traditional-aqm-is-not-enough/

I also generally prefer predictable latency (low jitter), not just from a
human-factors perspective but as a means of doing better congestion
detection and avoidance.

> in networks capable of carrying
> more than 1 Mbit/sec to and from endpoints (I would call that high-bitrate
> nets, the stage up from "dialup" networks).

Here are some recent tests of fq_codel on Comcast IPv6 over a coast-to-coast
path (California to Maine; only *65ms* inherent delay - quite good):

http://snapon.lab.bufferbloat.net/~cero2/jimreisert/results.html

Utilization was ~100%, with total induced latency under 10ms (some of this
coming from the rate-limited upstream and the HZ=250 CPU scheduler delay
for the rate limiter, not from the core algorithms).
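The induced-latency figure is just the difference between the loaded and idle
RTT; a trivial sketch of the arithmetic (the loaded RTT here is an assumed
illustrative value, not a number from the test):

```python
# Induced (queueing) latency = RTT measured under load minus the idle RTT.
idle_rtt_ms = 65.0    # inherent coast-to-coast path delay from the test above
loaded_rtt_ms = 74.0  # assumed RTT while saturating the link, for illustration

induced_ms = loaded_rtt_ms - idle_rtt_ms
print(induced_ms)  # 9.0 -> under the ~10ms observed with fq_codel
```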

The results with PIE or CoDel alone are considerably worse (40-60ms of
induced delay), but still not bad compared to the alternative of no
AQM/packet scheduling, as shown in the 2nd and 3rd graphs.

I'm happy. There is not a lot of slop left to fix in fq_codel, and
Amdahl's law takes queueing delay out of the latency equation for
applications...

well, aside from getting the CMTSes/DSLAMs/modems fixed... :(

I would additionally distinguish between 100Mbit networks and 1Gbit+
networks (as well as wireless/WiFi/cable), where bursty technologies seem
to be needed (so long as the bursts stay below human-perceptible latency
thresholds).

I have a test lying around somewhere of an Intel NUC running Linux 3.11 on
a gigabit LAN, comparing pfifo_fast (the default qdisc) against fq_codel,
and against fq_codel with TSO/GSO/UFO offloads disabled. With pfifo_fast
you end up with one stream getting more throughput than the others and
about 8ms of latency; with fq_codel, 2.2ms; and without TSO/GSO you can't
saturate the medium.

I have some hope that the new fq scheduler and some TSO fixes will make the
results better; I think 2ms of induced queue latency on a gigabit LAN is a lot.
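For reference, the knobs involved in that comparison look roughly like this
(a sketch, assuming an interface named eth0 and root privileges; ethtool
flag support varies by driver):

```shell
# Replace the default pfifo_fast qdisc with fq_codel on eth0
tc qdisc replace dev eth0 root fq_codel

# Disable TSO/GSO/UFO segmentation offloads, which hand the NIC
# multi-packet superpackets that defeat fine-grained scheduling
ethtool -K eth0 tso off gso off ufo off

# Inspect the result and the per-qdisc drop/mark statistics
tc -s qdisc show dev eth0
```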

> There are two reasons for this:
> 1) round-trip "RPC" response time for interactive applications > 100 msec.
> become unreasonable.

Add in the whole list of human-factors issues noted in the URL above.

> 2) flow control at the source that stanches the entry of data into the
> network (which can be either switching media codecs or just pushing back on
> the application rate - whether it is driven by the receiver or the sender,
> both of which are common) must respond quickly, lest more packets be dumped
> into the network that sustain congestion.

A big problem is predictability. Recently there has been something of a push
to get retries and retransmits on WiFi to "give up" at 50ms of induced
latency, instead of 250ms (which some vendors try to do).

We have an inherent quantum problem of 1-4ms per txop in present-day
WiFi that seems to make 50ms a barely achievable outer limit in the
presence of multiple stations.

> Fairness is a different axis, but I strongly suggest that there are other
> ways to achieve approximate fairness of any desired type without building up
> queues in routers.  It's perfectly reasonable to remember (in all the memory
> that *would otherwise have caused trouble by holding packets rather than
> discarding them*) the source/dest information and sizes of recently
> processed (forwarded or discarded) packets.  This information takes less
> space than the packets themselves, of course!  It can even be further
> compressed by "coding or hashing" techniques.  Such live data about *recent
> behavior* is all you need for fairness in balancing signaling back to the
> source.

I concur. Long on my todo list for *codel has been gaining the ability to
pass drop/mark/current-bandwidth information on packets up to userspace,
where it could be used to make more intelligent routing decisions and/or
feed more information back to the senders. I don't think "source quench"
is going to work, though...

I am encouraged by recent work in openflow in this area.

> If all of the brainpower on this list cannot take that previous paragraph
> and expand it to implement the solution I am talking about, I would be happy
> (at my consulting rates, which are quite high) to write the code for you.
> But I have a day job that involves low-level scheduling and queueing work in
> a different domain of application.
>
>
>
> Can we please get rid of the nonsense that implies that the only information
> one can have at a router/switch is the set of packets that are clogging its
> outbound queues?  Study some computer algorithms that provide memory of
> recent history....

Multiple devices are still constrained by memory, by CPU, and by the need
for low latency inside those devices. At 10Gbit you have nanoseconds to
make decisions in; on 802.11ac, microseconds.

Thus we end up with dedicated hardware and software doing these jobs,
which have trouble doing anything in-band, or even out-of-band.

That said, software routers are now doing hundreds of gigabits, 802.11ac
devices contain a lot of hardware assist and a dedicated CPU (but closed
firmware), and the future looks bright for smarter hardware.

btw:

We have (at least temporarily) hit a performance wall on the hardware we
use in CeroWrt - not in the AQM algorithms, but in the software rate
limiter, which currently peaks at about 60Mbits. Something faster than the
current HTB algorithm is needed... (suggestions?) (or a switch to faster
hardware than the 7-year-old chipset cero uses).
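For context, the shaper in question is roughly an HTB rate limiter with
fq_codel as the leaf qdisc; a minimal sketch of that shape (interface name,
rate, and class ids are assumed here, not CeroWrt's actual script):

```shell
# Root HTB with a single rate-limited class at the 60Mbit ceiling...
tc qdisc add dev ge00 root handle 1: htb default 11
tc class add dev ge00 parent 1: classid 1:11 htb rate 60mbit ceil 60mbit

# ...with fq_codel doing the queue management underneath the shaper
tc qdisc add dev ge00 parent 1:11 fq_codel
```

Every packet takes a trip through HTB's token/borrowing machinery, which is
where the CPU time goes as rates climb.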

> and please, please, please stop insisting that
> intra-network queues should build up for any reason whatsoever other than
> instantaneous transient burstiness of convergent traffic.  They should
> persist as briefly as possible, and not be sustained for some kind of
> "optimum" throughput that can be gained by reframing the problem.

Well, I outlined above that bursts are needed in some technologies to keep
them operating at good throughput. Thus BQL for Linux ethernet, and
similar techniques under consideration for WiFi.
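BQL's per-tx-queue state is visible in sysfs; for example (path assumed for
an eth0 with a single tx queue and a BQL-capable driver):

```shell
# Current dynamically computed byte limit for the tx ring
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit

# Cap the limit by hand if the driver's dynamic value runs too deep
echo 30000 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_max
```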

But I largely agree. Anybody want to aim for 5ms of queue delay on
intercontinental links?

>
> On Thursday, January 2, 2014 1:31am, "Fred Baker (fred)" <fred at cisco.com>
> said:
>
>>
>> On Dec 15, 2013, at 10:56 AM, Curtis Villamizar <curtis at ipv6.occnc.com>
>> wrote:
>>
>> > So briefly, my answer is: as a WG, I don't think we want to go there.
>> > If we do go there at all, then we should define "good AQM" in terms of
>> > acheving a "good" tradeoff between fairness, bulk transfer goodput,
>> > and bounded delay. IMHO sometimes vague is better.
>>
>> As you may have worked out from my previous comments in these threads, I
>> agree
>> with you. I don't think this can be nailed down in a universal sense. What
>> can be
>> described is the result in the network, in that delays build up that
>> persist, as
>> opposed to coming and going, and as a result applications don't work as
>> well as
>> they might - and at that point, it is appropriate for the network to
>> inform the
>> transport.
>>
>
>
> _______________________________________________
> aqm mailing list
> aqm at ietf.org
> https://www.ietf.org/mailman/listinfo/aqm
>



-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
