From: dpreed@reed.com
Date: Fri, 3 Jan 2014 13:17:44 -0500 (EST)
To: "Fred Baker (fred)" <fred@cisco.com>
Cc: bloat <bloat@lists.bufferbloat.net>, aqm@ietf.org, curtis@ipv6.occnc.com
Subject: Re: [Bloat] [e2e] [aqm] What is a good burst? -- AQM evaluation guidelines

End-to-end queueing delay (the aggregate of delays in all queues except the queues in the endpoints themselves) should typically never (where "never" means for more than 99.9% of any hour-long period) exceed 200 msec in the worst case, and if at all possible never exceed 100 msec, in networks capable of carrying more than 1 Mbit/sec to and from endpoints (I would call those high-bitrate nets, the stage up from "dialup" networks).

There are two reasons for this:

1) Round-trip "RPC" response times above 100 msec become unreasonable for interactive applications.

2) Flow control at the source that stanches the entry of data into the network (which can mean either switching media codecs or just pushing back on the application rate, whether driven by the receiver or the sender, both of which are common) must respond quickly, lest more packets be dumped into the network and sustain the congestion.

Fairness is a different axis, but I strongly suggest that there are other ways to achieve approximate fairness of any desired type without building up queues in routers. It's perfectly reasonable to remember (in all the memory that *would otherwise have caused trouble by holding packets rather than discarding them*) the source/dest information and sizes of recently processed (forwarded or discarded) packets. This information takes less space than the packets themselves, of course! It can even be further compressed by "coding or hashing" techniques. Such live data about *recent behavior* is all you need for fairness in balancing signaling back to the source.
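As a rough illustration of the kind of bookkeeping meant here (nothing in this thread specifies it; the names, the single hash table, and the decay constant below are all made up for the sketch), a router could keep decayed byte counts keyed by a hash of the source/destination pairs it has recently forwarded or dropped, and use a flow's share of those recent bytes to decide which sources get congestion signals, without ever holding the packets themselves:

    import hashlib
    import random

    class RecentFlowMemory:
        """Decayed byte counts for recently processed (forwarded or dropped) packets."""

        def __init__(self, slots=4096, decay=0.9):
            self.counts = [0.0] * slots   # per-hashed-flow recent byte counts
            self.total = 0.0              # recent bytes across all flows
            self.decay = decay            # applied periodically by age()

        def _slot(self, src, dst):
            digest = hashlib.blake2b(f"{src}|{dst}".encode(), digest_size=8).digest()
            return int.from_bytes(digest, "big") % len(self.counts)

        def record(self, src, dst, size_bytes):
            # Remember only the metadata of a packet just forwarded or discarded.
            self.counts[self._slot(src, dst)] += size_bytes
            self.total += size_bytes

        def age(self):
            # Decay history (call every few milliseconds) so only *recent* behavior counts.
            self.counts = [c * self.decay for c in self.counts]
            self.total *= self.decay

        def share(self, src, dst):
            # Approximate fraction of recent bytes attributable to this flow.
            return self.counts[self._slot(src, dst)] / self.total if self.total else 0.0

    def should_signal(mem, src, dst, active_flows):
        # Signal (drop or ECN-mark) flows roughly in proportion to how far they
        # exceed an equal share of the recently observed bytes.
        fair = 1.0 / max(active_flows, 1)
        excess = mem.share(src, dst) - fair
        return excess > 0 and random.random() < min(1.0, excess / fair)

A production version would presumably use several hash functions (count-min style) to bound collision error and age the counters incrementally, but the state stays a few bytes of metadata per entry rather than buffered packets.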
If all of the brainpower on this list cannot take that previous paragraph and expand it into the solution I am talking about, I would be happy (at my consulting rates, which are quite high) to write the code for you. But I have a day job that involves low-level scheduling and queueing work in a different domain of application.

Can we please get rid of the nonsense that implies that the only information one can have at a router/switch is the set of packets that are clogging its outbound queues? Study some computer algorithms that provide memory of recent history... and please, please, please stop insisting that intra-network queues should build up for any reason whatsoever other than instantaneous transient burstiness of convergent traffic. They should persist as briefly as possible, and not be sustained for some kind of "optimum" throughput that can be gained by reframing the problem.


On Thursday, January 2, 2014 1:31am, "Fred Baker (fred)" <fred@cisco.com> said:

> On Dec 15, 2013, at 10:56 AM, Curtis Villamizar <curtis@ipv6.occnc.com>
> wrote:
>
> > So briefly, my answer is: as a WG, I don't think we want to go there.
> > If we do go there at all, then we should define "good AQM" in terms of
> > achieving a "good" tradeoff between fairness, bulk transfer goodput,
> > and bounded delay. IMHO sometimes vague is better.
>
> As you may have worked out from my previous comments in these threads, I agree
> with you. I don't think this can be nailed down in a universal sense. What can be
> described is the result in the network, in that delays build up that persist, as
> opposed to coming and going, and as a result applications don't work as well as
> they might - and at that point, it is appropriate for the network to inform the
> transport.
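One way to make the distinction Fred draws concrete -- delays that build up and persist versus delays that come and go -- is to signal the transport only when queueing delay has stayed above some target for a sustained interval, so transient bursts drain untouched. A minimal sketch (CoDel-like in spirit only; the 5 ms / 100 ms constants and the class name are illustrative assumptions, not anything agreed in this thread):

    TARGET_S = 0.005     # 5 ms standing-queue target (assumed value)
    INTERVAL_S = 0.100   # 100 ms persistence window (assumed value)

    class PersistentDelayDetector:
        """Distinguish a standing queue from a transient burst of convergent traffic."""

        def __init__(self):
            self.above_since = None   # time the delay first exceeded the target

        def update(self, now_s, queue_delay_s):
            # Returns True only when delay has stayed above TARGET_S for INTERVAL_S,
            # i.e. when it is time for the network to inform the transport.
            if queue_delay_s <= TARGET_S:
                self.above_since = None          # burst drained on its own: no signal
                return False
            if self.above_since is None:
                self.above_since = now_s         # start timing this excursion
                return False
            return (now_s - self.above_since) >= INTERVAL_S

Fed with periodic (timestamp, measured queueing delay) samples, update() stays quiet through a burst that drains within the interval and only returns True once the delay has genuinely persisted.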