From: dpreed@reed.com
To: "Mikael Abrahamsson" <swmike@swm.pp.se>
Cc: cerowrt-devel@lists.bufferbloat.net
Date: Mon, 26 May 2014 10:01:19 -0400 (EDT)
Subject: Re: [Cerowrt-devel] Ubiquiti QOS

On Monday, May 26, 2014 9:02am, "Mikael Abrahamsson" <swmike@swm.pp.se> said:

> So, I'd agree that a lot of the time you need very little buffers, but
> stating you need a buffer of 2 packets deep regardless of speed, well, I
> don't see how that would work.
>

My main point is that looking to increased buffering to achieve throughput while maintaining latency is not that helpful, and often causes more harm than good. There are alternatives to buffering that can be managed more dynamically (managing bunching and something I didn't mention - spreading flows or packets within flows across multiple routes when a bottleneck appears - are some of them).
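
As one concrete illustration of what "managing bunching" could look like, here is a minimal sender-side pacing sketch. It is my own toy example, not something from this thread or any particular stack; the Pacer class, the fixed rate estimate, and the packet sizes are all assumptions chosen for illustration. The idea is that packets spaced out at roughly the bottleneck rate never arrive as a burst that a deep queue would have to absorb.

    import time

    class Pacer:
        """Toy sender-side pacer: space transmissions at an estimated
        bottleneck rate so packets leave smoothly instead of in bursts.
        A real implementation would keep updating rate_bps from ack
        timing; here it is a fixed, illustrative number."""

        def __init__(self, rate_bps):
            self.rate_bps = rate_bps            # current bottleneck-rate estimate
            self.next_send = time.monotonic()   # earliest time the next packet may leave

        def send(self, packet, transmit):
            now = time.monotonic()
            if now < self.next_send:
                time.sleep(self.next_send - now)   # hold the packet at the sender...
            transmit(packet)                        # ...rather than queueing it downstream
            # next departure is one serialization time later at the estimated rate
            self.next_send = max(now, self.next_send) + len(packet) * 8 / self.rate_bps

    # usage: pace 1500-byte packets against an assumed 10 Mb/s bottleneck estimate
    pacer = Pacer(rate_bps=10_000_000)
    for _ in range(5):
        pacer.send(b"x" * 1500, transmit=lambda p: None)  # stand-in for the real send path

The smoothing happens at the sender on its own clock, so it can be adjusted packet by packet as conditions change, which is the sense in which it can be "managed more dynamically" than a standing queue at the bottleneck.
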
I would look to queue minimization rather than "queue management" (which implies queues are often long) as a goal, and think harder about the end-to-end problem of minimizing total end-to-end queueing delay while maximizing throughput.
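
To make "total end-to-end queueing delay" concrete, here is a back-of-the-envelope sketch; the hop rates and standing-queue sizes are numbers I made up for illustration. The delay a queue adds at a hop is just queued bytes divided by that hop's rate, so the end-to-end figure is a sum over hops, dominated by wherever bytes sit relative to link speed.

    # Toy model: each hop is (link rate in bits/s, standing queue in bytes).
    # All values are illustrative; 30 kB is twenty 1500-byte packets.
    path = [
        (1_000_000_000, 0),        # 1 Gb/s core hop, no standing queue
        (100_000_000, 30_000),     # 100 Mb/s hop with ~20 packets queued
        (10_000_000, 30_000),      # 10 Mb/s access hop with the same ~20 packets queued
    ]

    for rate, q in path:
        print(f"{rate/1e6:7.1f} Mb/s hop: {q*8/rate*1000:6.2f} ms of queueing delay")
    total = sum(q * 8 / rate for rate, q in path)
    print(f"end-to-end queueing delay: {total*1000:.2f} ms")

The same twenty queued packets cost about 2.4 ms at 100 Mb/s but 24 ms at 10 Mb/s, which is why a buffer depth fixed in packets means very different delays at different speeds, and why shrinking queues (rather than merely managing large ones) is the lever on end-to-end delay.
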
It's clearly a false tradeoff between throughput and latency - in the IP framework, there is no such tradeoff at the operating point. There may be such a tradeoff for certain specific implementations of TCP, but that's not fixed in stone.
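
One way to see why the tradeoff is false at the operating point (a toy fluid-model calculation of my own, not a claim about any particular TCP; the rate, RTT, and packet size are assumptions): once enough data is in flight to cover the bandwidth-delay product, throughput is pinned at the bottleneck rate, and every additional in-flight byte shows up purely as queueing delay.

    # Toy fluid model of a single 10 Mb/s bottleneck with a 20 ms base RTT.
    rate_bps, base_rtt = 10_000_000, 0.020
    bdp = rate_bps * base_rtt / 8                    # bytes needed to keep the pipe full

    for inflight in (0.5 * bdp, 1.0 * bdp, 2.0 * bdp, 4.0 * bdp):
        queue = max(0.0, inflight - bdp)             # excess in flight sits in a queue
        throughput = min(rate_bps, inflight * 8 / base_rtt)
        rtt = base_rtt + queue * 8 / rate_bps
        print(f"inflight={inflight/1500:5.1f} pkts  "
              f"throughput={throughput/1e6:5.2f} Mb/s  rtt={rtt*1000:5.1f} ms")

Below the bandwidth-delay product, throughput rises with no added delay; above it, throughput stays at 10 Mb/s while delay just keeps growing. The knee is the operating point, and past it there is nothing left to trade.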