Date: Mon, 26 May 2014 15:02:13 +0200 (CEST)
From: Mikael Abrahamsson <swmike@swm.pp.se>
To: dpreed@reed.com
Cc: cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] Ubiquiti QOS

On Mon, 26 May 2014, dpreed@reed.com wrote:

> Len Kleinrock and his student proved that the "optimal" state for
> throughput in the internet is the 1-2 buffer case. It's easy to think
> this through...

Yes, but how do we achieve it? If you signal congestion while only a very
small amount of buffer is in use, TCP will back off and the buffer will
drain, meaning the link will be underutilized. That is great from an
interactivity point of view, but not so good for keeping the link at
capacity without incurring excessive buffering latency.

So you would like to see ECN congestion (CE) marks on all packets as soon
as they're the 3rd (or deeper) packet in the buffer? Or, if a packet isn't
ECN-capable, you'd just drop it? I doubt this would create a beneficial
system for the end user; it sounds like it focuses too much on
interactivity and too little on throughput.

I just don't buy your statement that adding buffers won't increase
throughput. If you're optimizing for throughput, you let a single session
use 1 second of buffering, so you know for a fact that the link is always
used at 100%. That totally kills interactivity, but it's still more
throughput-efficient than a 2-packet-deep buffer, where you're very likely
to drain those two packets and then have nothing left in the buffer,
meaning the link will be underutilized.

So I'd agree that a lot of the time you need very little buffering, but
stating that you need a buffer 2 packets deep regardless of speed, well, I
don't see how that would work.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se
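
[Editorial illustration] For concreteness, here is a minimal Python sketch of the policy being questioned above: a FIFO that treats the buffer as congested once it already holds 2 packets, CE-marking any ECN-capable packet that would sit 3rd or deeper and dropping the rest. The class, names and threshold are assumptions made for this example only, not any existing qdisc implementation; dequeueing at line rate will drain those two packets almost immediately, which is the underutilization concern raised in the reply.

    from collections import deque

    CONGESTION_THRESHOLD_PKTS = 2   # depth at which congestion is signalled


    class Packet:
        def __init__(self, ecn_capable: bool):
            self.ecn_capable = ecn_capable   # ECT set by the sender
            self.ce_marked = False           # CE mark applied by this hop


    class TinyQueue:
        def __init__(self):
            self.pkts = deque()

        def enqueue(self, pkt: Packet) -> bool:
            """Return True if the packet was queued, False if it was dropped."""
            if len(self.pkts) < CONGESTION_THRESHOLD_PKTS:
                self.pkts.append(pkt)        # 1st or 2nd packet: no signal
                return True
            if pkt.ecn_capable:
                pkt.ce_marked = True         # 3rd or deeper: mark instead of drop
                self.pkts.append(pkt)
                return True
            return False                     # non-ECT traffic is dropped

        def dequeue(self):
            return self.pkts.popleft() if self.pkts else None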