From: Steve Davies
To: bloat@lists.bufferbloat.net
Date: Fri, 4 Feb 2011 21:08:48 +0200
Subject: [Bloat] Large buffers: deliberate optimisation for HTTP?

Hi,

I'm a new subscriber and hardly a hardcore network programmer, but I have been working with IP for years and even have Douglas Comer's books...

I was thinking about this issue of excess buffers. It occurred to me that the large buffers could be a deliberate strategy to optimise HTTP-style traffic.
Having 1/2 MB or so of buffering towards the edge does mean that a typical web page, images and all, can likely be "dumped" into those buffers en bloc.

Or maybe it's not so deliberate, but simply that testing has become fixated on throughput while the impact on latency and jitter has been ignored. If you have a spanking new Gb NIC, the first thing you do is try some scps and see how close to line speed you can get. And lots of buffering helps that test in the absence of real packet loss on the actual link.

Perhaps the reality is that our traffic patterns are not typical of broadband link usage, and that these large buffers that mess up our usage pattern (interactive traffic mixed with bulk data) actually benefit the majority usage pattern, which is "chunky bursts".

Would you agree with my logic?

Steve
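(As a back-of-the-envelope aside on what 1/2 MB of buffering costs in delay: the sketch below computes the queueing delay a full 512 KB buffer adds at a few link rates. The rates are illustrative assumptions, not figures from this post.)

```python
# Queueing delay added by a completely full 512 KB buffer at various
# link rates. Rates are illustrative assumptions for the calculation.
BUFFER_BYTES = 512 * 1024  # "1/2 MB or so" of buffering

for label, mbps in [("1 Mbit/s uplink", 1), ("10 Mbit/s", 10), ("1 Gbit/s", 1000)]:
    # Time to drain the buffer = bits queued / line rate in bits per second.
    delay_s = BUFFER_BYTES * 8 / (mbps * 1_000_000)
    print(f"{label}: {delay_s * 1000:.1f} ms of added delay when the buffer is full")
```

On a slow residential uplink the same buffer that helps a throughput test adds seconds of latency, which is exactly the interactive-traffic problem described above.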
