From: "Richard Scheffenegger"
To: "Steve Davies"
Date: Fri, 4 Feb 2011 23:35:27 +0100
Subject: Re: [Bloat] Large buffers: deliberate optimisation for HTTP?

No.

----- Original Message -----
From: Steve Davies
To: bloat@lists.bufferbloat.net
Sent: Friday, February 04, 2011 8:08 PM
Subject: [Bloat] Large buffers: deliberate optimisation for HTTP?

Hi,

I'm a new subscriber and hardly a hardcore network programmer, but I have been working with IP for years and even have Douglas Comer's books...

I was thinking about this issue of excess buffers. It occurred to me that the large buffers could be a deliberate strategy to optimise HTTP-style traffic. Having 1/2 MB or so of buffering towards the edge does mean that a typical web page, images and all, can likely be "dumped" into those buffers en bloc (a rough delay calculation for that 1/2 MB figure is sketched at the end of this message).

Or maybe it's not so deliberate, but rather that testing has become fixated on "throughput" while the impact on latency and jitter has been ignored. If you have a spanking new Gb NIC, the first thing you do is try some scps and see how close to line speed you can get. And lots of buffering helps that test in the absence of real packet loss on the actual link.

Perhaps the reality is that our traffic patterns are not typical of broadband link usage, and that these large buffers, which mess up our usage pattern (interactive traffic mixed with bulk data), actually benefit the majority usage pattern, which is "chunky bursts".

Would you agree with my logic?
Steve

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
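For a rough sense of what "1/2 MB or so of buffering towards the edge" costs in latency: the worst-case queuing delay a full buffer adds is simply its size divided by the link's drain rate. The sketch below runs that arithmetic for a 512 KB buffer against a few illustrative link speeds; the specific rates are assumptions chosen for illustration, not figures taken from this thread.

# Back-of-the-envelope queuing delay added by a full buffer:
#     delay = buffer size in bits / link drain rate in bits per second
# The link rates below are illustrative assumptions, not numbers from the thread.

BUFFER_BYTES = 512 * 1024  # the "1/2 MB or so" of edge buffering discussed above

link_rates_bps = {
    "1 Mbit/s residential uplink": 1_000_000,
    "10 Mbit/s cable downlink": 10_000_000,
    "1 Gbit/s LAN": 1_000_000_000,
}

for name, rate in link_rates_bps.items():
    delay_ms = BUFFER_BYTES * 8 / rate * 1000
    print(f"{name}: a full buffer adds ~{delay_ms:.0f} ms of queuing delay")

At gigabit speeds the buffer drains in roughly 4 ms, so it never hurts a LAN benchmark, but on a 1 Mbit/s uplink the same buffer can hold about four seconds of queue. Beyond roughly one bandwidth-delay product, the extra buffering adds no throughput; it only delays everything queued behind the bulk transfer, which is why mixing interactive traffic with bulk data feels so bad.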