From mboxrd@z Thu Jan 1 00:00:00 1970
From: richard
To: esr@thyrsus.com
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] First draft of complete "Bufferbloat And You" enclosed.
Date: Tue, 08 Feb 2011 10:31:56 -0800
Message-Id: <1297189916.26293.29.camel@amd.pacdat.net>
In-Reply-To: <20110208181811.GD7744@thyrsus.com>
References: <20110205132305.GA29396@thyrsus.com> <20110208181811.GD7744@thyrsus.com>

On Tue, 2011-02-08 at 13:18 -0500, Eric Raymond wrote:
> Justin McCann:
> > This may be intentional, but the text launches into an explanation of
> > why bufferbloat is bad without concisely explaining what it is --- you
> > have to read the whole first two sections before it's very clear.
>
> Not intentional, exactly, but it's inherent. The reader *can't* get what
> bufferbloat is.
>
> > The second of the three main tactics states, "Second, we can decrease
> > buffer sizes. This cuts the delay due to latency and decreases the
> > clumping effect on the traffic." Latency *is* delay; perhaps "cuts the
> > delay due to buffering" or "due to queueing" would be better, if more
> > tech-ese.
>
> Good catch, I'll fix.
>
> > I've re-read through the Bell Labs talk, and some of the earlier
> > posts, but could someone explain the "clumping" effect? I understand
> > the wild variations in congestion windows ("swing[ing] rapidly and
> > crazily between emptiness and overload"), but clumping makes me think
> > of closely spaced packet intervals.
>
> It's intended to. This is what I got from jg's talk, and I wrote the
> SOQU scenario to illustrate it. If my understanding is incorrect (and
> I see that you are saying it is) one of the real networking people
> here needs to whack me with the enlightenment stick.
>
> The underlying image in my just-so stories about roads and parking lots
> is that packet flow coming in smooth on the upstream side of a buffer
> gets turned into a buffer fill, followed by a burst of packets as it
> overflows, followed by more data coming into the buffer, followed by
> overflow... repeat.

My electronics (analog, tube, etc.) background makes me view a lot of
this as "tuned" circuits: capacitors, resistors, coils, etc.

If I read things correctly, there are a number of different ways buffers
are used and abused. They're all FIFO (I hope; somebody disabuse me of
this idea if they have evidence to the contrary), but how they deal with
high/low water marks seems to make a difference. Actual processor
capabilities (and overall system load) may also play a role: the embedded
stack processor and/or the system's CPU, including things like interrupt
load, bus mastering, DMA, etc.

If the interface is capable of full bandwidth in and out at the same
time, and high/low water mark detection is quick, then I'd think this is
a circuit with little tendency to oscillate at any low, detectable
frequency.

If the interface is not capable of full bandwidth in and out at the same
time, and/or the detection of (or settings for) the high/low water marks
in the buffer are screwy, then the system will oscillate at a low
frequency and you'll get clumping.
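To make that concrete, here's a toy simulation (Python; every number in
it is made up for illustration, so treat it as a sketch, not a model of
any real NIC) of a FIFO whose drain is gated by high/low water marks.
Smooth arrivals go in; bursts come out:

# Toy FIFO with water-mark-gated draining. Arrivals are perfectly
# smooth; the output "clumps" because the device only drains in a
# burst between the high and low water marks. Parameters are invented.
ARRIVALS_PER_TICK = 2   # smooth inflow, packets per tick
DRAIN_PER_TICK = 5      # burst outflow while the drain is active
HIGH_WATER = 20         # start draining when fill reaches this
LOW_WATER = 4           # stop draining when fill falls to this

fill = 0
draining = False
for tick in range(40):
    fill += ARRIVALS_PER_TICK          # smooth upstream traffic
    if fill >= HIGH_WATER:
        draining = True                # buffer fills up: burst begins
    sent = 0
    if draining:
        sent = min(DRAIN_PER_TICK, fill)
        fill -= sent
        if fill <= LOW_WATER:
            draining = False           # burst ends; refill begins
    print(f"tick {tick:2d}: fill={fill:2d} sent={sent}")

Run it and "sent" sits at 0 for eight or nine ticks while the buffer
fills, then fires in a clump of 5s for about five ticks, over and over:
a low-frequency oscillation whose period is set by the water marks and
the rate mismatch. That's the clumping I mean above.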
I'd expect to see this on cheap Gbit Ethernet cards on a PCI bus (lots
of interrupts to the main CPU) as the system's load rises, for example;
that's one of the reasons I've stopped using them, even on lightly
loaded links.

richard
-- 
Richard C. Pitt                  Pacific Data Capture
rcpitt@pacdat.net                604-644-9265
http://digital-rag.com           www.pacdat.net
PGP Fingerprint: FCEF 167D 151B 64C4 3333 57F0 4F18 AF98 9F59 DD73