From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <5040F143.2010108@freedesktop.org>
Date: Fri, 31 Aug 2012 10:15:47 -0700
From: Jim Gettys
Organization: Bell Labs
To: Jonathan Morton
Cc: codel@lists.bufferbloat.net
References: <1346396137.2586.301.camel@edumazet-glaptop> <5040DDE9.7030507@hp.com> <5040E8EE.2070900@freedesktop.org> <187CE1B3-6772-4E1C-A983-3AEC637C04FE@gmail.com>
In-Reply-To: <187CE1B3-6772-4E1C-A983-3AEC637C04FE@gmail.com>
Subject: Re: [Codel] fq_codel : interval servo
On 08/31/2012 09:49 AM, Jonathan Morton wrote:
> On 31 Aug, 2012, at 7:40 pm, Jim Gettys wrote:
>
>> On 08/31/2012 08:53 AM, Rick Jones wrote:
>>> On 08/30/2012 11:55 PM, Eric Dumazet wrote:
>>>> On locally generated TCP traffic (host), we can override the 100 ms
>>>> interval value using the more accurate RTT estimation maintained by the
>>>> TCP stack (tp->srtt).
>>>>
>>>> A datacenter workload benefits from the shorter feedback loop (say, if
>>>> the RTT is below 1 ms, we can react to congestion 100 times faster).
>>>>
>>>> Idea from Yuchung Cheng.
>>> Mileage varies, of course, but what are the odds of a datacenter
>>> end-system's NIC(s) being the bottleneck point?
>> Ergo my comment about Ethernet flow control finally being possibly more
>> help than hurt; clearly, if the bottleneck is kept in the sending host
>> more of the time, it would help.
>>
>> I certainly don't know how often end-systems' NICs are the bottleneck
>> today without flow control; maybe a datacenter type might have insight
>> into that.
> Consider a fileserver with ganged 10GE NICs serving an office full of GigE workstations.
>
> At 9am on Monday, everyone arrives and switches on their workstation, which (because the org has made them diskless) causes pretty much the same set of data to be sent to each in rapid succession. The fileserver satisfies all but the first of these requests from cache, so in theory it can saturate all of its NICs. In that case a queue should exist even if there are no downstream bottlenecks.
>
> Alternatively, one floor at a time boots up at once - say the call-centre starts up at 7am, the developers stumble in at 10am, and the management types wander in at 11:30.
:-)

> Then the bottleneck is the single 10GE link to each floor, rather than the fileserver's own NICs.
>
> That's all theoretical, of course - I've never built a datacentre network, so I don't know how it's done in practice.
>
> - Jonathan Morton

BTW, there is one very common case we all share that will benefit. Think of your home network: you have 1GE (or maybe only 100 Mbps) Ethernet to your other machines. What's more, consumer Ethernet switches do do flow control, whether you want them to or not. So you routinely have queues building, even on Ethernet, in those environments today.

Jim

(I wasn't even aware of Ethernet flow control's existence until two years ago, when I wanted to understand the funny frames Wireshark was reporting on my home network. Then I read up on it...)