From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dave Taht
To: Bob Briscoe
Cc: bloat
Date: Wed, 11 Jan 2012 08:26:06 +0100
In-Reply-To: <201201090538.q095cYa4031441@bagheera.jungle.bt.co.uk>
Subject: Re: [Bloat] What is fairness, anyway? was: Re: finally... winning on wired!
On Mon, Jan 9, 2012 at 6:38 AM, Bob Briscoe wrote:

> Dave,
> You're conflating removal of standing queues with bandwidth allocation. The
> former is a problem in HGs and hosts. The latter isn't a problem in HGs and
> hosts.

I've been trying to understand your point here more fully.

1) Removal of standing queues is a problem in HGs and hosts

On the host side, the standing queue problem is something that happens often
in wireless in particular. Additionally, you ship packets around in
aggregates, and those aggregates can be delayed, lost, or rescheduled. FQ
reduces the damage done when packets are being bulk-shipped in this way.

http://www.teklibre.com/~d/bloat/ping_log.ps

(This graph also shows that the uncontrolled device driver queue depth
totals about 50ms in this case.)

In the benchmarks I've been running against wireless, even on fairly light
loads, FQ reduces bursty packet loss, TCP resets, and the like. Statistically
it's difficult to 'see', and I'm trying to come up with better methods to do
so besides double-blind A/B testing and *most importantly* trying to
convince more people to discard their biases and actually try the code. Or
take a look at some packet captures.

As for the AP side, you have both a bandwidth allocation and an FQ problem
with wireless, compounded by the packet aggregation problem.

Still, a big problem in either wireless case is the need to expire old packets
and to manage the depth of the queue based on the bandwidth actually
available between the two devices at that instant in time. Otherwise you get
nonsense like 10+ second ping times.

So as for managing the overall length of the standing queues,
conventional AQM techniques, such as RED, Blue, etc.,
apply; but as for coping with the bursty nature of wireless in particular
(and TSO'd streams), FQ helps break up the container-loads into manageable
pieces.

2) Bandwidth allocation isn't a problem in HGs and hosts

On hosts, on wired, it is not a problem. On wireless, see above.

On home gateways, which run uplinks at anywhere between 128KB/sec in parts
of the world to 1Mbit in others, with 4Mbit fairly typical on cable, it's a
huge problem.

Regardless of any level of queue management as applied above (FQ and AQM),
the only decent way known to deal with 'bufferbloat' on bad devices beyond
the HG is to limit your own bandwidth to just below what you've measured the
messed-up edge provider as actually providing.... and manage it from there
across the needs of your site, where various AQM and FQ technologies can
make a dent in your own problems.

So perhaps I misunderstood your point here?

Certainly the use model of the internet has changed significantly, and TCP
is in dire need of viable complementary protocols such as ledbat, etc. I
also happen to like hipl, and enjoy, but am befuddled by, the ongoing ccn
work.

And certainly I look forward to seeing fewer edge devices misbehaving with
excessive buffering and lack of AQM.

I'd like, in particular, a contractual model - I mean, you are typically
buying 'x bandwidth' as part of your ISP contract - made available to
correctly and automatically provision downstream devices. Something as
simple as an extension to DHCP would be nice, or something like parsable
data from 'http://whatsmydarnbandwidth.myisp.com' would help.

Having to infer the available bandwidth and amount of buffering with tools
such as shaperprobe is useful, but a poor second to a contractual model for
a baseline, and to tighter feedback loops for ongoing management.

-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net
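[Editor's illustration] The "limit your own bandwidth to just below what the
edge provider actually delivers, then manage the queue yourself" approach in
the email above can be sketched as follows. This is a minimal, hypothetical
helper, not anything from the thread: the 85% shaping margin, the `eth0`
interface name, and the 4Mbit uplink figure are all assumptions for
illustration. It derives a shaping rate and emits the classic Linux `tc`
commands (HTB for the rate limit, SFQ for per-flow queueing behind it) that
would apply it.

```python
# Sketch: shape the uplink to a fraction of its measured rate so the
# standing queue builds in the home gateway (where FQ/AQM can manage it)
# rather than in the ISP's bloated modem buffer.
# The margin, device name, and rate below are illustrative assumptions.

def shaped_rate_kbit(measured_kbit, margin=0.85):
    """Return a shaping rate just below the measured uplink rate."""
    return int(measured_kbit * margin)

def tc_commands(dev, measured_kbit):
    """Emit tc commands: an HTB rate limiter with an SFQ leaf qdisc."""
    rate = shaped_rate_kbit(measured_kbit)
    return [
        f"tc qdisc add dev {dev} root handle 1: htb default 10",
        f"tc class add dev {dev} parent 1: classid 1:10 "
        f"htb rate {rate}kbit ceil {rate}kbit",
        f"tc qdisc add dev {dev} parent 1:10 handle 10: sfq perturb 10",
    ]

if __name__ == "__main__":
    # Assume a cable uplink measured at ~4Mbit (4000 kbit/s).
    for cmd in tc_commands("eth0", 4000):
        print(cmd)
```

Shaping below the measured rate costs a little throughput, but it moves the
queue to a box you control; the SFQ leaf then keeps any one flow from
monopolizing that queue.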