From: Dave Taht
To: Bob Briscoe
Cc: bloat
Date: Wed, 25 Feb 2015 09:57:54 -0800
Subject: Re: [Bloat] RED against bufferbloat

On Wed, Feb 25, 2015 at 12:06 AM, Bob Briscoe wrote:
> Sahil,
>
> At 06:46 25/02/2015, Mikael Abrahamsson wrote:
>>
>> On Tue, 24 Feb 2015, sahil grover wrote:
>>
>>> (i) First of all, I want to know whether RED was implemented or not?
>>> If not, then what were the major reasons?
>>
>> RED has been available on most platforms, but it was generally not
>> turned on. It also needs configuration from an operator, and it's hard
>> to know how to configure it.
>
> About a decade ago my company (BT) widely deployed RED in the upstream
> 'head-end' of our global MPLS network, i.e. the likely bottleneck in
> the customer edge router where the customer's LAN traffic enters their
> access link. We deployed it as WRED, i.e. different configurations of
> RED across the various diffserv classes, in order to minimise queuing
> latency in all the classes, including the lowest priority class. A
> configuration calculator was developed to help the engineers during set
> up. We still use this setup successfully today, including for all our
> particularly latency-sensitive customers in the finance sector.

Thank you for more public detail on this than you made available before.

I note that free.fr deployed SFQ also about a decade ago, and
DRR+fq_codel three years ago. And I am told that SQF has deployed in
various places as well.

I remain bugged that I have not managed to see a study that actually
looked for FQ's characteristic signatures in data at scale. In polling
various ISPs at various conferences around the world - I realize this is
unscientific - uknof had 3? 4? RED users, and about 30% HFSC + SFQ for
downstream shaping among the rest of a room of about 120. nznog (about
150 in the room) - 0 RED, 1/3 "some form of FQ". Certainly in their
world of highly mixed RTTs on and off the island I can see why FQ was
preferred.

I also found from poking at their networks that they had an unbelievable
number of badly configured policers, with one test showing a policer
kicking in at about 3ms of buffering. That was what convinced me that I
should probably get around to finishing the kinder/gentler "bobbie"
policer - not that I have any hope it would deploy where needed in under
5 years.

> We did not deploy RED on our broadband platform (i.e. the public
> Internet), altho in retrospect we should have done, because any AQM is
> much better than none. We're fixing that now.

Applause. I have always said that if you can figure out how to deploy
RED, do so, and I believe I did pester the universe for better tools to
automagically configure it.
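To make the configuration gap concrete (the numbers below are purely
illustrative, not a recommendation for any real link), classic RED on
Linux wants the whole parameter set guessed up front, while the
fq-flavored qdiscs mostly just work with their defaults:

  # classic RED: every knob has to be sized for the link rate and RTT
  # at configure time (illustrative values only)
  tc qdisc add dev eth0 root red \
      limit 400000 min 30000 max 90000 avpkt 1000 \
      burst 55 bandwidth 10mbit probability 0.02

  # versus sfq or fq_codel, which need no per-link tuning to be useful
  tc qdisc add dev eth0 root fq_codel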
>>> (ii) Second, as we all know, RED keeps the average queue size from
>>> growing. So it also controls delay in a way, or we can say it is a
>>> solution to the bufferbloat problem. Then why was it not considered?
>>
>> It was designed to fix "bufferbloat" long before the bufferbloat word
>> was even invented. It's just that in practice, it doesn't work very
>> well. RED is configured with a drop probability slope at certain
>> buffer depths, and that's it. It doesn't react or change depending on
>> conditions. You have to guess at configure-time.
>>
>> What we need are mechanisms that work better in real life and that are
>> adaptive.
>
> If you were prepared to read a paper, I would have suggested:
> "The New AQM Kids on the Block: An Experimental Evaluation of CoDel and
> PIE"
>
> This compares CoDel and PIE against Adaptive RED, which was a variant
> of RED proposed by Sally Floyd & co-authors in 2001 and available since
> Linux kernel version 3.3. ARED addressed the configuration sensitivity
> problem of RED by adapting the parameters to link rate and load
> conditions.
>
> The paper convinced me that ARED is good enough (in the paper's
> simulations it was often better than PIE or CoDel), at least for links
> with fixed rate (or only occasionally varying rate like DSL).* This is
> important for us because it means we can consider deploying AQM by
> adding soft controls on top of the RED implementations we already have
> in existing equipment. This could reduce deployment completion time
> from decades to a few months.

ARED is easier to configure, but I gotta admit, after having lunch and
multiple conversations with the authors two years back, that I had
expected the follow-on work addressing my concerns to come out long
before now.
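(It is, literally, the existing red qdisc with one extra keyword on
kernels 3.3 and later - roughly like the below, and again treat the
numbers as illustrative rather than a tuned configuration:

  # same RED knobs as before, but max_p now adapts to load
  tc qdisc add dev eth0 root red \
      limit 400000 min 30000 max 90000 avpkt 1000 \
      burst 55 bandwidth 10mbit ecn adaptive

That only removes the need to get max_p exactly right; the rest of the
parameters still have to be in the right ballpark for the link.)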
" I am still waiting for competent papers tackling that , and increasingly bugged by the stream of papers that still don=C2=B4t address the problem or repeat the experiments that kicked off the whole bufferbloat effort in the first place. (and thus I have two rants on the codel list from yesterday). Thankfully - I have had a chance to review a few papers prepublication in the last few months and rejected them for a variety of truly egregious flaws... so nobody else has to suffer through them. I like toke=C2=B4s stuff FQ vs AQM work much better in comparison - which does tackle those subjects. FQ wins over AQM alone, FQ+AQM reduces the impact of hash collisions. He has an enormous backing set of data that wont fit into less than 5 papers, and sadly, the first core paper - needed for all the follow-ons covering things like ECN, or improvements to TCP - hasn't found publication outside the ietf as yet - although he did get a nice tour of stanford, caltech, google, and ucsb out of it, and got the stanford and google talks filmed - and I think got a couple changes to TCP made out of it... and there is still a pending improvement or two to codel... so even unpublished it has made more impact than most of the published stuff, combined. At caltech, Kleinrock was there (toke: "what a nice guy!") and kleinrock listened to the preso... went back to his office, pulled out a tattered blue mimeographed sheet from 1962 or so talking about generalized processor sharing, and said: "Yup. Only took 50 years..." > * I'm not sure ARED would be able to cope with the rapidly changing rate = of > a wireless link tho. It isn=C2=B4t. Neither are codel or pie (unmodified), as they work on packets not aggregates and TXOPs, nor are sensitive to retries, and other forms of jitter on a wifi link, like powersave. We have made a LOT of progress on fixing wifi lately on a variety of fronts and have most of the infrastructure architecturally in place to massively improve things there on the only two chipsets we can fix (mt72, ath9k). (please note there is so much else wrong in wifi that bufferbloat is only a piece of the overall list of things we plan to fix) The first bit of code for that has landed for review in linux-wireless - It is called minstrel-blues which does "coupled rate control and tx power management" - and it is *wonderful*. http://www.linuxplumbersconf.net/2014/ocw//system/presentations/2439/origin= al/Minstrel-Blues%20@LinuxPlumbers%202014.pdf There is no need to transmit at the highest power if your device is right next to the AP. We got some pretty good test results back from it last weekend. There is another fix to minstrel andrew just did that improves tcp performance by - well, still measuring - it looks GREAT. There are about 6 more things coming up for wifi over the next few months... which includes a modified version of a fq_codel like algorithm - so long as more funding lands... more people show up to help... and I stop paying attention to email. I am done talking, I am off doing. > HTH > > > Bob > > > > ________________________________________________________________ > Bob Briscoe, BT > _______________________________________________ > Bloat mailing list > Bloat@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/bloat --=20 Dave T=C3=A4ht Let's make wifi fast, less jittery and reliable again! https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb