From: Sebastian Moeller
Date: Fri, 20 Mar 2015 11:07:19 +0100
To: Greg White
Cc: "Livingood, Jason", cerowrt-devel@lists.bufferbloat.net, bloat
Subject: Re: [Cerowrt-devel] [Bloat] DOCSIS 3+ recommendation?
Message-Id: <122EA6FF-6C05-4137-8574-D91A93FF729C@gmx.de>

Hi All,

I guess I have nothing to say that most of you don't know already, but...
On Mar 20, 2015, at 00:18, Greg White wrote:

> Netalyzr is great for network geeks, hardly consumer-friendly, and even so
> the "network buffer measurements" part is buried in 150 other statistics.

	The bigger issue with Netalyzr is that it is a worst-case probe: an unrelenting UDP "flood" that does not concurrently measure the responsiveness/latency of unrelated flows. In all fairness, it does not even test the worst case, since it floods the up- and downlink sequentially, and it seems to use the same port for all packets. This kind of traffic is well suited to measuring the worst-case buffering that misbehaving ((D)DoS) flows encounter, not necessarily the amount of effective buffering that well-behaved flows encounter.

	And then the help text for the "network buffer measurements" section of the results report seems to be actually misleading, in that the DoS-like traffic used is assumed to be representative of normal traffic (it also does not allow for AQMs that manage normal, responsive traffic better).

	It would be so sweet if they could also measure the ICMP RTT (or another type of timestamped TCP or UDP flow) to, say, a well-connected CDN concurrently, to give a first approximation of the effect of link saturation on other competing flows; and then report the amount of change in that number caused by link saturation as the actual indicator of effective buffering...

> Why couldn't Ookla* add a simultaneous "ping" test to their throughput
> test? When was the last time someone leaned on them?
>
> *I realize not everyone likes the Ookla tool, but it is popular and about
> as "sexy" as you are going to get with a network performance tool.

	I think you are right; instead of trying to get better tools out, we might have a better chance of getting small modifications into existing tools.
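	For what it's worth, the concurrent-measurement idea could be sketched roughly as below. This is not Netalyzr's or Ookla's code; the hostnames, the TCP-handshake-as-ping substitute, and the crude download loop are all illustrative assumptions. The point is only the shape of the measurement: sample RTT while idle, sample RTT again while the link is saturated, and report the *difference*.

```python
# Hedged sketch of "latency under load": compare RTT samples taken on an
# idle link with samples taken while a bulk transfer saturates it.
# Hostnames/URLs below are placeholders, not real measurement endpoints.
import socket
import statistics
import threading
import time
import urllib.request

def tcp_rtt_ms(host, port=80, timeout=2.0):
    """One RTT sample: wall-clock time for a TCP handshake to `host` (a
    rough stand-in for an ICMP ping, usable without raw sockets)."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

def sample_rtts(host, n=10, interval=0.5):
    """Collect up to n RTT samples, spaced by `interval` seconds."""
    samples = []
    for _ in range(n):
        try:
            samples.append(tcp_rtt_ms(host))
        except OSError:
            pass  # drop lost probes rather than abort the whole run
        time.sleep(interval)
    return samples

def induced_latency_ms(idle_samples, loaded_samples):
    """The indicator of effective buffering: how much the median RTT
    rises once the link is saturated, rather than the absolute value."""
    return statistics.median(loaded_samples) - statistics.median(idle_samples)

def saturate(url, stop):
    """Crude downlink filler: download `url` repeatedly until `stop` is set."""
    while not stop.is_set():
        try:
            urllib.request.urlopen(url, timeout=5).read()
        except OSError:
            pass

# A driver (not run here) would look like:
#   idle = sample_rtts("cdn.example.net")
#   stop = threading.Event()
#   t = threading.Thread(target=saturate, args=("http://example.com/big.bin", stop))
#   t.start(); loaded = sample_rtts("cdn.example.net"); stop.set(); t.join()
#   print("induced latency:", induced_latency_ms(idle, loaded), "ms")
```

	The key design choice is that `induced_latency_ms` reports a delta, so a link with large absolute RTT but well-managed queues (e.g. behind an AQM) scores well, whereas Netalyzr's flood-style number would penalize it.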
Best Regards
	Sebastian

> -Greg
>
> On 3/19/15, 2:29 PM, "dpreed@reed.com" wrote:
>
>> I do think engineers operating networks get it, and that Comcast's
>> engineers really get it, as I clarified in my followup note.
>>
>> The issue is indeed prioritization of investment, engineering resources
>> and management attention. The teams on the engineering side at Comcast
>> have been the leaders in "bufferbloat minimizing" work, and I think they
>> should get more recognition for that.
>>
>> I disagree a little bit about not having a test that shows the issue, and
>> the value the test would have in demonstrating the issue to users.
>> Netalyzr has been doing an amazing job on this since before the
>> bufferbloat term was invented. Every time I've talked about this issue
>> I've suggested running Netalyzr, so I have a personal set of comments
>> from people all over the world who run Netalyzr on their home networks,
>> on hotel networks, etc.
>>
>> When I have brought up these measurements from Netalyzr (which are not
>> aimed at showing the problem as users experience it) I observe an
>> interesting reaction from many industry insiders: the results are not
>> "sexy enough for stupid users" and also "no one will care".
>>
>> I think the reaction characterizes the problem correctly - but the second
>> part is the most serious objection. People don't need a measurement
>> tool, they need to know that this is why their home network sucks
>> sometimes.
>>
>> On Thursday, March 19, 2015 3:58pm, "Livingood, Jason" said:
>>
>>> On 3/19/15, 1:11 PM, "Dave Taht" wrote:
>>>
>>>> On Thu, Mar 19, 2015 at 6:53 AM, wrote:
>>>>> How many years has it been since Comcast said they were going to fix
>>>>> bufferbloat in their network within a year?
>>>
>>> I'm not sure anyone ever said it'd take a year.
If someone did (even if it
>>> was me) then it was in the days when the problem appeared less
>>> complicated than it is, and I apologize for that. Let's face it - the
>>> problem is complex and the software that has to be fixed is everywhere.
>>> As I said about IPv6: if it were easy, it'd be done by now. ;-)
>>>
>>>>> It's almost as if the cable companies don't want OTT video or
>>>>> simultaneous FTP and interactive gaming to work. Of course not. They'd
>>>>> never do that.
>>>
>>> Sorry, but that seems a bit unfair. It flies in the face of what we have
>>> done and are doing. We've underwritten some of Dave's work, we got
>>> CableLabs to underwrite AQM work, and I personally pushed like heck to
>>> get AQM built into the default D3.1 spec (it had CTO-level awareness &
>>> support, and was due to Greg White's work at CableLabs). We are starting
>>> to field test D3.1 gear now, by the way. We made some bad bets too, such
>>> as trying to underwrite an OpenWRT-related program with ISC, but not
>>> every tactic will always be a winner.
>>>
>>> As for existing D3.0 gear, it's not for lack of trying. Has any DOCSIS
>>> network of any scale in the world solved it? If so, I have something to
>>> use to learn from and apply here at Comcast - and I'd **love** an
>>> introduction to someone who has, so I can get this info.
>>>
>>> But usually there are rational explanations for why something is still
>>> not done. One of them is that the at-scale operational issues are more
>>> complicated than some people realize. And there is always a case of
>>> prioritization - meaning things like running out of IPv4 addresses and
>>> not having service trump more subtle things like buffer bloat (and the
>>> effort to get vendors to support v6 has been tremendous).
>>>
>>>> I do understand there are strong forces against us, especially in the
>>>> USA.
>>>
>>> I'm not sure there are any forces against this issue. It's more a
>>> question of awareness - it is not apparent that it is more urgent than
>>> other work in everyone's backlog. For example, the number of ISP
>>> customers even aware of buffer bloat is probably 0.001%; if customers
>>> aren't asking for it, the product managers have a tough time arguing to
>>> prioritize buffer bloat work over new feature X or Y.
>>>
>>> One suggestion I have made to increase awareness is that there be a
>>> nice, web-based, consumer-friendly latency-under-load / bloat test that
>>> you could get people to run as they do speed tests today. (If someone
>>> thinks they can actually deliver this, I will try to fund it - ping me
>>> off-list.) I also think a better job can be done explaining buffer
>>> bloat - it's hard to make an 'elevator pitch' about it.
>>>
>>> It reminds me a bit of IPv6 several years ago. Rather than saying in
>>> essence 'you operators are dummies' for not already fixing this, maybe
>>> assume the engineers all 'get it' and want to do it. Because we really
>>> do get it and want to do something about it. Then ask those operators
>>> what they need to convince their leadership and their suppliers and
>>> product managers and whomever else that it needs to be resourced more
>>> effectively (see above for example).
>>>
>>> We're at least part of the way there in DOCSIS networks. It is in D3.1
>>> by default, and we're starting trials now. And probably within 18-24
>>> months we won't buy any DOCSIS CPE that is not 3.1.
>>>
>>> The question for me is how and when to address it in DOCSIS 3.0.
>>>
>>> - Jason
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel