<font face="times new roman" size="2"><p style="margin:0;padding:0;">Re: "the only thing that counts is peak throughput" - it's a pretty cynical stance to say "I'm a professional engineer, but the marketing guys don't have a clue, so I'm not going to build a usable system".</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">It's even worse when fellow engineers *disparage* or downplay the work of engineers who are actually trying hard to fix this across the entire Internet.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">Does competition require such foolishness? Have any of the folks who work for operators and equipment suppliers followed Richard Woundy's lead (he is SVP at Comcast) and tried to *fix* the problem and get the fix deployed? Richard is an engineer, and took the time to develop a proposed fix to DOCSIS 3.0, and also to write a "best practices" document about how to deploy that fix. The one thing he could not do was get Comcast or its competitors to invest money in deploying the fix more rapidly.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">First, it's important to measure the "right thing" - which in this case is how much queueing *delay* builds up in the bottleneck link under load, and how bad the user experience becomes when that queueing delay stabilizes at more than about 20 msec.</p>
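<p style="margin:0;padding:0;">To put that 20 msec figure in perspective, here is a minimal back-of-the-envelope sketch (my own illustrative numbers, not measurements from this thread): the delay a full buffer adds is simply its size divided by the rate at which the bottleneck link drains it.</p>

```python
# Back-of-the-envelope sketch (illustrative, not from this thread):
# the queueing delay a standing buffer adds is its size divided by
# the drain rate of the bottleneck link.
def queue_delay_ms(buffer_bytes, link_bps):
    """Milliseconds of delay a full buffer of buffer_bytes adds on a link."""
    return buffer_bytes * 8 / link_bps * 1000

# A 256 KB buffer draining at 8 Mbps:
print(round(queue_delay_ms(256 * 1024, 8e6)))  # ~262 ms, far above 20 msec
```

<p style="margin:0;padding:0;">A quarter-megabyte buffer - not unusual in the modems of that era - blows past the 20 msec budget by an order of magnitude.</p>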
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">That cannot be determined by measuring throughput, which is all the operators measure. (I have the sworn testimony of every provider in Canada: when asked by the CRTC "do you measure latency on your Internet service?", the answer was uniformly "we measure throughput *only*, and by Little's Lemma we can determine latency.")</p>
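<p style="margin:0;padding:0;">The flaw in that argument is worth spelling out: Little's Lemma relates three quantities, and throughput alone pins down only one of them. A hedged sketch with illustrative numbers (mine, not operator data):</p>

```python
# Little's law: L = lambda * W (queue occupancy = throughput * delay).
# Measuring throughput (lambda) alone therefore does NOT determine the
# latency W: the same throughput is consistent with wildly different
# delays, depending on the standing queue L. Numbers are illustrative.
def latency_ms(queue_bytes, throughput_bytes_per_s):
    """Delay implied by a standing queue at a given drain rate."""
    return queue_bytes / throughput_bytes_per_s * 1000

rate = 1.25e6  # 10 Mbps expressed in bytes/s
print(latency_ms(12_500, rate))     # small queue   -> 10.0 ms
print(latency_ms(1_250_000, rate))  # bloated queue -> 1000.0 ms
```

<p style="margin:0;padding:0;">Same measured throughput, two orders of magnitude apart in latency - which is exactly why measuring throughput *only* tells you nothing about the queue.</p>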
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">Engineers actually have a positive duty to society, not just to profits. And actually, in this case, better service *would* lead to more profits! Not directly, but because there is competition for experience, even more than for "bitrate", despite the claims of engineers.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">So talk to your CEOs. When I've done so, they say they have *never* heard of the issue. Maybe that's due to denial throughout the organization.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">(By the way, what woke Comcast up was getting hauled in front of the FCC for deploying DPI-based RST injection that disrupted large classes of connections - because they had not realized what their problem was, and the marketers wanted to blame "pirates" for clogging the circuits - a claim for which they had no data other than self-serving and proprietary "studies" from vendors like Sandvine and Ellacoya.)</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">Actual measurements of actual network behavior revealed that bufferbloat was the cause of load-induced disruptive events in *every* case I have observed, and I've looked at a lot of them. It used to happen on Frame Relay links all the time, and in internal datacenter TCP/IP deployments.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">So measure first. Measure the right thing (latency growth under load). Ask "why is this happening?" and don't jump to the non sequitur (pirates or "interference") without proving that it actually explains the entire phenomenon (something Comcast failed to do, instead reasoning from anecdotal links between bittorrent and the problem).</p>
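<p style="margin:0;padding:0;">The measurement itself is not hard. A minimal sketch (the helper and the RTT samples below are hypothetical, mine rather than anything from this thread) of the one number worth tracking - latency growth under load:</p>

```python
# Hypothetical helper: quantify latency growth under load from two sets of
# RTT samples -- one taken with the link idle, one taken while a single
# bulk transfer saturates it. Medians resist the occasional outlier ping.
from statistics import median

def latency_growth_ms(idle_rtts_ms, loaded_rtts_ms):
    """Median RTT under load minus median idle RTT, in milliseconds."""
    return median(loaded_rtts_ms) - median(idle_rtts_ms)

idle = [22, 25, 23, 24]          # ms, link unloaded
loaded = [820, 1100, 950, 1300]  # ms, during a single bulk TCP download
print(latency_growth_ms(idle, loaded))  # 1001.5 ms of bufferbloat
```

<p style="margin:0;padding:0;">Collect the idle samples with the link quiet, collect the loaded samples while one long-running TCP download runs, and compare; if the difference is in the hundreds of milliseconds, the bottleneck buffer is bloated.</p>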
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">And then when your measurements are right, and you can demonstrate a solution that *works* (rather than something that in academia would be an "interesting Ph.D. proposal"), then deploy it and monitor it.</p>
<p style="margin:0;padding:0;"> </p>
<p style="margin:0;padding:0;">-----Original Message-----<br />From: "Ingemar Johansson S" <ingemar.s.johansson@ericsson.com><br />Sent: Tuesday, January 8, 2013 8:19am<br />To: "Keith Winstein" <keithw@mit.edu><br />Cc: "mallman@icir.org" <mallman@icir.org>, "end2end-interest@postel.org" <end2end-interest@postel.org>, "bloat@lists.bufferbloat.net" <bloat@lists.bufferbloat.net><br />Subject: Re: [e2e] bufferbloat paper<br /><br /></p>
<div id="SafeStyles1357657916">
<p style="margin:0;padding:0;">OK...<br /><br />Likely means that AQM is not turned on in the eNodeB, can't be 100% sure though but it seems so.<br />At least one company I know of offers AQM in eNodeB. However one problem seems to be that the only thing that counts is peak throughput, you have probably seen these "up to X Mbps" slogans too. Competition is fierce and for this reason it could be tempting to turn off AQM as it may reduce peak throughput slightly. I know and most people on these mailing lists know that peak throughput is the "megapixels" of the internet, one needs to address other aspects in the benchmarks.<br /><br />/Ingemar<br /><br /><br />> -----Original Message-----<br />> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of Keith<br />> Winstein<br />> Sent: den 8 januari 2013 13:44<br />> To: Ingemar Johansson S<br />> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;<br />> mallman@icir.org<br />> Subject: Re: [e2e] bufferbloat paper<br />> <br />> Hello Ingemar,<br />> <br />> Thanks for your feedback and your own graph.<br />> <br />> This is testing the LTE downlink, not the uplink. It was a TCP download.<br />> <br />> There was zero packet loss on the ICMP pings. 
I did not measure the TCP<br />> flow itself but I suspect packet loss was minimal if not also zero.<br />> <br />> Best,<br />> Keith<br />> <br />> On Tue, Jan 8, 2013 at 7:19 AM, Ingemar Johansson S<br />> <ingemar.s.johansson@ericsson.com> wrote:<br />> > Hi<br />> ><br />> > Interesting graph, thanks for sharing it.<br />> > It is likely that the delay is only limited by TCPs maximum congestion<br />> window, for instance at T=70 the thoughput is ~15Mbps and the RTT~0.8s,<br />> giving a congestion window of 1.5e7/8/0.8 = 2343750 bytes, recalculations at<br />> other time instants seems to give a similar figure.<br />> > Do you see any packet loss ?<br />> ><br />> > The easiest way to mitigate bufferbloat in LTE UL is AQM in the terminal as<br />> the packets are buffered there.<br />> > The eNodeB does not buffer up packets in UL* so I would in this particular<br />> case argue that the problem is best solved in the terminal.<br />> > Implementing AQM for UL in eNodeB is probably doable but AFAIK nothing<br />> that is standardized also I cannot tell how feasible it is.<br />> ><br />> > /Ingemar<br />> ><br />> > BTW... UL = uplink<br />> > * RLC-AM retransmissions can be said to cause delay in the eNodeB but<br />> then again the main problem is that packets are being queued up in the<br />> terminals sendbuffer. 
The MAC layer HARQ can too cause some delay but<br />> this is a necessity to get an optimal performance for LTE, moreover the<br />> added delay due to HARQ reTx is marginal in this context.<br />> ><br />> >> -----Original Message-----<br />> >> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of<br />> >> Keith Winstein<br />> >> Sent: den 8 januari 2013 11:42<br />> >> To: Ingemar Johansson S<br />> >> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;<br />> >> mallman@icir.org<br />> >> Subject: Re: [e2e] bufferbloat paper<br />> >><br />> >> I'm sorry to report that the problem is not (in practice) better on<br />> >> LTE, even though the standard may support features that could be used<br />> >> to mitigate the problem.<br />> >><br />> >> Here is a plot (also at<br />> >> http://web.mit.edu/keithw/www/verizondown.png)<br />> >> from a computer tethered to a Samsung Galaxy Nexus running Android<br />> >> 4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.<br />> >><br />> >> The phone was stationary during the test and had four bars (a full<br />> >> signal) of "4G" service. The computer ran a single full-throttle TCP<br />> >> CUBIC download from one well-connected but unremarkable Linux host<br />> >> (ssh hostname 'cat /dev/urandom') while pinging at 4 Hz across the<br />> >> same tethered LTE interface. There were zero lost pings during the<br />> >> entire test<br />> >> (606/606 delivered).<br />> >><br />> >> The RTT grows to 1-2 seconds and stays stable in that region for most<br />> >> of the test, except for one 12-second period of >5 seconds RTT. We<br />> >> have also tried measuring only "one-way delay" (instead of RTT) by<br />> >> sending UDP datagrams out of the computer's Ethernet interface over<br />> >> the Internet, over LTE to the cell phone and back to the originating<br />> >> computer via USB tethering. 
This gives similar results to ICMP ping.<br />> >><br />> >> I don't doubt that the carriers could implement reasonable AQM or<br />> >> even a smaller buffer at the head-end, or that the phone could<br />> >> implement AQM for the uplink. For that matter I'm not sure the details of<br />> the air interface (LTE vs.<br />> >> UMTS vs. 1xEV-DO) necessarily makes a difference here.<br />> >><br />> >> But at present, at least with AT&T, Verizon, Sprint and T-Mobile in<br />> >> Eastern Massachusetts, the carrier is willing to queue and hold on to<br />> >> packets for >1 second. Even a single long-running TCP download (>15<br />> >> megabytes) is enough to tickle this problem.<br />> >><br />> >> In the CCR paper, even flows >1 megabyte were almost nonexistent,<br />> >> which may be part of how these findings are compatible.<br />> >><br />> >> On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S<br />> >> <ingemar.s.johansson@ericsson.com> wrote:<br />> >> > Hi<br />> >> ><br />> >> > Include Mark's original post (below) as it was scrubbed<br />> >> ><br />> >> > I don't have an data of bufferbloat for wireline access and the<br />> >> > fiber<br />> >> connection that I have at home shows little evidence of bufferbloat.<br />> >> ><br />> >> > Wireless access seems to be a different story though.<br />> >> > After reading the "Tackling Bufferbloat in 3G/4G Mobile Networks"<br />> >> > by Jiang et al. I decided to make a few measurements of my own<br />> >> > (hope that the attached png is not removed)<br />> >> ><br />> >> > The measurement setup was quite simple, a Laptop with Ubuntu 12.04<br />> >> with a 3G modem attached.<br />> >> > The throughput was computed from the wireshark logs and RTT was<br />> >> measured with ping (towards a webserver hosted by Akamai). 
The<br />> >> location is LuleƄ city centre, Sweden (fixed locations) and the<br />> >> measurement was made at lunchtime on Dec 6 2012 .<br />> >> ><br />> >> > During the measurement session I did some close to normal websurf,<br />> >> including watching embedded videoclips and youtube. In some cases the<br />> >> effects of bufferbloat was clearly noticeable.<br />> >> > Admit that this is just one sample, a more elaborate study with<br />> >> > more<br />> >> samples would be interesting to see.<br />> >> ><br />> >> > 3G has the interesting feature that packets are very seldom lost in<br />> >> downlink (data going to the terminal). I did not see a single packet<br />> >> loss in this test!. I wont elaborate on the reasons in this email.<br />> >> > I would however believe that LTE is better off in this respect as<br />> >> > long as<br />> >> AQM is implemented, mainly because LTE is a packet-switched<br />> architecture.<br />> >> ><br />> >> > /Ingemar<br />> >> ><br />> >> > Marks post.<br />> >> > ********<br />> >> > [I tried to post this in a couple places to ensure I hit folks who<br />> >> > would be interested. If you end up with multiple copies of the<br />> >> > email, my apologies. --allman]<br />> >> ><br />> >> > I know bufferbloat has been an interest of lots of folks recently.<br />> >> > So, I thought I'd flog a recent paper that presents a little data<br />> >> > on the topic ...<br />> >> ><br />> >> > Mark Allman. Comments on Bufferbloat, ACM SIGCOMM Computer<br />> >> > Communication Review, 43(1), January 2013.<br />> >> > http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf<br />> >> ><br />> >> > Its an initial paper. I think more data would be great!<br />> >> ><br />> >> > allman<br />> >> ><br />> >> ><br />> >> > --<br />> >> > http://www.icir.org/mallman/<br />> >> ><br />> >> ><br />> >> ><br />> >> ><br /><br /></p>
</div></font>