[Bloat] [e2e] bufferbloat paper

Ingemar Johansson S ingemar.s.johansson at ericsson.com
Tue Jan 8 07:19:17 EST 2013


Hi

Interesting graph, thanks for sharing it.
It is likely that the delay is limited only by TCP's maximum congestion window. For instance, at T=70 the throughput is ~15 Mbps and the RTT ~0.8 s, giving a congestion window (the bandwidth-delay product) of 1.5e7/8 * 0.8 = 1.5e6 bytes; recalculating at other time instants seems to give a similar figure.
Do you see any packet loss?
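
As a quick check of that arithmetic (the throughput and RTT values are simply read off the graph, so the numbers are illustrative only), the implied window is just the bandwidth-delay product:

    # Sketch: congestion window implied by an observed throughput/RTT pair,
    # assuming the sender is purely window-limited.
    def implied_cwnd_bytes(throughput_bps: float, rtt_s: float) -> float:
        return throughput_bps / 8 * rtt_s

    print(implied_cwnd_bytes(15e6, 0.8))  # -> 1500000.0 bytes at T=70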

The easiest way to mitigate bufferbloat in the LTE UL is AQM in the terminal, as that is where the packets are buffered.
The eNodeB does not buffer up packets in the UL*, so in this particular case I would argue that the problem is best solved in the terminal.
Implementing AQM for the UL in the eNodeB is probably doable, but AFAIK nothing of the kind is standardized, and I cannot tell how feasible it would be.
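
To make "AQM in the terminal" concrete, here is a minimal sketch of one well-known approach, a CoDel-style drop decision applied to the terminal's uplink queue. This is an illustration, not anything from the LTE specs; it omits CoDel's control law, and the 5 ms / 100 ms constants are the usual CoDel defaults rather than anything LTE-specific:

    # Drop a packet when queueing delay ("sojourn time") has stayed above
    # TARGET for a full INTERVAL. Simplified: real CoDel also shortens the
    # interval between successive drops (the sqrt control law).
    import time

    TARGET = 0.005      # tolerated standing-queue delay, seconds
    INTERVAL = 0.100    # how long delay must exceed TARGET before a drop

    first_above = None  # when sojourn time first exceeded TARGET

    def should_drop(enqueue_ts: float) -> bool:
        global first_above
        sojourn = time.monotonic() - enqueue_ts
        if sojourn < TARGET:
            first_above = None           # queue has drained below target
            return False
        if first_above is None:
            first_above = time.monotonic()
            return False
        return time.monotonic() - first_above >= INTERVAL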

/Ingemar

BTW... UL = uplink
* RLC-AM retransmissions can be said to cause delay in the eNodeB, but then again the main problem is that packets are queued up in the terminal's send buffer. The MAC-layer HARQ can also cause some delay, but this is a necessity for optimal LTE performance; moreover, the added delay due to HARQ reTx is marginal in this context.

> -----Original Message-----
> From: winstein at gmail.com [mailto:winstein at gmail.com] On Behalf Of Keith
> Winstein
> Sent: 8 January 2013 11:42
> To: Ingemar Johansson S
> Cc: end2end-interest at postel.org; bloat at lists.bufferbloat.net;
> mallman at icir.org
> Subject: Re: [e2e] bufferbloat paper
> 
> I'm sorry to report that the problem is not (in practice) better on LTE, even
> though the standard may support features that could be used to mitigate the
> problem.
> 
> Here is a plot (also at http://web.mit.edu/keithw/www/verizondown.png)
> from a computer tethered to a Samsung Galaxy Nexus running Android
> 4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.
> 
> The phone was stationary during the test and had four bars (a full
> signal) of "4G" service. The computer ran a single full-throttle TCP CUBIC
> download from one well-connected but unremarkable Linux host (ssh
> hostname 'cat /dev/urandom') while pinging at 4 Hz across the same
> tethered LTE interface. There were zero lost pings during the entire test
> (606/606 delivered).
> 
> The RTT grows to 1-2 seconds and stays stable in that region for most of the
> test, except for one 12-second period of >5 seconds RTT. We have also tried
> measuring only "one-way delay" (instead of RTT) by sending UDP datagrams
> out of the computer's Ethernet interface over the Internet, over LTE to the
> cell phone and back to the originating computer via USB tethering. This gives
> similar results to ICMP ping.
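> 
> (For illustration only, since the exact tool is not named here: a
> bare-bones version of that timestamped-UDP probe might look like the
> sketch below. PHONE_ADDR stands in for a trivial reflector listening on
> the LTE side that just echoes each datagram back to its sender.)
> 
>     # Send timestamped UDP datagrams toward the LTE side at 4 Hz; the
>     # reflector echoes each one back over USB tethering, and the total
>     # out-and-back delay is printed on receipt.
>     import socket, struct, threading, time
> 
>     PHONE_ADDR = ("192.0.2.1", 9999)   # placeholder address/port
> 
>     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>     sock.bind(("0.0.0.0", 9998))
> 
>     def receiver():
>         while True:
>             data, _ = sock.recvfrom(64)
>             seq, sent = struct.unpack("!Id", data)
>             print(f"seq={seq} delay={time.time() - sent:.3f}s")
> 
>     threading.Thread(target=receiver, daemon=True).start()
> 
>     for seq in range(606):             # ~151 s at 4 Hz, as in the test
>         sock.sendto(struct.pack("!Id", seq, time.time()), PHONE_ADDR)
>         time.sleep(0.25)               # fixed 4 Hz, regardless of replies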
> 
> I don't doubt that the carriers could implement reasonable AQM or even a
> smaller buffer at the head-end, or that the phone could implement AQM for
> the uplink. For that matter, I'm not sure the details of the air interface (LTE vs.
> UMTS vs. 1xEV-DO) necessarily make a difference here.
> 
> But at present, at least with AT&T, Verizon, Sprint and T-Mobile in Eastern
> Massachusetts, the carrier is willing to queue and hold on to packets for >1
> second. Even a single long-running TCP download (>15
> megabytes) is enough to tickle this problem.
> 
> In the CCR paper, even flows >1 megabyte were almost nonexistent, which
> may be part of how these findings are compatible.
> 
> On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S
> <ingemar.s.johansson at ericsson.com> wrote:
> > Hi
> >
> > Include Mark's original post (below) as it was scrubbed
> >
> > I don't have any data on bufferbloat for wireline access, and the fiber
> > connection that I have at home shows little evidence of bufferbloat.
> >
> > Wireless access seems to be a different story though.
> > After reading "Tackling Bufferbloat in 3G/4G Mobile Networks" by
> > Jiang et al., I decided to make a few measurements of my own (I hope
> > the attached PNG is not removed).
> >
> > The measurement setup was quite simple: a laptop running Ubuntu 12.04
> > with a 3G modem attached.
> > The throughput was computed from the Wireshark logs and the RTT was
> > measured with ping (towards a web server hosted by Akamai). The location
> > is Luleå city centre, Sweden (a fixed location), and the measurement was
> > made at lunchtime on Dec 6, 2012.
> >
> > During the measurement session I did some close-to-normal web surfing,
> > including watching embedded video clips and YouTube. In some cases the
> > effects of bufferbloat were clearly noticeable.
> > I admit that this is just one sample; a more elaborate study with more
> > samples would be interesting to see.
> >
> > 3G has the interesting feature that packets are very seldom lost in the
> > downlink (data going to the terminal). I did not see a single packet loss
> > in this test! I won't elaborate on the reasons in this email.
> > I would, however, expect that LTE is better off in this respect as long
> > as AQM is implemented, mainly because LTE is a packet-switched
> > architecture.
> >
> > /Ingemar
> >
> > Mark's post.
> > ********
> > [I tried to post this in a couple places to ensure I hit folks who
> > would  be interested.  If you end up with multiple copies of the
> > email, my  apologies.  --allman]
> >
> > I know bufferbloat has been an interest of lots of folks recently.
> > So, I thought I'd flog a recent paper that presents a little data on
> > the topic ...
> >
> >     Mark Allman.  Comments on Bufferbloat, ACM SIGCOMM Computer
> >     Communication Review, 43(1), January 2013.
> >     http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
> >
> > It's an initial paper. I think more data would be great!
> >
> > allman
> >
> >
> > --
> > http://www.icir.org/mallman/


