General list for discussing Bufferbloat
* Re: [Bloat] bufferbloat paper
@ 2013-01-08  7:35 Ingemar Johansson S
  2013-01-08 10:42 ` [Bloat] [e2e] " Keith Winstein
                   ` (2 more replies)
  0 siblings, 3 replies; 43+ messages in thread
From: Ingemar Johansson S @ 2013-01-08  7:35 UTC (permalink / raw)
  To: end2end-interest, bloat

[-- Attachment #1: Type: text/plain, Size: 2083 bytes --]

Hi

I include Mark's original post (below) as it was scrubbed.

I don't have any data on bufferbloat for wireline access, and the fiber connection that I have at home shows little evidence of bufferbloat.

Wireless access seems to be a different story, though.
After reading "Tackling Bufferbloat in 3G/4G Mobile Networks" by Jiang et al., I decided to make a few measurements of my own (I hope that the attached PNG is not removed).

The measurement setup was quite simple: a laptop running Ubuntu 12.04 with a 3G modem attached.
The throughput was computed from the Wireshark logs and the RTT was measured with ping (towards a web server hosted by Akamai). The location is Luleå city centre, Sweden (a fixed location), and the measurement was made at lunchtime on Dec 6, 2012.

During the measurement session I did some close-to-normal web surfing, including watching embedded video clips and YouTube. In some cases the effects of bufferbloat were clearly noticeable.
I admit that this is just one sample; a more elaborate study with more samples would be interesting to see.
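
A minimal sketch of this kind of ping-based RTT sampling (TARGET is a placeholder; the actual Akamai-hosted server is not named here):

#!/usr/bin/env python3
# Minimal sketch of the ping-based RTT sampling described above.
# TARGET is a placeholder; the actual Akamai-hosted server is not named.
import re
import subprocess

TARGET = "www.example.com"  # placeholder for the pinged web server

def sample_rtts(count=60, interval=1.0):
    """Run ping and return the RTT samples (in milliseconds)."""
    out = subprocess.run(
        ["ping", "-c", str(count), "-i", str(interval), TARGET],
        capture_output=True, text=True).stdout
    return [float(m) for m in re.findall(r"time=([0-9.]+) ms", out)]

rtts = sample_rtts()
if rtts:
    print(f"{len(rtts)} samples: min {min(rtts):.1f} ms, max {max(rtts):.1f} ms")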

3G has the interesting feature that packets are very seldom lost in the downlink (data going to the terminal). I did not see a single packet loss in this test! I won't elaborate on the reasons in this email.
I would, however, expect LTE to be better off in this respect as long as AQM is implemented, mainly because LTE is a packet-switched architecture.
 
/Ingemar

Mark's post.
********
[I tried to post this in a couple places to ensure I hit folks who would
 be interested.  If you end up with multiple copies of the email, my
 apologies.  --allman]

I know bufferbloat has been an interest of lots of folks recently.  So,
I thought I'd flog a recent paper that presents a little data on the
topic ...

    Mark Allman.  Comments on Bufferbloat, ACM SIGCOMM Computer
    Communication Review, 43(1), January 2013.
    http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf

It's an initial paper.  I think more data would be great!

allman


--
http://www.icir.org/mallman/





[-- Attachment #2: Bufferbloat-3G.png --]
[-- Type: image/png, Size: 267208 bytes --]


* Re: [Bloat] [e2e] bufferbloat paper
  2013-01-08  7:35 [Bloat] bufferbloat paper Ingemar Johansson S
@ 2013-01-08 10:42 ` Keith Winstein
  2013-01-08 12:19   ` Ingemar Johansson S
  2013-01-09 14:07   ` Michael Richardson
  2013-01-08 15:04 ` dpreed
  2013-01-18 22:00 ` [Bloat] " Haiqing Jiang
  2 siblings, 2 replies; 43+ messages in thread
From: Keith Winstein @ 2013-01-08 10:42 UTC (permalink / raw)
  To: Ingemar Johansson S; +Cc: mallman, end2end-interest, bloat

[-- Attachment #1: Type: text/plain, Size: 4093 bytes --]

I'm sorry to report that the problem is not (in practice) better on
LTE, even though the standard may support features that could be used
to mitigate the problem.

Here is a plot (also at http://web.mit.edu/keithw/www/verizondown.png)
from a computer tethered to a Samsung Galaxy Nexus running Android
4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.

The phone was stationary during the test and had four bars (a full
signal) of "4G" service. The computer ran a single full-throttle TCP
CUBIC download from one well-connected but unremarkable Linux host
(ssh hostname 'cat /dev/urandom') while pinging at 4 Hz across the
same tethered LTE interface. There were zero lost pings during the
entire test (606/606 delivered).
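
A minimal sketch of this kind of saturate-and-ping test (HOST is a placeholder, and this is not the original test harness):

#!/usr/bin/env python3
# Sketch of the test above: one full-throttle TCP download (CUBIC is the
# Linux default) while pinging at 4 Hz across the same interface.
# HOST is a placeholder for the well-connected Linux host.
import subprocess

HOST = "host.example.net"  # placeholder

download = subprocess.Popen(
    ["ssh", HOST, "cat /dev/urandom"],
    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
try:
    # 606 pings at 4 Hz, matching the ~150 s test described above.
    subprocess.run(["ping", "-i", "0.25", "-c", "606", HOST])
finally:
    download.terminate()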

The RTT grows to 1-2 seconds and stays stable in that region for most
of the test, except for one 12-second period of >5 seconds RTT. We
have also tried measuring only "one-way delay" (instead of RTT) by
sending UDP datagrams out of the computer's Ethernet interface over
the Internet, over LTE to the cell phone and back to the originating
computer via USB tethering. This gives similar results to ICMP ping.
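
The same-clock trick is what makes this one-way measurement possible without synchronized clocks: the probe leaves and returns on the same machine. A rough sketch, assuming the phone forwards the chosen UDP port back to the tethered computer (the address and port are placeholders):

#!/usr/bin/env python3
# Sketch of the one-way-delay measurement above: the datagram leaves via
# Ethernet and returns via the tethered LTE interface of the same computer,
# so one monotonic clock timestamps both ends. Assumes packets sent to
# LTE_ADDR:PORT are forwarded to the tethered host; both are placeholders.
import socket
import time

LTE_ADDR, PORT = "203.0.113.7", 9000  # placeholders

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("", PORT))  # receives the probe on the tethered side

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(20):
    tx.sendto(repr(time.monotonic()).encode(), (LTE_ADDR, PORT))
    data, _ = rx.recvfrom(64)
    print(f"one-way delay: {(time.monotonic() - float(data)) * 1000:.1f} ms")
    time.sleep(0.25)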

I don't doubt that the carriers could implement reasonable AQM or even
a smaller buffer at the head-end, or that the phone could implement
AQM for the uplink. For that matter I'm not sure the details of the
air interface (LTE vs. UMTS vs. 1xEV-DO) necessarily make a
difference here.

But at present, at least with AT&T, Verizon, Sprint and T-Mobile in
Eastern Massachusetts, the carrier is willing to queue and hold on to
packets for >1 second. Even a single long-running TCP download (>15
megabytes) is enough to tickle this problem.

In the CCR paper, even flows >1 megabyte were almost nonexistent,
which may be part of how these findings are compatible.

On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S
<ingemar.s.johansson@ericsson.com> wrote:
> Hi
>
> I include Mark's original post (below) as it was scrubbed.
>
> I don't have any data on bufferbloat for wireline access, and the fiber connection that I have at home shows little evidence of bufferbloat.
>
> Wireless access seems to be a different story, though.
> After reading "Tackling Bufferbloat in 3G/4G Mobile Networks" by Jiang et al., I decided to make a few measurements of my own (I hope that the attached PNG is not removed).
>
> The measurement setup was quite simple: a laptop running Ubuntu 12.04 with a 3G modem attached.
> The throughput was computed from the Wireshark logs and the RTT was measured with ping (towards a web server hosted by Akamai). The location is Luleå city centre, Sweden (a fixed location), and the measurement was made at lunchtime on Dec 6, 2012.
>
> During the measurement session I did some close-to-normal web surfing, including watching embedded video clips and YouTube. In some cases the effects of bufferbloat were clearly noticeable.
> I admit that this is just one sample; a more elaborate study with more samples would be interesting to see.
>
> 3G has the interesting feature that packets are very seldom lost in the downlink (data going to the terminal). I did not see a single packet loss in this test! I won't elaborate on the reasons in this email.
> I would, however, expect LTE to be better off in this respect as long as AQM is implemented, mainly because LTE is a packet-switched architecture.
>
> /Ingemar
>
> Mark's post.
> ********
> [I tried to post this in a couple places to ensure I hit folks who would
>  be interested.  If you end up with multiple copies of the email, my
>  apologies.  --allman]
>
> I know bufferbloat has been an interest of lots of folks recently.  So,
> I thought I'd flog a recent paper that presents a little data on the
> topic ...
>
>     Mark Allman.  Comments on Bufferbloat, ACM SIGCOMM Computer
>     Communication Review, 43(1), January 2013.
>     http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>
> It's an initial paper.  I think more data would be great!
>
> allman
>
>
> --
> http://www.icir.org/mallman/
>
>
>
>

[-- Attachment #2: verizondown.png --]
[-- Type: image/png, Size: 17545 bytes --]


* Re: [Bloat] [e2e] bufferbloat paper
  2013-01-08 10:42 ` [Bloat] [e2e] " Keith Winstein
@ 2013-01-08 12:19   ` Ingemar Johansson S
  2013-01-08 12:44     ` Keith Winstein
  2013-01-09 14:07   ` Michael Richardson
  1 sibling, 1 reply; 43+ messages in thread
From: Ingemar Johansson S @ 2013-01-08 12:19 UTC (permalink / raw)
  To: Keith Winstein; +Cc: mallman, end2end-interest, bloat

Hi

Interesting graph, thanks for sharing it.
It is likely that the delay is limited only by TCP's maximum congestion window: for instance, at T=70 the throughput is ~15 Mbps and the RTT ~0.8 s, giving a congestion window of 1.5e7/8 * 0.8 = 1,500,000 bytes. Recalculation at other time instants seems to give a similar figure.
Do you see any packet loss ?
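
As a quick check of that bandwidth-delay arithmetic (values read from the plot above):

# If the delay is limited only by the sender's congestion window, then
# cwnd = throughput * RTT (the bandwidth-delay product).
throughput_bps = 15e6  # ~15 Mbps at T=70
rtt_s = 0.8            # ~0.8 s observed RTT

cwnd_bytes = throughput_bps / 8 * rtt_s
print(f"implied congestion window: {cwnd_bytes:,.0f} bytes")  # 1,500,000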

The easiest way to mitigate bufferbloat in the LTE UL is AQM in the terminal, as the packets are buffered there.
The eNodeB does not buffer up packets in the UL*, so I would argue that in this particular case the problem is best solved in the terminal.
Implementing AQM for the UL in the eNodeB is probably doable, but AFAIK it is not standardized, and I cannot tell how feasible it is.

/Ingemar

BTW... UL = uplink
* RLC-AM retransmissions can be said to cause delay in the eNodeB, but then again the main problem is that packets are queued up in the terminal's send buffer. The MAC-layer HARQ can also cause some delay, but it is a necessity for optimal LTE performance; moreover, the added delay due to HARQ retransmissions is marginal in this context.

> -----Original Message-----
> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of Keith
> Winstein
> Sent: den 8 januari 2013 11:42
> To: Ingemar Johansson S
> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
> mallman@icir.org
> Subject: Re: [e2e] bufferbloat paper
> 
> I'm sorry to report that the problem is not (in practice) better on LTE, even
> though the standard may support features that could be used to mitigate the
> problem.
> 
> Here is a plot (also at http://web.mit.edu/keithw/www/verizondown.png)
> from a computer tethered to a Samsung Galaxy Nexus running Android
> 4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.
> 
> The phone was stationary during the test and had four bars (a full
> signal) of "4G" service. The computer ran a single full-throttle TCP CUBIC
> download from one well-connected but unremarkable Linux host (ssh
> hostname 'cat /dev/urandom') while pinging at 4 Hz across the same
> tethered LTE interface. There were zero lost pings during the entire test
> (606/606 delivered).
> 
> The RTT grows to 1-2 seconds and stays stable in that region for most of the
> test, except for one 12-second period of >5 seconds RTT. We have also tried
> measuring only "one-way delay" (instead of RTT) by sending UDP datagrams
> out of the computer's Ethernet interface over the Internet, over LTE to the
> cell phone and back to the originating computer via USB tethering. This gives
> similar results to ICMP ping.
> 
> I don't doubt that the carriers could implement reasonable AQM or even a
> smaller buffer at the head-end, or that the phone could implement AQM for
> the uplink. For that matter I'm not sure the details of the air interface (LTE vs.
> UMTS vs. 1xEV-DO) necessarily make a difference here.
> 
> But at present, at least with AT&T, Verizon, Sprint and T-Mobile in Eastern
> Massachusetts, the carrier is willing to queue and hold on to packets for >1
> second. Even a single long-running TCP download (>15
> megabytes) is enough to tickle this problem.
> 
> In the CCR paper, even flows >1 megabyte were almost nonexistent, which
> may be part of how these findings are compatible.
> 
> On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S
> <ingemar.s.johansson@ericsson.com> wrote:
> > Hi
> >
> > I include Mark's original post (below) as it was scrubbed.
> >
> > I don't have any data on bufferbloat for wireline access, and the fiber
> > connection that I have at home shows little evidence of bufferbloat.
> >
> > Wireless access seems to be a different story, though.
> > After reading "Tackling Bufferbloat in 3G/4G Mobile Networks" by
> > Jiang et al., I decided to make a few measurements of my own (I hope
> > that the attached PNG is not removed).
> >
> > The measurement setup was quite simple: a laptop running Ubuntu 12.04
> > with a 3G modem attached.
> > The throughput was computed from the Wireshark logs and the RTT was
> > measured with ping (towards a web server hosted by Akamai). The location
> > is Luleå city centre, Sweden (a fixed location), and the measurement was
> > made at lunchtime on Dec 6, 2012.
> >
> > During the measurement session I did some close-to-normal web surfing,
> > including watching embedded video clips and YouTube. In some cases the
> > effects of bufferbloat were clearly noticeable.
> > I admit that this is just one sample; a more elaborate study with more
> > samples would be interesting to see.
> >
> > 3G has the interesting feature that packets are very seldom lost in the
> > downlink (data going to the terminal). I did not see a single packet loss
> > in this test! I won't elaborate on the reasons in this email.
> > I would, however, expect LTE to be better off in this respect as long as
> > AQM is implemented, mainly because LTE is a packet-switched architecture.
> >
> > /Ingemar
> >
> > Mark's post.
> > ********
> > [I tried to post this in a couple places to ensure I hit folks who
> > would  be interested.  If you end up with multiple copies of the
> > email, my  apologies.  --allman]
> >
> > I know bufferbloat has been an interest of lots of folks recently.
> > So, I thought I'd flog a recent paper that presents a little data on
> > the topic ...
> >
> >     Mark Allman.  Comments on Bufferbloat, ACM SIGCOMM Computer
> >     Communication Review, 43(1), January 2013.
> >     http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
> >
> > It's an initial paper.  I think more data would be great!
> >
> > allman
> >
> >
> > --
> > http://www.icir.org/mallman/
> >
> >
> >
> >


* Re: [Bloat] [e2e] bufferbloat paper
  2013-01-08 12:19   ` Ingemar Johansson S
@ 2013-01-08 12:44     ` Keith Winstein
  2013-01-08 13:19       ` Ingemar Johansson S
  0 siblings, 1 reply; 43+ messages in thread
From: Keith Winstein @ 2013-01-08 12:44 UTC (permalink / raw)
  To: Ingemar Johansson S; +Cc: end2end-interest, bloat

Hello Ingemar,

Thanks for your feedback and your own graph.

This is testing the LTE downlink, not the uplink. It was a TCP download.

There was zero packet loss on the ICMP pings. I did not measure the
TCP flow itself but I suspect packet loss was minimal if not also
zero.

Best,
Keith

On Tue, Jan 8, 2013 at 7:19 AM, Ingemar Johansson S
<ingemar.s.johansson@ericsson.com> wrote:
> Hi
>
> Interesting graph, thanks for sharing it.
> It is likely that the delay is limited only by TCP's maximum congestion window: for instance, at T=70 the throughput is ~15 Mbps and the RTT ~0.8 s, giving a congestion window of 1.5e7/8 * 0.8 = 1,500,000 bytes. Recalculation at other time instants seems to give a similar figure.
> Do you see any packet loss ?
>
> The easiest way to mitigate bufferbloat in the LTE UL is AQM in the terminal, as the packets are buffered there.
> The eNodeB does not buffer up packets in the UL*, so I would argue that in this particular case the problem is best solved in the terminal.
> Implementing AQM for the UL in the eNodeB is probably doable, but AFAIK it is not standardized, and I cannot tell how feasible it is.
>
> /Ingemar
>
> BTW... UL = uplink
> * RLC-AM retransmissions can be said to cause delay in the eNodeB, but then again the main problem is that packets are queued up in the terminal's send buffer. The MAC-layer HARQ can also cause some delay, but it is a necessity for optimal LTE performance; moreover, the added delay due to HARQ retransmissions is marginal in this context.
>
>> -----Original Message-----
>> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of Keith
>> Winstein
>> Sent: den 8 januari 2013 11:42
>> To: Ingemar Johansson S
>> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
>> mallman@icir.org
>> Subject: Re: [e2e] bufferbloat paper
>>
>> I'm sorry to report that the problem is not (in practice) better on LTE, even
>> though the standard may support features that could be used to mitigate the
>> problem.
>>
>> Here is a plot (also at http://web.mit.edu/keithw/www/verizondown.png)
>> from a computer tethered to a Samsung Galaxy Nexus running Android
>> 4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.
>>
>> The phone was stationary during the test and had four bars (a full
>> signal) of "4G" service. The computer ran a single full-throttle TCP CUBIC
>> download from one well-connected but unremarkable Linux host (ssh
>> hostname 'cat /dev/urandom') while pinging at 4 Hz across the same
>> tethered LTE interface. There were zero lost pings during the entire test
>> (606/606 delivered).
>>
>> The RTT grows to 1-2 seconds and stays stable in that region for most of the
>> test, except for one 12-second period of >5 seconds RTT. We have also tried
>> measuring only "one-way delay" (instead of RTT) by sending UDP datagrams
>> out of the computer's Ethernet interface over the Internet, over LTE to the
>> cell phone and back to the originating computer via USB tethering. This gives
>> similar results to ICMP ping.
>>
>> I don't doubt that the carriers could implement reasonable AQM or even a
>> smaller buffer at the head-end, or that the phone could implement AQM for
>> the uplink. For that matter I'm not sure the details of the air interface (LTE vs.
>> UMTS vs. 1xEV-DO) necessarily make a difference here.
>>
>> But at present, at least with AT&T, Verizon, Sprint and T-Mobile in Eastern
>> Massachusetts, the carrier is willing to queue and hold on to packets for >1
>> second. Even a single long-running TCP download (>15
>> megabytes) is enough to tickle this problem.
>>
>> In the CCR paper, even flows >1 megabyte were almost nonexistent, which
>> may be part of how these findings are compatible.
>>
>> On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S
>> <ingemar.s.johansson@ericsson.com> wrote:
>> > Hi
>> >
>> > I include Mark's original post (below) as it was scrubbed.
>> >
>> > I don't have any data on bufferbloat for wireline access, and the fiber
>> > connection that I have at home shows little evidence of bufferbloat.
>> >
>> > Wireless access seems to be a different story, though.
>> > After reading "Tackling Bufferbloat in 3G/4G Mobile Networks" by
>> > Jiang et al., I decided to make a few measurements of my own (I hope
>> > that the attached PNG is not removed).
>> >
>> > The measurement setup was quite simple: a laptop running Ubuntu 12.04
>> > with a 3G modem attached.
>> > The throughput was computed from the Wireshark logs and the RTT was
>> > measured with ping (towards a web server hosted by Akamai). The location
>> > is Luleå city centre, Sweden (a fixed location), and the measurement was
>> > made at lunchtime on Dec 6, 2012.
>> >
>> > During the measurement session I did some close-to-normal web surfing,
>> > including watching embedded video clips and YouTube. In some cases the
>> > effects of bufferbloat were clearly noticeable.
>> > I admit that this is just one sample; a more elaborate study with more
>> > samples would be interesting to see.
>> >
>> > 3G has the interesting feature that packets are very seldom lost in the
>> > downlink (data going to the terminal). I did not see a single packet loss
>> > in this test! I won't elaborate on the reasons in this email.
>> > I would, however, expect LTE to be better off in this respect as long as
>> > AQM is implemented, mainly because LTE is a packet-switched architecture.
>> >
>> > /Ingemar
>> >
>> > Mark's post.
>> > ********
>> > [I tried to post this in a couple places to ensure I hit folks who
>> > would  be interested.  If you end up with multiple copies of the
>> > email, my  apologies.  --allman]
>> >
>> > I know bufferbloat has been an interest of lots of folks recently.
>> > So, I thought I'd flog a recent paper that presents a little data on
>> > the topic ...
>> >
>> >     Mark Allman.  Comments on Bufferbloat, ACM SIGCOMM Computer
>> >     Communication Review, 43(1), January 2013.
>> >     http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>> >
>> > It's an initial paper.  I think more data would be great!
>> >
>> > allman
>> >
>> >
>> > --
>> > http://www.icir.org/mallman/
>> >
>> >
>> >
>> >


* Re: [Bloat] [e2e] bufferbloat paper
  2013-01-08 12:44     ` Keith Winstein
@ 2013-01-08 13:19       ` Ingemar Johansson S
  2013-01-08 15:29         ` dpreed
  0 siblings, 1 reply; 43+ messages in thread
From: Ingemar Johansson S @ 2013-01-08 13:19 UTC (permalink / raw)
  To: Keith Winstein; +Cc: end2end-interest, bloat

OK...

This likely means that AQM is not turned on in the eNodeB; I can't be 100% sure, but it seems so.
At least one company I know of offers AQM in the eNodeB. However, one problem seems to be that the only thing that counts is peak throughput; you have probably seen these "up to X Mbps" slogans too. Competition is fierce, and for this reason it could be tempting to turn off AQM, as it may reduce peak throughput slightly. I know, and most people on these mailing lists know, that peak throughput is the "megapixels" of the Internet; one needs to address other aspects in the benchmarks.

/Ingemar


> -----Original Message-----
> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of Keith
> Winstein
> Sent: den 8 januari 2013 13:44
> To: Ingemar Johansson S
> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
> mallman@icir.org
> Subject: Re: [e2e] bufferbloat paper
> 
> Hello Ingemar,
> 
> Thanks for your feedback and your own graph.
> 
> This is testing the LTE downlink, not the uplink. It was a TCP download.
> 
> There was zero packet loss on the ICMP pings. I did not measure the TCP
> flow itself but I suspect packet loss was minimal if not also zero.
> 
> Best,
> Keith
> 
> On Tue, Jan 8, 2013 at 7:19 AM, Ingemar Johansson S
> <ingemar.s.johansson@ericsson.com> wrote:
> > Hi
> >
> > Interesting graph, thanks for sharing it.
> > It is likely that the delay is limited only by TCP's maximum congestion
> > window: for instance, at T=70 the throughput is ~15 Mbps and the RTT ~0.8 s,
> > giving a congestion window of 1.5e7/8 * 0.8 = 1,500,000 bytes. Recalculation
> > at other time instants seems to give a similar figure.
> > Do you see any packet loss ?
> >
> > The easiest way to mitigate bufferbloat in the LTE UL is AQM in the
> > terminal, as the packets are buffered there.
> > The eNodeB does not buffer up packets in the UL*, so I would argue that in
> > this particular case the problem is best solved in the terminal.
> > Implementing AQM for the UL in the eNodeB is probably doable, but AFAIK it
> > is not standardized, and I cannot tell how feasible it is.
> >
> > /Ingemar
> >
> > BTW... UL = uplink
> > * RLC-AM retransmissions can be said to cause delay in the eNodeB, but
> > then again the main problem is that packets are queued up in the terminal's
> > send buffer. The MAC-layer HARQ can also cause some delay, but it is a
> > necessity for optimal LTE performance; moreover, the added delay due to
> > HARQ retransmissions is marginal in this context.
> >
> >> -----Original Message-----
> >> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of
> >> Keith Winstein
> >> Sent: den 8 januari 2013 11:42
> >> To: Ingemar Johansson S
> >> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
> >> mallman@icir.org
> >> Subject: Re: [e2e] bufferbloat paper
> >>
> >> I'm sorry to report that the problem is not (in practice) better on
> >> LTE, even though the standard may support features that could be used
> >> to mitigate the problem.
> >>
> >> Here is a plot (also at
> >> http://web.mit.edu/keithw/www/verizondown.png)
> >> from a computer tethered to a Samsung Galaxy Nexus running Android
> >> 4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.
> >>
> >> The phone was stationary during the test and had four bars (a full
> >> signal) of "4G" service. The computer ran a single full-throttle TCP
> >> CUBIC download from one well-connected but unremarkable Linux host
> >> (ssh hostname 'cat /dev/urandom') while pinging at 4 Hz across the
> >> same tethered LTE interface. There were zero lost pings during the
> >> entire test
> >> (606/606 delivered).
> >>
> >> The RTT grows to 1-2 seconds and stays stable in that region for most
> >> of the test, except for one 12-second period of >5 seconds RTT. We
> >> have also tried measuring only "one-way delay" (instead of RTT) by
> >> sending UDP datagrams out of the computer's Ethernet interface over
> >> the Internet, over LTE to the cell phone and back to the originating
> >> computer via USB tethering. This gives similar results to ICMP ping.
> >>
> >> I don't doubt that the carriers could implement reasonable AQM or
> >> even a smaller buffer at the head-end, or that the phone could
> >> implement AQM for the uplink. For that matter I'm not sure the details of
> the air interface (LTE vs.
> >> UMTS vs. 1xEV-DO) necessarily make a difference here.
> >>
> >> But at present, at least with AT&T, Verizon, Sprint and T-Mobile in
> >> Eastern Massachusetts, the carrier is willing to queue and hold on to
> >> packets for >1 second. Even a single long-running TCP download (>15
> >> megabytes) is enough to tickle this problem.
> >>
> >> In the CCR paper, even flows >1 megabyte were almost nonexistent,
> >> which may be part of how these findings are compatible.
> >>
> >> On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S
> >> <ingemar.s.johansson@ericsson.com> wrote:
> >> > Hi
> >> >
> >> > I include Mark's original post (below) as it was scrubbed.
> >> >
> >> > I don't have any data on bufferbloat for wireline access, and the fiber
> >> > connection that I have at home shows little evidence of bufferbloat.
> >> >
> >> > Wireless access seems to be a different story, though.
> >> > After reading "Tackling Bufferbloat in 3G/4G Mobile Networks"
> >> > by Jiang et al., I decided to make a few measurements of my own
> >> > (I hope that the attached PNG is not removed).
> >> >
> >> > The measurement setup was quite simple: a laptop running Ubuntu 12.04
> >> > with a 3G modem attached.
> >> > The throughput was computed from the Wireshark logs and the RTT was
> >> > measured with ping (towards a web server hosted by Akamai). The
> >> > location is Luleå city centre, Sweden (a fixed location), and the
> >> > measurement was made at lunchtime on Dec 6, 2012.
> >> >
> >> > During the measurement session I did some close-to-normal web surfing,
> >> > including watching embedded video clips and YouTube. In some cases the
> >> > effects of bufferbloat were clearly noticeable.
> >> > I admit that this is just one sample; a more elaborate study with
> >> > more samples would be interesting to see.
> >> >
> >> > 3G has the interesting feature that packets are very seldom lost in the
> >> > downlink (data going to the terminal). I did not see a single packet
> >> > loss in this test! I won't elaborate on the reasons in this email.
> >> > I would, however, expect LTE to be better off in this respect as
> >> > long as AQM is implemented, mainly because LTE is a packet-switched
> >> > architecture.
> >> >
> >> > /Ingemar
> >> >
> >> > Mark's post.
> >> > ********
> >> > [I tried to post this in a couple places to ensure I hit folks who
> >> > would  be interested.  If you end up with multiple copies of the
> >> > email, my  apologies.  --allman]
> >> >
> >> > I know bufferbloat has been an interest of lots of folks recently.
> >> > So, I thought I'd flog a recent paper that presents a little data
> >> > on the topic ...
> >> >
> >> >     Mark Allman.  Comments on Bufferbloat, ACM SIGCOMM Computer
> >> >     Communication Review, 43(1), January 2013.
> >> >     http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
> >> >
> >> > It's an initial paper.  I think more data would be great!
> >> >
> >> > allman
> >> >
> >> >
> >> > --
> >> > http://www.icir.org/mallman/
> >> >
> >> >
> >> >
> >> >


* Re: [Bloat] [e2e] bufferbloat paper
  2013-01-08  7:35 [Bloat] bufferbloat paper Ingemar Johansson S
  2013-01-08 10:42 ` [Bloat] [e2e] " Keith Winstein
@ 2013-01-08 15:04 ` dpreed
  2013-01-18 22:00 ` [Bloat] " Haiqing Jiang
  2 siblings, 0 replies; 43+ messages in thread
From: dpreed @ 2013-01-08 15:04 UTC (permalink / raw)
  To: Ingemar Johansson S; +Cc: end2end-interest, bloat

[-- Attachment #1: Type: text/plain, Size: 3916 bytes --]


[This mail won't go to "end2end-interest" because I am blocked from posting there, but I leave the address on so that I don't narrow the "reply-to" list for those replying to me. I receive but cannot send there.]
 
Looking at your graph, Ingemar, the problem is in the extreme cases, which are hardly rare.   Note the scale is in *seconds* on RTT.   This correlates with excess buffering creating stable, extremely long queues.  I've been observing this for years on cellular networks - 3G, and now Verizon's deployment of LTE (data collection in progress).
 
Regarding your not experiencing it on wired connections, I can only suggest this - perhaps you don't have any heavy-load traffic sources competing for the bottleneck link.
 
To demonstrate the bad effects of bufferbloat, I'd suggest using the "rrul" test developed by toke@toke.dk.  It simulates the "Daddy, the Internet is broken" scenario - a really heavy upload source while measuring ping time.   I submit that the times I've seen on DOCSIS cable modems are pretty consistently close to a second of latency on the uplink, even when the uplink is 2 Mb/sec or more.
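
A crude approximation of that scenario with stock tools (not the rrul test itself, which drives several concurrent flows in both directions; SERVER is a placeholder running netperf's netserver):

#!/usr/bin/env python3
# Crude approximation of "heavy upload while measuring ping time".
# Not the rrul test itself; SERVER is a placeholder running netserver.
import subprocess

SERVER = "netperf.example.net"  # placeholder

# Saturate the uplink with a 60-second bulk TCP transfer.
upload = subprocess.Popen(
    ["netperf", "-H", SERVER, "-t", "TCP_STREAM", "-l", "60"],
    stdout=subprocess.DEVNULL)

# Measure latency under that load.
subprocess.run(["ping", "-c", "60", SERVER])
upload.wait()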
 
The problem is that the latency due to bufferbloat is not "random" - it is "caused", and it *can* be fixed.
 
The first-order fix is to bound the delay time through the bottleneck buffer to 20 msec or less.  On a high-capacity wireless link, that's appropriate - more would only cause the endpoint TCP to open its window wider and wider.
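
For scale, a 20 msec delay bound caps the standing queue in proportion to the link rate; a quick calculation (the rates below are illustrative, not measurements from this thread):

# A 20 ms delay bound caps the standing queue in proportion to link rate.
DELAY_BOUND_S = 0.020
for rate_mbps in (2, 15, 100):
    cap_bytes = rate_mbps * 1e6 / 8 * DELAY_BOUND_S
    print(f"{rate_mbps:>4} Mbps -> {cap_bytes / 1e3:.1f} kB standing queue, max")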
 
-----Original Message-----
From: "Ingemar Johansson S" <ingemar.s.johansson@ericsson.com>
Sent: Tuesday, January 8, 2013 2:35am
To: "end2end-interest@postel.org" <end2end-interest@postel.org>, "bloat@lists.bufferbloat.net" <bloat@lists.bufferbloat.net>
Cc: "mallman@icir.org" <mallman@icir.org>
Subject: Re: [e2e] bufferbloat paper



Hi

I include Mark's original post (below) as it was scrubbed.

I don't have any data on bufferbloat for wireline access, and the fiber connection that I have at home shows little evidence of bufferbloat.

Wireless access seems to be a different story, though.
After reading "Tackling Bufferbloat in 3G/4G Mobile Networks" by Jiang et al., I decided to make a few measurements of my own (I hope that the attached PNG is not removed).

The measurement setup was quite simple: a laptop running Ubuntu 12.04 with a 3G modem attached.
The throughput was computed from the Wireshark logs and the RTT was measured with ping (towards a web server hosted by Akamai). The location is Luleå city centre, Sweden (a fixed location), and the measurement was made at lunchtime on Dec 6, 2012.

During the measurement session I did some close-to-normal web surfing, including watching embedded video clips and YouTube. In some cases the effects of bufferbloat were clearly noticeable.
I admit that this is just one sample; a more elaborate study with more samples would be interesting to see.

3G has the interesting feature that packets are very seldom lost in the downlink (data going to the terminal). I did not see a single packet loss in this test! I won't elaborate on the reasons in this email.
I would, however, expect LTE to be better off in this respect as long as AQM is implemented, mainly because LTE is a packet-switched architecture.

/Ingemar

Mark's post.
********
[I tried to post this in a couple places to ensure I hit folks who would
 be interested.  If you end up with multiple copies of the email, my
 apologies.  --allman]

I know bufferbloat has been an interest of lots of folks recently.  So,
I thought I'd flog a recent paper that presents a little data on the
topic ...

 Mark Allman.  Comments on Bufferbloat, ACM SIGCOMM Computer
 Communication Review, 43(1), January 2013.
 http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf

It's an initial paper.  I think more data would be great!

allman


--
http://www.icir.org/mallman/





[-- Attachment #2: Type: text/html, Size: 4795 bytes --]


* Re: [Bloat] [e2e] bufferbloat paper
  2013-01-08 13:19       ` Ingemar Johansson S
@ 2013-01-08 15:29         ` dpreed
  2013-01-08 16:40           ` Mark Allman
  0 siblings, 1 reply; 43+ messages in thread
From: dpreed @ 2013-01-08 15:29 UTC (permalink / raw)
  To: Ingemar Johansson S; +Cc: Keith Winstein, bloat, end2end-interest

[-- Attachment #1: Type: text/plain, Size: 11026 bytes --]


Re: "the only thing that counts is peak throughput" - it's a pretty cynical stance to say "I'm a professional engineer, but the marketing guys don't have a clue, so I'm not going to build a usable system".
 
It's even worse when fellow engineers *disparage* or downplay the work of engineers who are actually trying hard to fix this across the entire Internet.
 
Does competition require such foolishness?   Have any of the folks who work for operators and equipment suppliers followed Richard Woundy's lead (he is SVP at Comcast) and tried to *fix* the problem and get the fix deployed?  Richard is an engineer, and took the time to develop a proposed fix to DOCSIS 3.0, and also to write a "best practices" document about how to deploy that fix.  The one thing he could not do is get Comcast or its competitors to invest money in deploying the fix more rapidly.
 
First, it's important to measure the "right thing" - which in this case is "how much queueing *delay* builds up in the bottleneck link under load" and how bad is the user experience when that queueing delay stabilizes at more than about 20 msec.
 
That cannot be determined by measuring throughput, which is all the operators measure.  (I have the sworn testimony of every provider in Canada: when asked by the CRTC "do you measure latency on your Internet service?", the answer was uniformly "we measure throughput *only*, and by Little's Lemma we can determine latency".)
 
Engineers actually have a positive duty to society, not just to profits.  And actually, in this case, better service *would* lead to more profits!  Not directly, but because there is competition for experience, even more than for "bitrate", despite the claims of engineers.
 
So talk to your CEO's.  When I've done so, they say they have *never* heard of the issue.  Maybe that's due to denial throughout the organization.
 
(by the way, what woke Comcast up was getting hauled in front of the FCC for deploying DPI-based RST injection that disrupted large classes of connections - because they had not realized what their problem was, and the marketers wanted to blame "pirates" for clogging the circuits - for which claim they had no data other than self-serving and proprietary "studies" from the vendors like Sandvine and Ellacoya).
 
Actual measurements of actual network behavior revealed the bufferbloat phenomenon was the cause of disruptive events due to load in *every* case observed by me, and I've looked at a lot.  It used to happen on Frame Relay links all the time, and in datacenter TCP/IP internal deployments.
 
So measure first.  Measure the right thing (latency growth under load).  Ask "why is this happening?" and don't jump to the non sequitur (pirates or "interference") without proving that the non sequitur actually explains the entire phenomenon (something Comcast failed to do, instead reasoning from anecdotal links between BitTorrent and the problem).
 
And then when your measurements are right, and you can demonstrate a solution that *works* (rather than something that in academia would be an "interesting Ph.D. proposal"), then deploy it and monitor it.
 
 
 
-----Original Message-----
From: "Ingemar Johansson S" <ingemar.s.johansson@ericsson.com>
Sent: Tuesday, January 8, 2013 8:19am
To: "Keith Winstein" <keithw@mit.edu>
Cc: "mallman@icir.org" <mallman@icir.org>, "end2end-interest@postel.org" <end2end-interest@postel.org>, "bloat@lists.bufferbloat.net" <bloat@lists.bufferbloat.net>
Subject: Re: [e2e] bufferbloat paper



OK...

This likely means that AQM is not turned on in the eNodeB; I can't be 100% sure, but it seems so.
At least one company I know of offers AQM in the eNodeB. However, one problem seems to be that the only thing that counts is peak throughput; you have probably seen these "up to X Mbps" slogans too. Competition is fierce, and for this reason it could be tempting to turn off AQM, as it may reduce peak throughput slightly. I know, and most people on these mailing lists know, that peak throughput is the "megapixels" of the Internet; one needs to address other aspects in the benchmarks.

/Ingemar


> -----Original Message-----
> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of Keith
> Winstein
> Sent: den 8 januari 2013 13:44
> To: Ingemar Johansson S
> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
> mallman@icir.org
> Subject: Re: [e2e] bufferbloat paper
> 
> Hello Ingemar,
> 
> Thanks for your feedback and your own graph.
> 
> This is testing the LTE downlink, not the uplink. It was a TCP download.
> 
> There was zero packet loss on the ICMP pings. I did not measure the TCP
> flow itself but I suspect packet loss was minimal if not also zero.
> 
> Best,
> Keith
> 
> On Tue, Jan 8, 2013 at 7:19 AM, Ingemar Johansson S
> <ingemar.s.johansson@ericsson.com> wrote:
> > Hi
> >
> > Interesting graph, thanks for sharing it.
> > It is likely that the delay is limited only by TCP's maximum congestion
> > window: for instance, at T=70 the throughput is ~15 Mbps and the RTT ~0.8 s,
> > giving a congestion window of 1.5e7/8 * 0.8 = 1,500,000 bytes. Recalculation
> > at other time instants seems to give a similar figure.
> > Do you see any packet loss ?
> >
> > The easiest way to mitigate bufferbloat in the LTE UL is AQM in the
> > terminal, as the packets are buffered there.
> > The eNodeB does not buffer up packets in the UL*, so I would argue that in
> > this particular case the problem is best solved in the terminal.
> > Implementing AQM for the UL in the eNodeB is probably doable, but AFAIK it
> > is not standardized, and I cannot tell how feasible it is.
> >
> > /Ingemar
> >
> > BTW... UL = uplink
> > * RLC-AM retransmissions can be said to cause delay in the eNodeB, but
> > then again the main problem is that packets are queued up in the terminal's
> > send buffer. The MAC-layer HARQ can also cause some delay, but it is a
> > necessity for optimal LTE performance; moreover, the added delay due to
> > HARQ retransmissions is marginal in this context.
> >
> >> -----Original Message-----
> >> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of
> >> Keith Winstein
> >> Sent: den 8 januari 2013 11:42
> >> To: Ingemar Johansson S
> >> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
> >> mallman@icir.org
> >> Subject: Re: [e2e] bufferbloat paper
> >>
> >> I'm sorry to report that the problem is not (in practice) better on
> >> LTE, even though the standard may support features that could be used
> >> to mitigate the problem.
> >>
> >> Here is a plot (also at
> >> http://web.mit.edu/keithw/www/verizondown.png)
> >> from a computer tethered to a Samsung Galaxy Nexus running Android
> >> 4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.
> >>
> >> The phone was stationary during the test and had four bars (a full
> >> signal) of "4G" service. The computer ran a single full-throttle TCP
> >> CUBIC download from one well-connected but unremarkable Linux host
> >> (ssh hostname 'cat /dev/urandom') while pinging at 4 Hz across the
> >> same tethered LTE interface. There were zero lost pings during the
> >> entire test
> >> (606/606 delivered).
> >>
> >> The RTT grows to 1-2 seconds and stays stable in that region for most
> >> of the test, except for one 12-second period of >5 seconds RTT. We
> >> have also tried measuring only "one-way delay" (instead of RTT) by
> >> sending UDP datagrams out of the computer's Ethernet interface over
> >> the Internet, over LTE to the cell phone and back to the originating
> >> computer via USB tethering. This gives similar results to ICMP ping.
> >>
> >> I don't doubt that the carriers could implement reasonable AQM or
> >> even a smaller buffer at the head-end, or that the phone could
> >> implement AQM for the uplink. For that matter I'm not sure the details of
> the air interface (LTE vs.
> >> UMTS vs. 1xEV-DO) necessarily make a difference here.
> >>
> >> But at present, at least with AT&T, Verizon, Sprint and T-Mobile in
> >> Eastern Massachusetts, the carrier is willing to queue and hold on to
> >> packets for >1 second. Even a single long-running TCP download (>15
> >> megabytes) is enough to tickle this problem.
> >>
> >> In the CCR paper, even flows >1 megabyte were almost nonexistent,
> >> which may be part of how these findings are compatible.
> >>
> >> On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S
> >> <ingemar.s.johansson@ericsson.com> wrote:
> >> > Hi
> >> >
> >> > I include Mark's original post (below) as it was scrubbed.
> >> >
> >> > I don't have any data on bufferbloat for wireline access, and the fiber
> >> > connection that I have at home shows little evidence of bufferbloat.
> >> >
> >> > Wireless access seems to be a different story, though.
> >> > After reading "Tackling Bufferbloat in 3G/4G Mobile Networks"
> >> > by Jiang et al., I decided to make a few measurements of my own
> >> > (I hope that the attached PNG is not removed).
> >> >
> >> > The measurement setup was quite simple: a laptop running Ubuntu 12.04
> >> > with a 3G modem attached.
> >> > The throughput was computed from the Wireshark logs and the RTT was
> >> > measured with ping (towards a web server hosted by Akamai). The
> >> > location is Luleå city centre, Sweden (a fixed location), and the
> >> > measurement was made at lunchtime on Dec 6, 2012.
> >> >
> >> > During the measurement session I did some close-to-normal web surfing,
> >> > including watching embedded video clips and YouTube. In some cases the
> >> > effects of bufferbloat were clearly noticeable.
> >> > I admit that this is just one sample; a more elaborate study with
> >> > more samples would be interesting to see.
> >> >
> >> > 3G has the interesting feature that packets are very seldom lost in the
> >> > downlink (data going to the terminal). I did not see a single packet
> >> > loss in this test! I won't elaborate on the reasons in this email.
> >> > I would, however, expect LTE to be better off in this respect as
> >> > long as AQM is implemented, mainly because LTE is a packet-switched
> >> > architecture.
> >> >
> >> > /Ingemar
> >> >
> >> > Mark's post.
> >> > ********
> >> > [I tried to post this in a couple places to ensure I hit folks who
> >> > would  be interested.  If you end up with multiple copies of the
> >> > email, my  apologies.  --allman]
> >> >
> >> > I know bufferbloat has been an interest of lots of folks recently.
> >> > So, I thought I'd flog a recent paper that presents a little data
> >> > on the topic ...
> >> >
> >> >     Mark Allman.  Comments on Bufferbloat, ACM SIGCOMM Computer
> >> >     Communication Review, 43(1), January 2013.
> >> >     http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
> >> >
> >> > It's an initial paper.  I think more data would be great!
> >> >
> >> > allman
> >> >
> >> >
> >> > --
> >> > http://www.icir.org/mallman/
> >> >
> >> >
> >> >
> >> >


[-- Attachment #2: Type: text/html, Size: 14232 bytes --]


* Re: [Bloat] [e2e] bufferbloat paper
  2013-01-08 15:29         ` dpreed
@ 2013-01-08 16:40           ` Mark Allman
  0 siblings, 0 replies; 43+ messages in thread
From: Mark Allman @ 2013-01-08 16:40 UTC (permalink / raw)
  To: dpreed; +Cc: Ingemar Johansson S, Keith Winstein, end2end-interest, bloat

[-- Attachment #1: Type: text/plain, Size: 1416 bytes --]


David-

I completely agree with your "measure it" notion.  That is one of the
points of my paper.

> First, it's important to measure the "right thing" - which in this
> case is "how much queueing *delay* builds up in the bottleneck link
> under load"

That said, as is often the case, there is no "right thing", but rather a
number of "right thingS" to measure.  And/or, when you go off to measure
the "right thing" there are additional facets that come to light that
one must understand.  

Certainly understanding how much possible delay can build up in a queue
is a worthwhile thing to measure.  There are good results of this in the
Netalyzr paper.  Further, as Injong mentioned, there are results of this
potential queue buildup from mobile networks in his recent IMC paper.

However, the point of my contribution is that if this queue buildup
happens only in the middle of the night on the second Sunday of every
leap year for 1.5 sec, that'd be good to know and probably it'd make the
behavior something to not specifically engineer around in a best effort
network.  So, then, let's ask about the scale and scope of this behavior
and answer it with systematic, empirical assessment instead of anecdotes
and extrapolation.  Surely that is not a controversial position.

My results are modest at best.  Other people have different (probably
better) vantage points.  They should look at some data, too.

allman




[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]


* Re: [Bloat] [e2e] bufferbloat paper
  2013-01-08 10:42 ` [Bloat] [e2e] " Keith Winstein
  2013-01-08 12:19   ` Ingemar Johansson S
@ 2013-01-09 14:07   ` Michael Richardson
  2013-01-10  7:37     ` Keith Winstein
  1 sibling, 1 reply; 43+ messages in thread
From: Michael Richardson @ 2013-01-09 14:07 UTC (permalink / raw)
  To: Keith Winstein; +Cc: bloat, end2end-interest


Keith, thank you for this work.

I don't know enough about how LTE towers buffer things, so here is my question.

Have you considered repeating your test with two phones?
Can the download on phone1 affect the latency seen by a second phone?

Obviously the phones should be located right next to each other, with
some verification that they are actually associated to the same tower.

-- 
]               Never tell me the odds!                 | ipv6 mesh networks [ 
]   Michael Richardson, Sandelman Software Works        | network architect  [ 
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [ 
	


* Re: [Bloat] [e2e] bufferbloat paper
  2013-01-09 14:07   ` Michael Richardson
@ 2013-01-10  7:37     ` Keith Winstein
  2013-01-10 13:46       ` Michael Richardson
  0 siblings, 1 reply; 43+ messages in thread
From: Keith Winstein @ 2013-01-10  7:37 UTC (permalink / raw)
  To: Michael Richardson; +Cc: bloat, end2end-interest

Hello Michael,

On Wed, Jan 9, 2013 at 9:07 AM, Michael Richardson <mcr@sandelman.ca> wrote:
> Have you considered repeating your test with two phones?

Yes, we have tried up to four phones at the same time.

> Can the download on phone1 affect the latency seen by a second phone?

In our experience, a download on phone1 will not affect the unloaded
latency seen by phone2. The cell towers appear to use a per-UE
(per-phone) queue on uplink and downlink. (This is similar to what a
commodity cable modem user sees -- I don't get long delays just
because my neighbor is saturating his uplink or downlink and causing a
standing queue for himself.)

However, a download on phone1 can affect the average throughput seen
by phone2 when it is saturating its link, suggesting that the two
phones are contending for the same limited resource (timeslices and
OFDM resource blocks, or possibly just backhaul throughput).

> Obviously the phones should be located right next to each other, with
> some verification that they are actually associated to the same tower.

This is harder than we thought it would be -- the phones have a
tendency to wander around rapidly among cell IDs (sometimes switching
several times in a minute). We're not sure if the different cell IDs
really represent different towers (we doubt it) or maybe just
different LTE channels or logical channels. I understand in LTE it is
possible for multiple towers to cooperate to receive one packet, so
the story may be more complicated.

In practice it is possible to get four phones to "hold still" on the
same cell ID for five minutes to do a test, but it is a bit like
herding cats and requires some careful placement and luck.

Best regards,
Keith


* Re: [Bloat] [e2e] bufferbloat paper
  2013-01-10  7:37     ` Keith Winstein
@ 2013-01-10 13:46       ` Michael Richardson
  0 siblings, 0 replies; 43+ messages in thread
From: Michael Richardson @ 2013-01-10 13:46 UTC (permalink / raw)
  To: Keith Winstein; +Cc: bloat, end2end-interest


Thanks for the reply, comments below.

>>>>> "Keith" == Keith Winstein <keithw@mit.edu> writes:
    >> Have you considered repeating your test with two phones?

    Keith> Yes, we have tried up to four phones at the same time.

    >> Can the download on phone1 affect the latency seen by a second phone?

    Keith> In our experience, a download on phone1 will not affect the unloaded
    Keith> latency seen by phone2. The cell towers appear to use a per-UE
    Keith> (per-phone) queue on uplink and downlink. (This is similar to what a
    Keith> commodity cable modem user sees -- I don't get long delays just
    Keith> because my neighbor is saturating his uplink or downlink and
    Keith> causing a 
    Keith> standing queue for himself.)

This is good news of a sort.
It means that there is no shared xmit queue on the tower, and that the
4G/LTE/whatever-they-call-it-today business of moving voice to VoIP is
going to do okay.   
The question then becomes: how to get all of one's low-latency traffic
onto that second channel!

-- 
]               Never tell me the odds!                 | ipv6 mesh networks [ 
]   Michael Richardson, Sandelman Software Works        | network architect  [ 
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [ 
	


* Re: [Bloat] bufferbloat paper
  2013-01-08  7:35 [Bloat] bufferbloat paper Ingemar Johansson S
  2013-01-08 10:42 ` [Bloat] [e2e] " Keith Winstein
  2013-01-08 15:04 ` dpreed
@ 2013-01-18 22:00 ` Haiqing Jiang
  2 siblings, 0 replies; 43+ messages in thread
From: Haiqing Jiang @ 2013-01-18 22:00 UTC (permalink / raw)
  To: Ingemar Johansson S; +Cc: end2end-interest, bloat

[-- Attachment #1: Type: text/plain, Size: 3310 bytes --]

Hi

I am really happy to know that you are verifying the problem I pointed out
in my paper. In my opinion, it is quite urgent to pay more attention to
bufferbloat in cellular networks.

But because of the lack of contacts inside the carriers (AT&T, Verizon,
etc.), in my work I still found it hard to answer some fundamental
questions: 1) where exactly the buffers are; 2) how the buffers build up
while interacting with the LTE/HSPA/EVDO protocols; and 3) how commonly,
in large-scale daily usage, the problem degrades the user experience. I
hope to see deeper discussion of all these problems on this mailing
list. Thanks.

Best,
Haiqing Jiang

On Mon, Jan 7, 2013 at 11:35 PM, Ingemar Johansson S <
ingemar.s.johansson@ericsson.com> wrote:

> Hi
>
> I include Mark's original post (below) as it was scrubbed.
>
> I don't have any data on bufferbloat for wireline access, and the fiber
> connection that I have at home shows little evidence of bufferbloat.
>
> Wireless access seems to be a different story, though.
> After reading "Tackling Bufferbloat in 3G/4G Mobile Networks" by Jiang
> et al., I decided to make a few measurements of my own (I hope that the
> attached PNG is not removed).
>
> The measurement setup was quite simple: a laptop running Ubuntu 12.04 with
> a 3G modem attached.
> The throughput was computed from the Wireshark logs and the RTT was
> measured with ping (towards a web server hosted by Akamai). The location
> is Luleå city centre, Sweden (a fixed location), and the measurement was
> made at lunchtime on Dec 6, 2012.
>
> During the measurement session I did some close-to-normal web surfing,
> including watching embedded video clips and YouTube. In some cases the
> effects of bufferbloat were clearly noticeable.
> I admit that this is just one sample; a more elaborate study with more
> samples would be interesting to see.
>
> 3G has the interesting feature that packets are very seldom lost in the
> downlink (data going to the terminal). I did not see a single packet loss
> in this test! I won't elaborate on the reasons in this email.
> I would, however, expect LTE to be better off in this respect as long as
> AQM is implemented, mainly because LTE is a packet-switched architecture.
>
> /Ingemar
>
> Mark's post.
> ********
> [I tried to post this in a couple places to ensure I hit folks who would
>  be interested.  If you end up with multiple copies of the email, my
>  apologies.  --allman]
>
> I know bufferbloat has been an interest of lots of folks recently.  So,
> I thought I'd flog a recent paper that presents a little data on the
> topic ...
>
>     Mark Allman.  Comments on Bufferbloat, ACM SIGCOMM Computer
>     Communication Review, 43(1), January 2013.
>     http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>
> It's an initial paper.  I think more data would be great!
>
> allman
>
>
> --
> http://www.icir.org/mallman/
>
>
>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>


-- 
-----------------------------------
Haiqing Jiang,
Computer Science Department, North Carolina State University
Homepage:  https://sites.google.com/site/hqjiang1988/

[-- Attachment #2: Type: text/html, Size: 4170 bytes --]


* Re: [Bloat] Bufferbloat Paper
  2013-01-09  5:02   ` David Lang
@ 2013-01-18  1:23     ` grenville armitage
  0 siblings, 0 replies; 43+ messages in thread
From: grenville armitage @ 2013-01-18  1:23 UTC (permalink / raw)
  To: bloat



On 01/09/2013 16:02, David Lang wrote:
	[...]
> I really like the idea of trying to measure latency by sniffing the network and watching the time for responses.

Probably tangential, but http://caia.swin.edu.au/tools/spp/ has proven useful to our group for measuring RTT between two arbitrary packet capture points, for symmetric or asymmetric UDP or TCP traffic flows.

Even more tangential, http://dx.doi.org/10.1109/LCN.2005.101 ("Passive TCP Stream Estimation of RTT and Jitter Parameters") might be an interesting algorithm to implement for estimating RTT of TCP flows seen at a single capture point. (This 2005 paper describes an extension of a technique used by tstat at the time.)

cheers,
gja



* Re: [Bloat] Bufferbloat Paper
  2013-01-09 20:31           ` Michael Richardson
@ 2013-01-10 18:05             ` Mark Allman
  0 siblings, 0 replies; 43+ messages in thread
From: Mark Allman @ 2013-01-10 18:05 UTC (permalink / raw)
  To: Michael Richardson; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1116 bytes --]


> >>>>> "Mark" == Mark Allman <mallman@icir.org> writes:
>     >> 1) do you max out your 1Gb/s uplink at all?
> 
>     Mark> No.  I do not believe we have seen peaks anywhere close to 1Gbps.
> 
> nice.. what amount of oversubscription does this represent?

Um, something like "a metric shitload". :-)

The overall FTTH experiment is basically posing the question "what could
we use networks for if we took away the capacity limits?".

> What is the layer-3 architecture for the CCZ?  Does traffic between
> residences come through the "head end" the way it does in cable, or
> does it cut-across at layer-2, and perhaps, you can not see it?

The homes run into a switch.  Traffic between homes is taken care of
without going further.  There is a 1Gbps link out of the switch to the
ISP.  We mirror the port that connects the switch to the outside world.
So, we have no visibility of traffic that stays within the CCZ.

> Have you considered isolating the data samples which are from CCZ and
> to CCZ, and then tried to predict which flows might involved 802.11b
> or g final hops?

We have not done that.

allman




[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]


* Re: [Bloat] Bufferbloat Paper
  2013-01-09  0:03       ` David Lang
@ 2013-01-10 13:01         ` Mark Allman
  0 siblings, 0 replies; 43+ messages in thread
From: Mark Allman @ 2013-01-10 13:01 UTC (permalink / raw)
  To: David Lang; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 4648 bytes --]


> > (2) The network I am monitoring looks like this ...
> >
> >        LEH -> IHR -> SW -> Internet -> REH
> >
> >     where, "LEH" is the local end host and "IHR" is the in-home
> >     router provided by the FTTH project.  The connection between the
> >     LEH and the IHR can either be wired (at up to 1Gbps) or wireless
> >     (at much less than 1Gbps, but I forget the actual wireless
> >     technology used on the IHR).  The IHRs are all run into a switch
> >     (SW) at 1Gbps.  The switch connects to the Internet via a 1Gbps
> >     link (so, this is a theoretical bottleneck right here ...).  The
> >     "REH" is the remote end host.  We monitor via mirroring on SW.
> >
> >     The delay we measure is from SW to REH and back.  
> 
>  The issue is that if the home user has a 1G uplink to you, and then
>  you have a 1G uplink to the Internet, there is not going to be very
>  much if any congestion in place. The only place where you are going
>  to have any buffering is in your 1G uplink to the Internet (and only
>  if there is enough traffic to cause congestion here)
> 
>  In the 'normal' residential situation, the LEH -> THR connection is
>  probably 1G if wired, but the THR -> SW connection is likely to be
>  <1M. Therefore the THR ends up buffering the outbound traffic.

(I assume 'THR' is what I called 'IHR'.)

You are too focused on the local side of the network that produced the
traffic and you are not understanding what was actually measured.  As I
say above, the delays are measured from SW to REH and back.  That *does
not* involve the LEH or the IHR in any way.  Look at the picture and
think about it for a minute.  Read my email.  Read the email from
others who have also tried to clarify the issue.

Let me try one more time to be as absolutely explicit as I can be.  Say,
...

  - LEH sends a data packet D that is destined for REH at time t0.

  - D is forwarded by IHR at time t1.

  - D is both forwarded by SW and recorded in my packet trace at time
    t2.

  - D traverses the wide area Internet and arrives at REH (which is
    whatever LEH happens to be talking to; not something I control or
    can monitor) at time t3.

  - At time t4 the REH will transmit an ACK A for data packet D.

  - A will go back across the wide-area Internet and eventually hit SW
    at time t5.  A will be both forwarded to IHR and recorded in my
    packet trace at this time.

  - A will be forwarded by IHR at time t6.

  - A will arrive at LEH at time t7.

The RTT sample I will take from this exchange is t5-t2.  Your discussion
focuses on t7-t5 (the downlink) and t2-t0 (the uplink).  In other
words, you are talking about something different from what is presented
in the paper.  If you want to comment on the paper, that is fine.  But,
you should comment on what the paper says or what the paper is lacking
and not try to distort what the paper presents.
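
A minimal sketch of this t5-t2 matching, in Python, assuming a toy
single-flow trace with no loss or reordering (the real analysis used a
non-public Bro extension, not this code):

    def rtt_samples(events):
        # events: (time, kind, value) seen at SW, where kind "data"
        # carries (seq, payload_len) and kind "ack" carries an ack no.
        pending = {}   # ack number that covers a data packet -> its t2
        samples = []
        for t, kind, value in events:
            if kind == "data":
                seq, length = value
                pending.setdefault(seq + length, t)
            else:  # cumulative ACK at time t5 covers all ends <= value
                for end in [e for e in pending if e <= value]:
                    samples.append(t - pending.pop(end))   # t5 - t2
        return samples

    trace = [(0.00, "data", (1000, 1448)),   # D forwarded by SW at t2
             (0.08, "ack", 2448)]            # A hits SW at t5
    print(rtt_samples(trace))                # -> [0.08]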

I fully understand that these FTTH links to the Internet are abnormal.
And, as such, if I were looking for buffers on the FTTH side of things
that'd be bogus.  But, I am not doing that.  Regardless of how many
times you say it.  The measurements are not of 90 homes, but of 118K
remote peers that the 90 homes happened to communicate with (and the
networks used to reach those 118K peers).  If it helps, you can think of
the 90 homes as just 90 homes connected to the Internet by whatever
technology (DSL, cable, fiber, wireless ...).  The measurements are not
concerned with the networks inside, or connecting, these 90 homes.

Look, I could complain about my own paper all day long and twice as much
on Sunday.  There are plenty of ways it is lacking.  Others could no
doubt do the same.  I have tried hard to use the right perspective and
to say what this data does and does not show.  I have done that on this
list and in the paper itself.  E.g., these are the first two bullets in
the Future Work section:

\item Bringing datasets from additional vantage points to bear on
  the questions surrounding bufferbloat is unquestionably useful.
  While we study bufferbloat related to 118K peers for some modest
  period of time (up to one week), following more peers and over the
  course of a longer period would be useful.

\item While we are able to assess 118K peers, we are only able to do
  so opportunistically when a host on the network we monitor
  communicates with those peers.  A vantage point that provides a
  more comprehensive view of residential peers' behavior would be
  useful.

So, complain away if you'd like.  I don't mind at all.  But, at least
complain about what is in the paper and what is actually measured.
Please. 

allman




[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-09 20:19         ` Mark Allman
@ 2013-01-09 20:31           ` Michael Richardson
  2013-01-10 18:05             ` Mark Allman
  0 siblings, 1 reply; 43+ messages in thread
From: Michael Richardson @ 2013-01-09 20:31 UTC (permalink / raw)
  To: mallman; +Cc: bloat


>>>>> "Mark" == Mark Allman <mallman@icir.org> writes:
    >> 1) do you max out your 1Gb/s uplink at all?

    Mark> No.  I do not believe we have seen peaks anywhere close to 1Gbps.

nice.. what amount of oversubscription does this represent?

What is the layer-3 architecture for the CCZ?  Does traffic between
residences come through the "head end" the way it does in cable, or does
it cut-across at layer-2, and perhaps, you can not see it?

Assuming that you can see it..

Have you considered isolating the data samples which are from CCZ and to
CCZ, and then tried to predict which flows might involve 802.11b or g
final hops?

You mentioned that CCZ provides a home router... do you know anything
about that?

-- 
]               Never tell me the odds!                 | ipv6 mesh networks [ 
]   Michael Richardson, Sandelman Software Works        | network architect  [ 
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [ 
	

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-09 20:14       ` Michael Richardson
@ 2013-01-09 20:19         ` Mark Allman
  2013-01-09 20:31           ` Michael Richardson
  0 siblings, 1 reply; 43+ messages in thread
From: Mark Allman @ 2013-01-09 20:19 UTC (permalink / raw)
  To: Michael Richardson; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 997 bytes --]


> >>>>> "Mark" == Mark Allman <mallman@icir.org> writes:
>     Mark> less than 1Gbps, but I forget the actual wireless technology
>     Mark> used on 
>     Mark> the IHR).  The IHRs are all run into a switch (SW) at 1Gbps.  The
>     Mark> switch connects to the Internet via a 1Gbps link (so, this is a
>     Mark> theoretical bottleneck right here ...).  The "REH" is the
>     Mark> remote end 
>     Mark> host.  We monitor via mirroring on SW.
> 
> 1) do you max out your 1Gb/s uplink at all?

No.  I do not believe we have seen peaks anywhere close to 1Gbps.

> 2) have you investigated bufferbloat on that port of the switch?
>    (and do you have congestion issues on your mirror port?
>    I guess that the point of the loss analysis...)

Correct - that is the point of the measurement loss analysis.  We
believe we are losing very few packets during the measurement process.
Therefore, we believe the traces to be a faithful representation of what
has happened on-the-wire.

allman




[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-09 20:05 ` Michael Richardson
@ 2013-01-09 20:14   ` Mark Allman
  0 siblings, 0 replies; 43+ messages in thread
From: Mark Allman @ 2013-01-09 20:14 UTC (permalink / raw)
  To: Michael Richardson; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 621 bytes --]


> It seems to me, that for a given host pair, there is some RTT
> theoretical minimum, which represents a completely empty network, and
> perhaps one can observe something close to it in the samples, and maybe
> a periodic ICMP ping would have been in order, particularly when the
> host pair was not observed to have any traffic flowing.  
> 
> The question of queuing delay then can be answered by how much higher
> the RTT is over some minimum. 

The solid lines in the plots in figure 1 are the minimums.  The lines on
figure 2 represent the difference between the samples and the
corresponding minimum.

allman




[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08 13:55     ` Mark Allman
  2013-01-09  0:03       ` David Lang
@ 2013-01-09 20:14       ` Michael Richardson
  2013-01-09 20:19         ` Mark Allman
  1 sibling, 1 reply; 43+ messages in thread
From: Michael Richardson @ 2013-01-09 20:14 UTC (permalink / raw)
  To: mallman; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1658 bytes --]


>>>>> "Mark" == Mark Allman <mallman@icir.org> writes:
    Mark> less than 1Gbps, but I forget the actual wireless technology used on
    Mark> the IHR).  The IHRs are all run into a switch (SW) at 1Gbps.  The
    Mark> switch connects to the Internet via a 1Gbps link (so, this is a
    Mark> theoretical bottleneck right here ...).  The "REH" is the remote end
    Mark> host.  We monitor via mirroring on SW.

1) do you max out your 1Gb/s uplink at all?
2) have you investigated bufferbloat on that port of the switch?
   (and do you have congestion issues on your mirror port?
   I guess that the point of the loss analysis...)

    Mark> (3) This data is not ideal.  Ideally I'd like to directly
    Mark> measure queues 
    Mark> in a bazillion places.  That'd be fabulous.  But, I am working with
    Mark> what I have.  I have traces that offer windows into the actual queue
    Mark> occupancy when the local users I monitor engage particular remote
    Mark> endpoints.  Is this representative of the delays I'd find when the
    Mark> local users are not engaging the remote end system?  I have no
    Mark> idea.  I'd certainly like to know.  But, the data doesn't tell me.
    Mark> I am reporting what I have.  It is something.  And, it is more than
    Mark> I have seen reported anywhere else.  Folks should go collect more
    Mark> data.

Thank you for this.

-- 
]               Never tell me the odds!                 | ipv6 mesh networks [ 
]   Michael Richardson, Sandelman Software Works        | network architect  [ 
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [ 
	

[-- Attachment #2: Type: application/pgp-signature, Size: 307 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08  2:24     ` David Lang
@ 2013-01-09 20:08       ` Michael Richardson
  0 siblings, 0 replies; 43+ messages in thread
From: Michael Richardson @ 2013-01-09 20:08 UTC (permalink / raw)
  To: bloat

[-- Attachment #1: Type: text/plain, Size: 773 bytes --]


>>>>> "David" == David Lang <david@lang.hm> writes:
    David> typical conditions are

    David> 1G            1M
    David> desktop -----firewall ---- Internet

Agreed.
But, they are trying to measure:

         1G            1G            ?        7Mb/s           100M
desktop -----firewall ---- Internet ---- ISP ------- firewall ---- peer
                       ^^- sampling point

As I said, he is measuring at the location he can.
This is akin to measuring just on the outside of www.google.com.

-- 
]               Never tell me the odds!                 | ipv6 mesh networks [ 
]   Michael Richardson, Sandelman Software Works        | network architect  [ 
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [ 
	

[-- Attachment #2: Type: application/pgp-signature, Size: 307 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-07 23:37 [Bloat] Bufferbloat Paper Hagen Paul Pfeifer
  2013-01-08  0:33 ` Dave Taht
  2013-01-08  1:54 ` Stephen Hemminger
@ 2013-01-09 20:05 ` Michael Richardson
  2013-01-09 20:14   ` Mark Allman
  2 siblings, 1 reply; 43+ messages in thread
From: Michael Richardson @ 2013-01-09 20:05 UTC (permalink / raw)
  To: bloat

[-- Attachment #1: Type: text/plain, Size: 2023 bytes --]


(not having read the thread yet on purpose)

Reading the paper, my initial thought was: big queues can only happen
where there is a bottleneck, and on the 1Gb/s CCZ links, that's unlikely
to be the case.  Then I understood that he is measuring there because
(he can and) really he is measuring the delay to other peers, from a
place that can easily fill the various network queues.

I don't understand the analysis of RTT increases/decreases.

It seems to me that for a given host pair there is some theoretical RTT
minimum, which represents a completely empty network, and perhaps one
can observe something close to it in the samples; maybe a periodic ICMP
ping would have been in order, particularly when the host pair was not
observed to have any traffic flowing.

The question of queuing delay can then be answered by how much higher
the RTT is over that minimum.  (And only then does one begin to ask
questions about GeoIP and the speed of photons and of modulated
electron wavefronts.)
Maybe that's the point of the RTT increase/decrease discussion in
section 2.2.
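
A sketch of that min-baseline idea in Python, over a hypothetical dict
of RTT samples per host pair (an illustration only, not the paper's
actual analysis):

    def queueing_delay(samples_by_pair):
        # Treat the smallest observed RTT per pair as the "empty
        # network" baseline; report each sample's excess over it.
        excess = {}
        for pair, rtts in samples_by_pair.items():
            base = min(rtts)              # approx. propagation-only RTT
            excess[pair] = [round(r - base, 3) for r in rtts]
        return excess

    # toy example: a 40 ms floor with one badly bloated sample
    print(queueing_delay({("SW", "peer1"): [0.040, 0.041, 0.290, 0.044]}))
    # -> {('SW', 'peer1'): [0.0, 0.001, 0.25, 0.004]}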

This paper really seems to be about increasing IW (TCP's initial
window).  The conclusion that only 7-20% of connections would even
benefit from an increase in IW, and that long-lived connections would
open their window anyway, for me, removes the question of bufferbloat
from the IW debate.

The conclusion that bloat is <100ms for 50% of samples, and <250ms
for 94% of samples, is useful: as the network architect for a
commercial, enterprise-focused VoIP provider, I find those numbers
terrifying.  I think the situation is worse, but even if it's as good
as reported, we cannot afford an additional 250ms of delay in the
circuits :-)

now, to read the thread.

-- 
]               Never tell me the odds!                 | ipv6 mesh networks [ 
]   Michael Richardson, Sandelman Software Works        | network architect  [ 
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [ 
	










[-- Attachment #2: Type: application/pgp-signature, Size: 307 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] bufferbloat paper
  2013-01-09  4:53     ` David Lang
  2013-01-09  5:13       ` Jonathan Morton
@ 2013-01-09  5:32       ` Mark Allman
  1 sibling, 0 replies; 43+ messages in thread
From: Mark Allman @ 2013-01-09  5:32 UTC (permalink / raw)
  To: David Lang; +Cc: Hal Murray, bloat

[-- Attachment #1: Type: text/plain, Size: 1650 bytes --]


> >> but if the connection from the laptop to the AP is 54M and the
> >> connection from the AP to the Internet is 1G, you are not going to
> >> have a lot of buffering taking place. You will have no buffering on
> >> the uplink side, and while you will have some buffering on the
> >> downlink side, 54M is your slowest connection and it takes a
> >> significantly large amount of data in flight to fill that for seconds.
> >
> > 54Mbps *might* be your slowest link.  It also could be somewhere before
> > incoming traffic gets anywhere close to any of the CCZ gear.  E.g., if
> > the traffic is from my DSL line the bottleneck will be < 1Mbps and on my
> > end of the connection.
> 
> Wait a min here, from everything prior to this it was sounding like
> you were in a fiber-to-the-home experimental area that had 1G all
> the way to the houses, no DSL involved.

You noted that traffic in the downlink direction (i.e., traffic
originating at some arbitrary place in the network that is *outside*
the FTTH network) would be bottlenecked not by the 1Gbps fiber that
runs to the house, but rather by the final 54Mbps wireless hop.
are only half right.  We know the bottleneck will not be the 1Gbps
fiber.  It *might* be the 54Mbps wireless.  Or, it *might* be some other
link at some other point in the Internet before the traffic reaches the
1Gbps fiber that connects the house.

My example is if I originated some traffic at my house (outside the FTTH
network) that was destined for some host on the FTTH network.  I can
pump traffic from my house at < 1Mbps.  So, that last hop of 54Mbps
cannot be the bottleneck.

allman




[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] bufferbloat paper
  2013-01-09  4:53     ` David Lang
@ 2013-01-09  5:13       ` Jonathan Morton
  2013-01-09  5:32       ` Mark Allman
  1 sibling, 0 replies; 43+ messages in thread
From: Jonathan Morton @ 2013-01-09  5:13 UTC (permalink / raw)
  To: David Lang; +Cc: Hal Murray, bloat

[-- Attachment #1: Type: text/plain, Size: 1988 bytes --]

I think the point being made here was that the FTTH homes were talking to
DSL hosts via P2P a lot.

- Jonathan Morton
 On Jan 9, 2013 6:54 AM, "David Lang" <david@lang.hm> wrote:

> On Tue, 8 Jan 2013, Mark Allman wrote:
>
>>>> Did any of their 90 homes contain laptops connected over WiFi?
>>>
>>> Almost certainly,
>>
>> Yeah - they nearly for sure did.  (See the note I sent to bloat@ this
>> morning.)
>>
>>> but if the connection from the laptop to the AP is 54M and the
>>> connection from the AP to the Internet is 1G, you are not going to
>>> have a lot of buffering taking place. You will have no buffering on
>>> the uplink side, and while you will have some buffering on the
>>> downlink side, 54M is your slowest connection and it takes a
>>> significantly large amount of data in flight to fill that for seconds.
>>
>> 54Mbps *might* be your slowest link.  It also could be somewhere before
>> incoming traffic gets anywhere close to any of the CCZ gear.  E.g., if
>> the traffic is from my DSL line the bottleneck will be < 1Mbps and on my
>> end of the connection.
>>
>
> Wait a min here, from everything prior to this it was sounding like you
> were in a fiber-to-the-home experimental area that had 1G all the way to
> the houses, no DSL involved.
>
> Are we all misunderstanding this?
>
> David Lang
>
>> But, regardless, none of this matters for the results presented in the
>> paper because our measurements factor out the local residences.  Again,
>> see the paper and the note I sent this morning.  The measurements are
>> taken between our monitor (which is outside the local homes) and the
>> remote host somewhere out across the Internet.  We are measuring
>> wide-area and remote-side networks, not the local FTTH network.
>>
>> allman
>>
>>
>>
>>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>

[-- Attachment #2: Type: text/html, Size: 3013 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-09  3:39 ` [Bloat] Bufferbloat Paper Mark Allman
@ 2013-01-09  5:02   ` David Lang
  2013-01-18  1:23     ` grenville armitage
  0 siblings, 1 reply; 43+ messages in thread
From: David Lang @ 2013-01-09  5:02 UTC (permalink / raw)
  To: bloat

 On Tue, 08 Jan 2013 22:39:20 -0500, Mark Allman wrote:
>>> Note the paper does not work in units of *connections* in section 2,
>>> but rather in terms of *RTT samples*.  So, nearly 5% of the RTT
>>> samples add >= 400msec to the base delay measured for the given
>>> remote (in the "residential" case).
>>
>> Hmm, yes, I was wondering about this and was unable to fully grok it:
>> what, exactly, is an RTT sample? :)
>
> One RTT measurement between the CCZ monitoring point and the remote
> end host.
>
>> Incidentally, are the data extraction scripts available somewhere?
>> Might be worthwhile to distribute them as some kind of tool that
>> people with interesting vantage points could apply to get useful data?
>
> Well, they are not readily available.  I used a non-public extension
> to Bro (that is not mine) to get the RTT samples.  So, that is a
> sticking point.  And, then there is a ball of goop to analyze those.
> If folks have a place they can monitor and are interested in doing
> so, please contact me.  I can probably get this in shape enough to
> give you.  But, I doubt I'll be able to somehow package this for
> general consumption any time soon.

 I really like the idea of trying to measure latency by sniffing the 
 network and watching the time for responses.

 If this can work then I think a lot of people would be willing to put a 
 sniffer inline in their datacenter to measure this.

 How specialized is what you are running?  Can it be made into a
 single-use tool that just measures and reports latency?

 David Lang

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] bufferbloat paper
  2013-01-09  1:59   ` Mark Allman
@ 2013-01-09  4:53     ` David Lang
  2013-01-09  5:13       ` Jonathan Morton
  2013-01-09  5:32       ` Mark Allman
  0 siblings, 2 replies; 43+ messages in thread
From: David Lang @ 2013-01-09  4:53 UTC (permalink / raw)
  To: Mark Allman; +Cc: Hal Murray, bloat

On Tue, 8 Jan 2013, Mark Allman wrote:

>>> Did any of their 90 homes contain laptops connected over WiFi?
>>
>> Almost certainly,
>
> Yeah - they nearly for sure did.  (See the note I sent to bloat@ this
> morning.)
>
>> but if the connection from the laptop to the AP is 54M and the
>> connection from the AP to the Internet is 1G, you are not going to
>> have a lot of buffering taking place. You will have no buffering on
>> the uplink side, and while you will have some buffering on the
>> downlink side, 54M is your slowest connection and it takes a
>> significantly large amount of data in flight to fill that for seconds.
>
> 54Mbps *might* be your slowest link.  It also could be somewhere before
> incoming traffic gets anywhere close to any of the CCZ gear.  E.g., if
> the traffic is from my DSL line the bottleneck will be < 1Mbps and on my
> end of the connection.

Wait a min here, from everything prior to this it was sounding like you were in 
a fiber-to-the-home experimental area that had 1G all the way to the houses, no 
DSL involved.

Are we all misunderstanding this?

David Lang

> But, regardless, none of this matters for the results presented in the
> paper because our measurements factor out the local residences.  Again,
> see the paper and the note I sent this morning.  The measurements are
> taken between our monitor (which is outside the local homes) and the
> remote host somewhere out across the Internet.  We are measuring
> wide-area and remote-side networks, not the local FTTH network.
>
> allman
>
>
>
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
       [not found] <87r4lvgss4.fsf@toke.dk>
@ 2013-01-09  3:39 ` Mark Allman
  2013-01-09  5:02   ` David Lang
  0 siblings, 1 reply; 43+ messages in thread
From: Mark Allman @ 2013-01-09  3:39 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1102 bytes --]


> > Note the paper does not work in units of *connections* in section 2,
> > but rather in terms of *RTT samples*.  So, nearly 5% of the RTT
> > samples add >= 400msec to the base delay measured for the given
> > remote (in the "residential" case).
> 
> Hmm, yes, I was wondering about this and was unable to fully grok it:
> what, exactly, is an RTT sample? :)

One RTT measurement between the CCZ monitoring point and the remote end
host. 

> Incidentally, are the data extraction scripts available somewhere?
> Might be worthwhile to distribute them as some kind of tool that
> people with interesting vantage points could apply to get useful data?

Well, they are not readily available.  I used a non-public extension to
Bro (that is not mine) to get the RTT samples.  So, that is a sticking
point.  And, then there is a ball of goop to analyze those.  If folks
have a place they can monitor and are interested in doing so, please
contact me.  I can probably get this in shape enough to give you.  But,
I doubt I'll be able to somehow package this for general consumption any
time soon.

allman




[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] bufferbloat paper
  2013-01-09  0:12 ` David Lang
@ 2013-01-09  1:59   ` Mark Allman
  2013-01-09  4:53     ` David Lang
  0 siblings, 1 reply; 43+ messages in thread
From: Mark Allman @ 2013-01-09  1:59 UTC (permalink / raw)
  To: David Lang; +Cc: Hal Murray, bloat

[-- Attachment #1: Type: text/plain, Size: 1244 bytes --]


> > Did any of their 90 homes contain laptops connected over WiFi?
> 
> Almost certainly,

Yeah - they nearly for sure did.  (See the note I sent to bloat@ this
morning.) 

> but if the connection from the laptop to the AP is 54M and the
> connection from the AP to the Internet is 1G, you are not going to
> have a lot of buffering taking place. You will have no buffering on
> the uplink side, and while you will have some buffering on the
> downlink side, 54M is your slowest connection and it takes a
> significantly large amount of data in flight to fill that for seconds.

54Mbps *might* be your slowest link.  It also could be somewhere before
incoming traffic gets anywhere close to any of the CCZ gear.  E.g., if
the traffic is from my DSL line the bottleneck will be < 1Mbps and on my
end of the connection.

But, regardless, none of this matters for the results presented in the
paper because our measurements factor out the local residences.  Again,
see the paper and the note I sent this morning.  The measurements are
taken between our monitor (which is outside the local homes) and the
remote host somewhere out across the Internet.  We are measuring
wide-area and remote-side networks, not the local FTTH network.

allman




[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] bufferbloat paper
  2013-01-08 19:03 [Bloat] bufferbloat paper Hal Murray
  2013-01-08 20:28 ` Jonathan Morton
@ 2013-01-09  0:12 ` David Lang
  2013-01-09  1:59   ` Mark Allman
  1 sibling, 1 reply; 43+ messages in thread
From: David Lang @ 2013-01-09  0:12 UTC (permalink / raw)
  To: Hal Murray; +Cc: bloat

On Tue, 8 Jan 2013, Hal Murray wrote:

>> Aside from their dataset having absolutely no reflection on the reality of
>> the 99.999% of home users running at speeds two or three or *more* orders of
>> magnitude below that speed, it seems like a nice paper.
>
> Did any of their 90 homes contain laptops connected over WiFi?

Almost certainly, but if the connection from the laptop to the AP is 54M and the 
connection from the AP to the Internet is 1G, you are not going to have a lot of 
buffering taking place. You will have no buffering on the uplink side, and while 
you will have some buffering on the downlink side, 54M is your slowest 
connection and it takes a significantly large amount of data in flight to fill 
that for seconds.

If your 54M wireless link is connected to a 768K DSL uplink (a much more typical 
connection), then it's very easy for the uplink side to generate many
seconds' worth of queueing delays, both from the high disparity in
speeds and from the fact that the uplink is so slow.
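
As a rough illustration of that arithmetic (the buffer size below is an
assumed figure, purely for scale): draining a queue of B bytes over a
link of C bits per second takes 8*B/C seconds.

    # Time to drain a full buffer: delay = 8 * bytes / bits_per_second.
    for link_bps, buf_bytes in [(768e3, 256 * 1024),   # 768K DSL uplink
                                (1e9,   256 * 1024)]:  # 1 Gbps link
        print(f"{link_bps/1e6:g} Mbps: {8 * buf_bytes / link_bps:.3f} s")
    # -> 0.768 Mbps: 2.731 s   (multiple seconds of queueing delay)
    # -> 1000 Mbps: 0.002 s    (negligible at gigabit speeds)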

David Lang

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08 13:55     ` Mark Allman
@ 2013-01-09  0:03       ` David Lang
  2013-01-10 13:01         ` Mark Allman
  2013-01-09 20:14       ` Michael Richardson
  1 sibling, 1 reply; 43+ messages in thread
From: David Lang @ 2013-01-09  0:03 UTC (permalink / raw)
  To: bloat

 On Tue, 08 Jan 2013 08:55:10 -0500, Mark Allman wrote:
> Let me make a few general comments here ...
>
> (0) The goal is to bring *some* *data* to the conversation.  To
>     understand the size and scope of the bufferbloat problem it seems
>     to me we need data.

 no disagreement here.

> (1) My goal is to make some observations of the queuing (/delay
>     variation) in the non-FTTH portion of the network path.  As folks
>     have pointed out, it's unlikely bufferbloat is much of a problem
>     in the 1Gbps portion of the network I monitor.
>
> (2) The network I am monitoring looks like this ...
>
>        LEH -> IHR -> SW -> Internet -> REH
>
>     where, "LEH" is the local end host and "IHR" is the in-home
>     router provided by the FTTH project.  The connection between the
>     LEH and the IHR can either be wired (at up to 1Gbps) or wireless
>     (at much less than 1Gbps, but I forget the actual wireless
>     technology used on the IHR).  The IHRs are all run into a switch
>     (SW) at 1Gbps.  The switch connects to the Internet via a 1Gbps
>     link (so, this is a theoretical bottleneck right here ...).  The
>     "REH" is the remote end host.  We monitor via mirroring on SW.
>
>     The delay we measure is from SW to REH and back.  So, the fact
>     that this is a 1Gbps environment for local users is really not
>     material.  The REHs are whatever the local users decide to talk
>     to.  I have no idea what the edge bandwidth on the remote side
>     is, but I presume it is generally not a Gbps (especially for the
>     residential set).
>
>     So, if you wrote off the paper after the sentence that noted the
>     data was collected within an FTTH project, I'd invite you to read
>     further.

 The issue is that if the home user has a 1G uplink to you, and then you
 have a 1G uplink to the Internet, there is not going to be very much if
 any congestion in place. The only place where you are going to have any
 buffering is in your 1G uplink to the Internet (and only if there is
 enough traffic to cause congestion here).

 In the 'normal' residential situation, the LEH -> THR connection is
 probably 1G if wired, but the THR -> SW connection is likely to be <1M.
 Therefore the THR ends up buffering the outbound traffic.
 
> (3) This data is not ideal.  Ideally I'd like to directly measure
>     queues in a bazillion places.  That'd be fabulous.  But, I am
>     working with what I have.  I have traces that offer windows into
>     the actual queue occupancy when the local users I monitor engage
>     particular remote endpoints.  Is this representative of the
>     delays I'd find when the local users are not engaging the remote
>     end system?  I have no idea.  I'd certainly like to know.  But,
>     the data doesn't tell me.  I am reporting what I have.  It is
>     something.  And, it is more than I have seen reported anywhere
>     else.  Folks should go collect more data.
>
>     (And, note, this is not a knock on the folks---some of them my
>     colleagues---who have quite soundly assessed potential queue
>     sizes by trying to jam as much into the queue as possible and
>     measuring the worst case delays.  That is well and good.  It
>     establishes a bound and that there is the potential for problems.
>     But, it does not speak to what queue occupancy actually looks
>     like.  This latter is what I am after.)

 The biggest problem I had with the paper was that it seemed to be 
 taking the tone "we measured and didn't find anything in this network, 
 so bufferbloat is not a real problem."

 It may not be a problem in your network, but your network is very 
 unusual due to the high speed links to the end-users.

 Even there, the 400ms delays that you found could be indications of
 the problem (how bad their impact is remains hard to say).  If 5% of
 the packets have 400ms latency, that would seem to me to be rather
 significant.  It's not the collapse that other people have been
 reporting, but given your high bandwidth, I wouldn't expect to see
 that sort of collapse take place.

 David Lang

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] bufferbloat paper
  2013-01-08 19:03 [Bloat] bufferbloat paper Hal Murray
@ 2013-01-08 20:28 ` Jonathan Morton
  2013-01-09  0:12 ` David Lang
  1 sibling, 0 replies; 43+ messages in thread
From: Jonathan Morton @ 2013-01-08 20:28 UTC (permalink / raw)
  To: Hal Murray; +Cc: bloat


On 8 Jan, 2013, at 9:03 pm, Hal Murray wrote:

> Any ideas on what happened at 120 seconds?  Is that a pattern I should 
> recognize?

That looks to me like the link changed to a slower speed for a few seconds.  That can happen pretty much at random in a wireless environment, possibly in response to a statistical fluke on the BER, which in turn might be triggered by a lightning strike a thousand miles away.

 - Jonathan Morton


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] bufferbloat paper
@ 2013-01-08 19:03 Hal Murray
  2013-01-08 20:28 ` Jonathan Morton
  2013-01-09  0:12 ` David Lang
  0 siblings, 2 replies; 43+ messages in thread
From: Hal Murray @ 2013-01-08 19:03 UTC (permalink / raw)
  To: bloat; +Cc: Hal Murray


> Aside from their dataset having absolutely no reflection on the reality of
> the 99.999% of home users running at speeds two or three or *more* orders of
> magnitude below that speed, it seems like a nice paper. 

Did any of their 90 homes contain laptops connected over WiFi? 


> Here is a plot (also at http://web.mit.edu/keithw/www/verizondown.png) from
> a computer tethered to a Samsung Galaxy Nexus running Android 4.0.4 on
> Verizon LTE service, taken just now in Cambridge, Mass. 

Neat.  Thanks.

Any ideas on what happened at 120 seconds?  Is that a pattern I should 
recognize?

Is there an event that triggers it?  Is it something as simple as a single 
lost packet?



-- 
These are my opinions.  I hate spam.




^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08  1:54 ` Stephen Hemminger
  2013-01-08  2:15   ` Oliver Hohlfeld
  2013-01-08 12:44   ` Toke Høiland-Jørgensen
@ 2013-01-08 17:22   ` Dave Taht
  2 siblings, 0 replies; 43+ messages in thread
From: Dave Taht @ 2013-01-08 17:22 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: bloat

Hey, guys, chill.

I'm sorry if my first comment at the paper's dataset sounded overly
sarcastic. I was equally sincere in calling it a "good paper", as the
analysis of the dataset seemed largely sound at first glance... but I
have to think about it for a while longer, and hopefully
suggest additional/further lines of research.

I'm glad the Q/A session is taking place here, but I'm terribly behind
on my email in general....

On Mon, Jan 7, 2013 at 5:54 PM, Stephen Hemminger <shemminger@vyatta.com> wrote:
> The tone of the paper is a bit of "if academics don't analyze it to death
> it must not exist". The facts are interesting, but the interpretation ignores
> the human element. If humans perceive delay "Daddy the Internet is slow", then
> they will change their behavior to avoid the problem: "it hurts when I download,
> so I will do it later".
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08 12:44   ` Toke Høiland-Jørgensen
  2013-01-08 13:55     ` Mark Allman
@ 2013-01-08 14:04     ` Mark Allman
  1 sibling, 0 replies; 43+ messages in thread
From: Mark Allman @ 2013-01-08 14:04 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1086 bytes --]


> graphs, ~5% of connections to "residential" hosts exhibit added delays
> of >=400 milliseconds, a delay that is certainly noticeable and would
> make interactive applications (gaming, VoIP, etc.) pretty much unusable.

Note the paper does not work in units of *connections* in section 2,
but rather in terms of *RTT samples*.  So, nearly 5% of the RTT samples
add >= 400msec to the base delay measured for the given remote (in the
"residential" case).

(I am not disagreeing that 400msec of added delay would be noticeable.
I am simply stating what the data actually shows.)

> Now, I may be jumping to conclusions here, but I couldn't find anything
> about how their samples were distributed. 

(I don't follow this comment ... distributed in what fashion?)

> It would be interesting if a large-scale test like this could flush
> out how big a percentage of hosts do occasionally experience
> bufferbloat, and how many never do.

I agree and this could be done with our data.  (In general, we could go
much deeper into the data on hand ... the paper is an initial foray.)

allman




[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08 12:44   ` Toke Høiland-Jørgensen
@ 2013-01-08 13:55     ` Mark Allman
  2013-01-09  0:03       ` David Lang
  2013-01-09 20:14       ` Michael Richardson
  2013-01-08 14:04     ` Mark Allman
  1 sibling, 2 replies; 43+ messages in thread
From: Mark Allman @ 2013-01-08 13:55 UTC (permalink / raw)
  To: bloat

[-- Attachment #1: Type: text/plain, Size: 2703 bytes --]


Let me make a few general comments here ...

(0) The goal is to bring *some* *data* to the conversation.  To
    understand the size and scope of the bufferbloat problem it seems to me
    we need data.

(1) My goal is to make some observations of the queuing (/delay
    variation) in the non-FTTH portion of the network path.  As folks
    have pointed out, it's unlikely bufferbloat is much of a problem in
    the 1Gbps portion of the network I monitor.

(2) The network I am monitoring looks like this ...

       LEH -> IHR -> SW -> Internet -> REH

    where, "LEH" is the local end host and "IHR" is the in-home router
    provided by the FTTH project.  The connection between the LEH and
    the IHR can either be wired (at up to 1Gbps) or wireless (at much
    less than 1Gbps, but I forget the actual wireless technology used on
    the IHR).  The IHRs are all run into a switch (SW) at 1Gbps.  The
    switch connects to the Internet via a 1Gbps link (so, this is a
    theoretical bottleneck right here ...).  The "REH" is the remote end
    host.  We monitor via mirroring on SW.

    The delay we measure is from SW to REH and back.  So, the fact that
    this is a 1Gbps environment for local users is really not material.
    The REHs are whatever the local users decide to talk to.  I have no
    idea what the edge bandwidth on the remote side is, but I presume it
    is generally not a Gbps (especially for the residential set).

    So, if you wrote off the paper after the sentence that noted the
    data was collected within an FTTH project, I'd invite you to read
    further.

(3) This data is not ideal.  Ideally I'd like to directly measure queues
    in a bazillion places.  That'd be fabulous.  But, I am working with
    what I have.  I have traces that offer windows into the actual queue
    occupancy when the local users I monitor engage particular remote
    endpoints.  Is this representative of the delays I'd find when the
    local users are not engaging the remote end system?  I have no
    idea.  I'd certainly like to know.  But, the data doesn't tell me.
    I am reporting what I have.  It is something.  And, it is more than
    I have seen reported anywhere else.  Folks should go collect more
    data.

    (And, note, this is not a knock on the folks---some of them my
    colleagues---who have quite soundly assessed potential queue sizes
    by trying to jam as much into the queue as possible and measuring
    the worst case delays.  That is well and good.  It establishes a
    bound and that there is the potential for problems.  But, it does
    not speak to what queue occupancy actually looks like.  This latter
    is what I am after.)

allman




[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08  1:54 ` Stephen Hemminger
  2013-01-08  2:15   ` Oliver Hohlfeld
@ 2013-01-08 12:44   ` Toke Høiland-Jørgensen
  2013-01-08 13:55     ` Mark Allman
  2013-01-08 14:04     ` Mark Allman
  2013-01-08 17:22   ` Dave Taht
  2 siblings, 2 replies; 43+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-01-08 12:44 UTC (permalink / raw)
  To: bloat

[-- Attachment #1: Type: text/plain, Size: 1536 bytes --]

Stephen Hemminger <shemminger@vyatta.com>
writes:

> The tone of the paper is a bit of "if academics don't analyze it to
> death it must not exist". The facts are interesting, but the
> interpretation ignores the human element. If humans perceive delay
> "Daddy the Internet is slow", then they will change their behavior to
> avoid the problem: "it hurts when I download, so I will do it later".

Well, severe latency spikes caused by bufferbloat are relatively
transient in nature. If connections were constantly severely bloated the
internet would be unusable and the problem would probably (hopefully?)
have been spotted and fixed long ago. As far as I can tell from their
graphs, ~5% of connections to "residential" hosts exhibit added delays
of >=400 milliseconds, a delay that is certainly noticeable and would
make interactive applications (gaming, VoIP, etc.) pretty much unusable.

Now, I may be jumping to conclusions here, but I couldn't find anything
about how their samples were distributed. However, assuming the worst,
if these are 5% of all connections to all peers, each peer will have a
latency spike of at least 400 milliseconds for one second every 20
seconds (on average). That is certainly enough to make a phone call
choppy, or get you killed in a fast-paced FPS.
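
The arithmetic behind that estimate, assuming for illustration one RTT
sample per peer per second:

    spike_fraction = 0.05    # ~5% of samples show >= 400 ms added delay
    samples_per_sec = 1.0    # assumed sampling rate per peer
    print(1 / (spike_fraction * samples_per_sec))   # -> 20.0 s between spikes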

It would be interesting if a large-scale test like this could flush out
how big a percentage of hosts do occasionally experience bufferbloat,
and how many never do.

-Toke

-- 
Toke Høiland-Jørgensen
toke@toke.dk

[-- Attachment #2: Type: application/pgp-signature, Size: 489 bytes --]

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08  2:04   ` Mark Watson
  2013-01-08  2:24     ` David Lang
@ 2013-01-08  4:52     ` Mark Watson
  1 sibling, 0 replies; 43+ messages in thread
From: Mark Watson @ 2013-01-08  4:52 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat


On Jan 7, 2013, at 4:33 PM, Dave Taht wrote:

> "We use a packet trace collection taken from the Case Con-
> nection Zone (CCZ) [1] experimental fiber-to-the-home net-
> work which connects roughly 90 homes adjacent to Case
> Western Reserve University’s campus with **bi-directional 1 Gbps
> links**. "
> 
> Aside from their dataset having absolutely no reflection on the
> reality of the 99.999% of home users running at speeds two or three or
> *more* orders of magnitude below that speed, it seems like a nice
> paper.

Actually they analyze the delay between the measurement point in CCZ and the *remote* peer, splitting out residential and non-residential peers. 57% of the peers are residential. Sounds like a lot of the traffic is p2p. You could argue that the remote, residential p2p peers are not on "typical" connections and that this traffic doesn't follow the time-of-day usage patterns expected for applications with a live human in front of them.

...Mark

> 
> 
> On Mon, Jan 7, 2013 at 3:37 PM, Hagen Paul Pfeifer <hagen@jauu.net> wrote:
>> 
>> FYI: "Comments on Bufferbloat" paper from Mark Allman
>> 
>> 
>> http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>> 
>> 
>> Cheers, Hagen
>> 
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
> 
> 
> 
> -- 
> Dave Täht
> 
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08  2:04   ` Mark Watson
@ 2013-01-08  2:24     ` David Lang
  2013-01-09 20:08       ` Michael Richardson
  2013-01-08  4:52     ` Mark Watson
  1 sibling, 1 reply; 43+ messages in thread
From: David Lang @ 2013-01-08  2:24 UTC (permalink / raw)
  To: Mark Watson; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 1410 bytes --]

On Tue, 8 Jan 2013, Mark Watson wrote:

> On Jan 7, 2013, at 4:33 PM, Dave Taht wrote:
>
>> "We use a packet trace collection taken from the Case Con-
>> nection Zone (CCZ) [1] experimental fiber-to-the-home net-
>> work which connects roughly 90 homes adjacent to Case
>> Western Reserve University’s campus with **bi-directional 1 Gbps
>> links**. "
>>
>> Aside from their dataset having absolutely no reflection on the
>> reality of the 99.999% of home users running at speeds two or three or
>> *more* orders of magnitude below that speed, it seems like a nice
>> paper.
>
> Actually they analyze the delay between the measurement point in CCZ and the 
> *remote* peer, splitting out residential and non-residential peers. 57% of the 
> peers are residential. Sounds like a lot of the traffic is p2p. You could 
> argue that the remote, residential p2p peers are not on "typical" connections 
> and that this traffic doesn't follow the time-of-day usage patterns expected 
> for applications with a live human in front of them.

But if the "remote peer" is on a 1Gbps link, that hardly reflects normal 
conditions.

typical conditions are

          1G            1M
desktop -----firewall ---- Internet

It's this transition from 1G to 1M that causes data to be buffered. If you have 
1G on both sides of the home firewall, then it's unlikely that very much data is 
going to be buffered there.
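
A sketch of why the speed mismatch matters: the queue at the firewall
grows at the difference between the arrival and departure rates. The
burst length here is an assumed figure, purely for illustration.

    # Queue growth at a 1G -> 1M transition.
    in_bps, out_bps = 1e9, 1e6
    burst_s = 0.01                              # 10 ms burst at line rate
    backlog = (in_bps - out_bps) * burst_s / 8  # bytes queued
    print(f"{backlog/1e6:.1f} MB queued")            # -> 1.2 MB queued
    print(f"{8*backlog/out_bps:.1f} s to drain")     # -> 10.0 s to drain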

David Lang

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08  1:54 ` Stephen Hemminger
@ 2013-01-08  2:15   ` Oliver Hohlfeld
  2013-01-08 12:44   ` Toke Høiland-Jørgensen
  2013-01-08 17:22   ` Dave Taht
  2 siblings, 0 replies; 43+ messages in thread
From: Oliver Hohlfeld @ 2013-01-08  2:15 UTC (permalink / raw)
  To: bloat

On Mon, Jan 07, 2013 at 05:54:17PM -0800, Stephen Hemminger wrote:
> The tone of the paper is a bit of "if academics don't analyze it to death
> it must not exist".

This does not reflect statements made in the paper; the paper
does acknowledge the /existence/ of the problem.

What the paper discusses is the frequency / extent of the problem.
Using data representing residential users in multiple countries,
I can basically confirm the paper's statement that high RTTs are not
widely observed. The causes of high RTTs are manifold and include
more than just bufferbloat. My data also suggests that it is
a problem that does not occur frequently. One reason is that
users do not often utilize their uplink.

> The facts are interesting, but the interpretation ignores
> the human element.

Indeed.

> If humans perceive delay "Daddy the Internet is slow", then
> they will change their behavior to avoid the problem: "it hurts when I download,
> so I will do it later".

Speculative, but one interpretation. Chances that downloads hurt
are small.

Oliver

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08  0:33 ` Dave Taht
  2013-01-08  0:40   ` David Lang
@ 2013-01-08  2:04   ` Mark Watson
  2013-01-08  2:24     ` David Lang
  2013-01-08  4:52     ` Mark Watson
  1 sibling, 2 replies; 43+ messages in thread
From: Mark Watson @ 2013-01-08  2:04 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat


On Jan 7, 2013, at 4:33 PM, Dave Taht wrote:

> "We use a packet trace collection taken from the Case Con-
> nection Zone (CCZ) [1] experimental fiber-to-the-home net-
> work which connects roughly 90 homes adjacent to Case
> Western Reserve University’s campus with **bi-directional 1 Gbps
> links**. "
> 
> Aside from their dataset having absolutely no reflection on the
> reality of the 99.999% of home users running at speeds two or three or
> *more* orders of magnitude below that speed, it seems like a nice
> paper.

Actually they analyze the delay between the measurement point in CCZ and the *remote* peer, splitting out residential and non-residential peers. 57% of the peers are residential. Sounds like a lot of the traffic is p2p. You could argue that the remote, residential p2p peers are not on "typical" connections and that this traffic doesn't follow the time-of-day usage patterns expected for applications with a live human in front of them.

...Mark

> 
> 
> On Mon, Jan 7, 2013 at 3:37 PM, Hagen Paul Pfeifer <hagen@jauu.net> wrote:
>> 
>> FYI: "Comments on Bufferbloat" paper from Mark Allman
>> 
>> 
>> http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>> 
>> 
>> Cheers, Hagen
>> 
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
> 
> 
> 
> -- 
> Dave Täht
> 
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-07 23:37 [Bloat] Bufferbloat Paper Hagen Paul Pfeifer
  2013-01-08  0:33 ` Dave Taht
@ 2013-01-08  1:54 ` Stephen Hemminger
  2013-01-08  2:15   ` Oliver Hohlfeld
                     ` (2 more replies)
  2013-01-09 20:05 ` Michael Richardson
  2 siblings, 3 replies; 43+ messages in thread
From: Stephen Hemminger @ 2013-01-08  1:54 UTC (permalink / raw)
  To: Hagen Paul Pfeifer; +Cc: bloat

The tone of the paper is a bit of "if academics don't analyze it to death
it must not exist". The facts are interesting, but the interpretation ignores
the human element. If humans perceive delay "Daddy the Internet is slow", then
they will change their behavior to avoid the problem: "it hurts when I download,
so I will do it later".


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-08  0:33 ` Dave Taht
@ 2013-01-08  0:40   ` David Lang
  2013-01-08  2:04   ` Mark Watson
  1 sibling, 0 replies; 43+ messages in thread
From: David Lang @ 2013-01-08  0:40 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

[-- Attachment #1: Type: TEXT/PLAIN, Size: 1277 bytes --]

When your connections are that fast, there's very little buffering going on, 
because your WAN is just as fast as your LAN.

Queuing takes place when the next hop has less bandwidth available than the 
prior hop.

However, it would be interesting to see if someone could take the tools they
used, put them in a datacenter somewhere and analyze the results.

David Lang

On Mon, 7 Jan 2013, Dave Taht wrote:

> "We use a packet trace collection taken from the Case Con-
> nection Zone (CCZ) [1] experimental fiber-to-the-home net-
> work which connects roughly 90 homes adjacent to Case
> Western Reserve University’s campus with **bi-directional 1 Gbps
> links**. "
>
> Aside from their dataset having absolutely no reflection on the
> reality of the 99.999% of home users running at speeds two or three or
> *more* orders of magnitude below that speed, it seems like a nice
> paper.
>
>
> On Mon, Jan 7, 2013 at 3:37 PM, Hagen Paul Pfeifer <hagen@jauu.net> wrote:
>>
>> FYI: "Comments on Bufferbloat" paper from Mark Allman
>>
>>
>> http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>>
>>
>> Cheers, Hagen
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Bloat] Bufferbloat Paper
  2013-01-07 23:37 [Bloat] Bufferbloat Paper Hagen Paul Pfeifer
@ 2013-01-08  0:33 ` Dave Taht
  2013-01-08  0:40   ` David Lang
  2013-01-08  2:04   ` Mark Watson
  2013-01-08  1:54 ` Stephen Hemminger
  2013-01-09 20:05 ` Michael Richardson
  2 siblings, 2 replies; 43+ messages in thread
From: Dave Taht @ 2013-01-08  0:33 UTC (permalink / raw)
  To: Hagen Paul Pfeifer; +Cc: bloat

"We use a packet trace collection taken from the Case Con-
nection Zone (CCZ) [1] experimental fiber-to-the-home net-
work which connects roughly 90 homes adjacent to Case
Western Reserve University’s campus with **bi-directional 1 Gbps
links**. "

Aside from their dataset having absolutely no reflection on the
reality of the 99.999% of home users running at speeds two or three or
*more* orders of magnitude below that speed, it seems like a nice
paper.


On Mon, Jan 7, 2013 at 3:37 PM, Hagen Paul Pfeifer <hagen@jauu.net> wrote:
>
> FYI: "Comments on Bufferbloat" paper from Mark Allman
>
>
> http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>
>
> Cheers, Hagen
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

^ permalink raw reply	[flat|nested] 43+ messages in thread

* [Bloat] Bufferbloat Paper
@ 2013-01-07 23:37 Hagen Paul Pfeifer
  2013-01-08  0:33 ` Dave Taht
                   ` (2 more replies)
  0 siblings, 3 replies; 43+ messages in thread
From: Hagen Paul Pfeifer @ 2013-01-07 23:37 UTC (permalink / raw)
  To: bloat


FYI: "Comments on Bufferbloat" paper from Mark Allman


http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf


Cheers, Hagen


^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2013-01-18 22:00 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-01-08  7:35 [Bloat] bufferbloat paper Ingemar Johansson S
2013-01-08 10:42 ` [Bloat] [e2e] " Keith Winstein
2013-01-08 12:19   ` Ingemar Johansson S
2013-01-08 12:44     ` Keith Winstein
2013-01-08 13:19       ` Ingemar Johansson S
2013-01-08 15:29         ` dpreed
2013-01-08 16:40           ` Mark Allman
2013-01-09 14:07   ` Michael Richardson
2013-01-10  7:37     ` Keith Winstein
2013-01-10 13:46       ` Michael Richardson
2013-01-08 15:04 ` dpreed
2013-01-18 22:00 ` [Bloat] " Haiqing Jiang
     [not found] <87r4lvgss4.fsf@toke.dk>
2013-01-09  3:39 ` [Bloat] Bufferbloat Paper Mark Allman
2013-01-09  5:02   ` David Lang
2013-01-18  1:23     ` grenville armitage
  -- strict thread matches above, loose matches on Subject: below --
2013-01-08 19:03 [Bloat] bufferbloat paper Hal Murray
2013-01-08 20:28 ` Jonathan Morton
2013-01-09  0:12 ` David Lang
2013-01-09  1:59   ` Mark Allman
2013-01-09  4:53     ` David Lang
2013-01-09  5:13       ` Jonathan Morton
2013-01-09  5:32       ` Mark Allman
2013-01-07 23:37 [Bloat] Bufferbloat Paper Hagen Paul Pfeifer
2013-01-08  0:33 ` Dave Taht
2013-01-08  0:40   ` David Lang
2013-01-08  2:04   ` Mark Watson
2013-01-08  2:24     ` David Lang
2013-01-09 20:08       ` Michael Richardson
2013-01-08  4:52     ` Mark Watson
2013-01-08  1:54 ` Stephen Hemminger
2013-01-08  2:15   ` Oliver Hohlfeld
2013-01-08 12:44   ` Toke Høiland-Jørgensen
2013-01-08 13:55     ` Mark Allman
2013-01-09  0:03       ` David Lang
2013-01-10 13:01         ` Mark Allman
2013-01-09 20:14       ` Michael Richardson
2013-01-09 20:19         ` Mark Allman
2013-01-09 20:31           ` Michael Richardson
2013-01-10 18:05             ` Mark Allman
2013-01-08 14:04     ` Mark Allman
2013-01-08 17:22   ` Dave Taht
2013-01-09 20:05 ` Michael Richardson
2013-01-09 20:14   ` Mark Allman
