* Re: [Bloat] [e2e] bufferbloat paper
@ 2013-01-10 13:48 dpreed
0 siblings, 0 replies; 11+ messages in thread
From: dpreed @ 2013-01-10 13:48 UTC (permalink / raw)
To: Keith Winstein; +Cc: end2end-interest, bloat
These observations are easily explained by a) excessive refusal to signal congestion on each separate link (multiple seconds of buffering without drops or ECN), plus b) a contended upstream bottleneck.
In other words, exactly the same phenomenon encountered in Comcast's "DOCSIS plant".
I appreciate the genuine willingness to look at data, despite Verizon Wireless's lack of cooperation in providing access to the actual design and its parameters.
This is true experimental work, rather than presuming the problem has to do with "interference" in the radio space and never bothering to verify that assumption. Yay!
However, I worry that those with a huge personal investment in seeing "wireless" as different, and in promoting research based on false assumptions, will continue to try to explain the observations with bizarre and complex explanations. If there is a different explanation, develop an experiment that will discriminate between that hypothesis and the one above (the "bufferbloat is important on LTE systems" hypothesis), and *do the experimental measurement*.
-----Original Message-----
From: "Keith Winstein" <keithw@mit.edu>
Sent: Thursday, January 10, 2013 2:37am
To: "Michael Richardson" <mcr@sandelman.ca>
Cc: "bloat@lists.bufferbloat.net" <bloat@lists.bufferbloat.net>, "end2end-interest@postel.org" <end2end-interest@postel.org>, "mallman@icir.org" <mallman@icir.org>
Subject: Re: [e2e] [Bloat] bufferbloat paper
Hello Michael,
On Wed, Jan 9, 2013 at 9:07 AM, Michael Richardson <mcr@sandelman.ca> wrote:
> Have you considered repeating your test with two phones?
Yes, we have tried up to four phones at the same time.
> Can the download on phone1 affect the latency seen by a second phone?
In our experience, a download on phone1 will not affect the unloaded
latency seen by phone2. The cell towers appear to use a per-UE
(per-phone) queue on uplink and downlink. (This is similar to what a
commodity cable modem user sees -- I don't get long delays just
because my neighbor is saturating his uplink or downlink and causing a
standing queue for himself.)
However, a download on phone1 can affect the average throughput seen
by phone2 when it is saturating its link, suggesting that the two
phones are contending for the same limited resource (timeslices and
OFDM resource blocks, or possibly just backhaul throughput).
> Obviously the phones should be located right next to each other, with
> some verification that they are actually associated to the same tower.
This is harder than we thought it would be -- the phones have a
tendency to wander around rapidly among cell IDs (sometimes switching
several times in a minute). We're not sure if the different cell IDs
really represent different towers (we doubt it) or maybe just
different LTE channels or logical channels. I understand in LTE it is
possible for multiple towers to cooperate to receive one packet, so
the story may be more complicated.
In practice it is possible to get four phones to "hold still" on the
same cell ID for five minutes to do a test, but it is a bit like
herding cats and requires some careful placement and luck.
Best regards,
Keith
* Re: [Bloat] [e2e] bufferbloat paper
2013-01-10 7:37 ` Keith Winstein
@ 2013-01-10 13:46 ` Michael Richardson
0 siblings, 0 replies; 11+ messages in thread
From: Michael Richardson @ 2013-01-10 13:46 UTC (permalink / raw)
To: Keith Winstein; +Cc: bloat, end2end-interest
Thanks for the reply, comments below.
>>>>> "Keith" == Keith Winstein <keithw@mit.edu> writes:
>> Have you considered repeating your test with two phones?
Keith> Yes, we have tried up to four phones at the same time.
>> Can the download on phone1 affect the latency seen by a second phone?
Keith> In our experience, a download on phone1 will not affect the unloaded
Keith> latency seen by phone2. The cell towers appear to use a per-UE
Keith> (per-phone) queue on uplink and downlink. (This is similar to what a
Keith> commodity cable modem user sees -- I don't get long delays just
Keith> because my neighbor is saturating his uplink or downlink and
Keith> causing a
Keith> standing queue for himself.)
This is good news of a sort.
It means that there is no shared xmit queue on the tower, and that the
4G/LTE/whatever-they-call-it-today business of moving voice to VoIP is
going to do okay.
The question then becomes: how to get all of one's low-latency traffic
onto that second channel!
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | network architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
* Re: [Bloat] [e2e] bufferbloat paper
2013-01-09 14:07 ` Michael Richardson
@ 2013-01-10 7:37 ` Keith Winstein
2013-01-10 13:46 ` Michael Richardson
0 siblings, 1 reply; 11+ messages in thread
From: Keith Winstein @ 2013-01-10 7:37 UTC (permalink / raw)
To: Michael Richardson; +Cc: bloat, end2end-interest
Hello Michael,
On Wed, Jan 9, 2013 at 9:07 AM, Michael Richardson <mcr@sandelman.ca> wrote:
> Have you considered repeating your test with two phones?
Yes, we have tried up to four phones at the same time.
> Can the download on phone1 affect the latency seen by a second phone?
In our experience, a download on phone1 will not affect the unloaded
latency seen by phone2. The cell towers appear to use a per-UE
(per-phone) queue on uplink and downlink. (This is similar to what a
commodity cable modem user sees -- I don't get long delays just
because my neighbor is saturating his uplink or downlink and causing a
standing queue for himself.)
However, a download on phone1 can affect the average throughput seen
by phone2 when it is saturating its link, suggesting that the two
phones are contending for the same limited resource (timeslices and
OFDM resource blocks, or possibly just backhaul throughput).
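To make the two observations concrete, here is a toy round-robin model of per-UE queues (purely illustrative Python; the real eNodeB scheduler is far more sophisticated):

from collections import deque

SLOT_S = 1e-3                                # assume 1 ms per scheduling slot
queues = {"phone1": deque(), "phone2": deque()}
queues["phone1"].extend(["bulk"] * 5000)     # phone1: saturating download
queues["phone2"].append("ping")              # phone2: one idle-latency probe

t = 0.0
while any(queues.values()):
    for q in queues.values():                # serve one packet per UE per pass
        if q:
            pkt = q.popleft()
            t += SLOT_S
            if pkt == "ping":
                print(f"phone2's ping served after {t*1e3:.0f} ms")
# phone2's ping is served after ~2 ms even though phone1 has built a
# 5-second standing queue; but while both queues are backlogged the UEs
# split the slots, so phone1's load still cuts into phone2's throughput.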
> Obviously the phones should be located right next to each other, with
> some verification that they are actually associated to the same tower.
This is harder than we thought it would be -- the phones have a
tendency to wander around rapidly among cell IDs (sometimes switching
several times in a minute). We're not sure if the different cell IDs
really represent different towers (we doubt it) or maybe just
different LTE channels or logical channels. I understand in LTE it is
possible for multiple towers to cooperate to receive one packet, so
the story may be more complicated.
In practice it is possible to get four phones to "hold still" on the
same cell ID for five minutes to do a test, but it is a bit like
herding cats and requires some careful placement and luck.
Best regards,
Keith
* Re: [Bloat] [e2e] bufferbloat paper
2013-01-08 10:42 ` [Bloat] [e2e] " Keith Winstein
2013-01-08 12:19 ` Ingemar Johansson S
@ 2013-01-09 14:07 ` Michael Richardson
2013-01-10 7:37 ` Keith Winstein
1 sibling, 1 reply; 11+ messages in thread
From: Michael Richardson @ 2013-01-09 14:07 UTC (permalink / raw)
To: Keith Winstein; +Cc: bloat, end2end-interest
Keith, thank you for this work.
I don't know enough about how LTE towers buffer things, hence my questions.
Have you considered repeating your test with two phones?
Can the download on phone1 affect the latency seen by a second phone?
Obviously the phones should be located right next to each other, with
some verification that they are actually associated to the same tower.
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | network architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
* Re: [Bloat] [e2e] bufferbloat paper
2013-01-08 15:29 ` dpreed
@ 2013-01-08 16:40 ` Mark Allman
0 siblings, 0 replies; 11+ messages in thread
From: Mark Allman @ 2013-01-08 16:40 UTC (permalink / raw)
To: dpreed; +Cc: Ingemar Johansson S, Keith Winstein, end2end-interest, bloat
David-
I completely agree with your "measure it" notion. That is one of the
points of my paper.
> First, it's important to measure the "right thing" - which in this
> case is "how much queueing *delay* builds up in the bottleneck link
> under load"
That said, as is often the case there is no single "right thing", but rather a
number of "right thingS" to measure. And, when you go off to measure
the "right thing", additional facets come to light that
one must understand.
Certainly, understanding how much delay can possibly build up in a queue
is a worthwhile thing to measure. There are good results on this in the
Netalyzr paper. Further, as Injong mentioned, there are results on this
potential queue buildup in mobile networks in his recent IMC paper.
However, the point of my contribution is that if this queue buildup
happens only in the middle of the night on the second Sunday of every
leap year for 1.5 sec, that'd be good to know, and it'd probably make the
behavior something not to specifically engineer around in a best-effort
network. So, then, let's ask about the scale and scope of this behavior
and answer it with systematic, empirical assessment instead of anecdotes
and extrapolation. Surely that is not a controversial position.
My results are modest at best. Other people have different (probably
better) vantage points. They should look at some data, too.
allman
* Re: [Bloat] [e2e] bufferbloat paper
2013-01-08 13:19 ` Ingemar Johansson S
@ 2013-01-08 15:29 ` dpreed
2013-01-08 16:40 ` Mark Allman
0 siblings, 1 reply; 11+ messages in thread
From: dpreed @ 2013-01-08 15:29 UTC (permalink / raw)
To: Ingemar Johansson S; +Cc: Keith Winstein, bloat, end2end-interest
Re: "the only thing that counts is peak throughput" - it's a pretty cynical stance to say "I'm a professional engineer, but the marketing guys don't have a clue, so I'm not going to build a usable system".
It's even worse when fellow engineers *disparage* or downplay the work of engineers who are actually trying hard to fix this across the entire Internet.
Does competition require such foolishness? Have any of the folks who work for operators and equipment suppliers followed Richard Woundy's lead (he is SVP at Comcast) and tried to *fix* the problem and get the fix deployed? Richard is an engineer, and took the time to develop a proposed fix to DOCSIS 3.0, and also to write a "best practices" document about how to deploy that fix. The one thing he could not do is get Comcast or its competitors to invest money in deploying the fix more rapidly.
First, it's important to measure the "right thing" - which in this case is "how much queueing *delay* builds up in the bottleneck link under load" and how bad the user experience is when that queueing delay stabilizes at more than about 20 msec.
That cannot be determined by measuring throughput, which is all the operators measure. (I have the sworn testimony of every provider in Canada: when asked by the CRTC "do you measure latency on your internet service?", the answer was uniformly "we measure throughput *only*, and by Little's Lemma we can determine latency".)
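To spell out why throughput alone cannot give latency, here is a minimal illustration of Little's law with made-up numbers - the measured throughput is identical in both cases, yet the delay differs by 200x, because throughput says nothing about queue occupancy:

# Little's law: mean delay W = mean queue occupancy L / throughput lambda.
# Throughput alone pins down lambda but not L, so it cannot determine W.
lam = 1000.0                     # measured throughput: 1000 packets/s

for occupancy in (10, 2000):     # packets sitting in the bottleneck buffer
    delay_ms = occupancy / lam * 1e3
    print(f"L = {occupancy:4d} pkts -> W = {delay_ms:6.0f} ms")
# L =   10 pkts -> W =     10 ms
# L = 2000 pkts -> W =   2000 ms   (same throughput, 200x the delay)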
Engineers actually have a positive duty to society, not just to profits. And actually, in this case, better service *would* lead to more profits! Not directly, but because there is competition for experience, even more than for "bitrate", despite the claims of engineers.
So talk to your CEOs. When I've done so, they say they have *never* heard of the issue. Maybe that's due to denial throughout the organization.
(By the way, what woke Comcast up was getting hauled in front of the FCC for deploying DPI-based RST injection that disrupted large classes of connections - because they had not realized what their problem was, and the marketers wanted to blame "pirates" for clogging the circuits - a claim for which they had no data other than self-serving and proprietary "studies" from vendors like Sandvine and Ellacoya.)
Actual measurements of actual network behavior revealed the bufferbloat phenomenon was the cause of disruptive events due to load in *every* case observed by me, and I've looked at a lot. It used to happen on Frame Relay links all the time, and in datacenter TCP/IP internal deployments.
So measure first. Measure the right thing (latency growth under load). Ask "why is this happening?" and don't jump to the non sequitur (pirates or "interference") without proving that the non sequitur actually explains the entire phenomenon (something Comcast failed to do, instead reasoning from anecdotal links between bittorrent and the problem).
And then when your measurements are right, and you can demonstrate a solution that *works* (rather than something that in academia would be an "interesting Ph.D. proposal"), then deploy it and monitor it.
-----Original Message-----
From: "Ingemar Johansson S" <ingemar.s.johansson@ericsson.com>
Sent: Tuesday, January 8, 2013 8:19am
To: "Keith Winstein" <keithw@mit.edu>
Cc: "mallman@icir.org" <mallman@icir.org>, "end2end-interest@postel.org" <end2end-interest@postel.org>, "bloat@lists.bufferbloat.net" <bloat@lists.bufferbloat.net>
Subject: Re: [e2e] bufferbloat paper
OK...
This likely means that AQM is not turned on in the eNodeB; I can't be 100% sure, but it seems so.
At least one company I know of offers AQM in the eNodeB. However, one problem seems to be that the only thing that counts is peak throughput; you have probably also seen these "up to X Mbps" slogans. Competition is fierce, and for this reason it could be tempting to turn off AQM, as it may reduce peak throughput slightly. I know, and most people on these mailing lists know, that peak throughput is the "megapixels" of the internet; one needs to address other aspects in the benchmarks.
/Ingemar
> -----Original Message-----
> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of Keith
> Winstein
> Sent: den 8 januari 2013 13:44
> To: Ingemar Johansson S
> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
> mallman@icir.org
> Subject: Re: [e2e] bufferbloat paper
>
> Hello Ingemar,
>
> Thanks for your feedback and your own graph.
>
> This is testing the LTE downlink, not the uplink. It was a TCP download.
>
> There was zero packet loss on the ICMP pings. I did not measure the TCP
> flow itself but I suspect packet loss was minimal if not also zero.
>
> Best,
> Keith
>
> On Tue, Jan 8, 2013 at 7:19 AM, Ingemar Johansson S
> <ingemar.s.johansson@ericsson.com> wrote:
> > Hi
> >
> > Interesting graph, thanks for sharing it.
> > It is likely that the delay is only limited by TCPs maximum congestion
> window, for instance at T=70 the thoughput is ~15Mbps and the RTT~0.8s,
> giving a congestion window of 1.5e7/8/0.8 = 2343750 bytes, recalculations at
> other time instants seems to give a similar figure.
> > Do you see any packet loss ?
> >
> > The easiest way to mitigate bufferbloat in LTE UL is AQM in the terminal as
> the packets are buffered there.
> > The eNodeB does not buffer up packets in UL* so I would in this particular
> case argue that the problem is best solved in the terminal.
> > Implementing AQM for UL in eNodeB is probably doable but AFAIK nothing
> that is standardized also I cannot tell how feasible it is.
> >
> > /Ingemar
> >
> > BTW... UL = uplink
> > * RLC-AM retransmissions can be said to cause delay in the eNodeB but
> then again the main problem is that packets are being queued up in the
> terminals sendbuffer. The MAC layer HARQ can too cause some delay but
> this is a necessity to get an optimal performance for LTE, moreover the
> added delay due to HARQ reTx is marginal in this context.
> >
> >> -----Original Message-----
> >> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of
> >> Keith Winstein
> >> Sent: den 8 januari 2013 11:42
> >> To: Ingemar Johansson S
> >> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
> >> mallman@icir.org
> >> Subject: Re: [e2e] bufferbloat paper
> >>
> >> I'm sorry to report that the problem is not (in practice) better on
> >> LTE, even though the standard may support features that could be used
> >> to mitigate the problem.
> >>
> >> Here is a plot (also at
> >> http://web.mit.edu/keithw/www/verizondown.png)
> >> from a computer tethered to a Samsung Galaxy Nexus running Android
> >> 4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.
> >>
> >> The phone was stationary during the test and had four bars (a full
> >> signal) of "4G" service. The computer ran a single full-throttle TCP
> >> CUBIC download from one well-connected but unremarkable Linux host
> >> (ssh hostname 'cat /dev/urandom') while pinging at 4 Hz across the
> >> same tethered LTE interface. There were zero lost pings during the
> >> entire test
> >> (606/606 delivered).
> >>
> >> The RTT grows to 1-2 seconds and stays stable in that region for most
> >> of the test, except for one 12-second period of >5 seconds RTT. We
> >> have also tried measuring only "one-way delay" (instead of RTT) by
> >> sending UDP datagrams out of the computer's Ethernet interface over
> >> the Internet, over LTE to the cell phone and back to the originating
> >> computer via USB tethering. This gives similar results to ICMP ping.
> >>
> >> I don't doubt that the carriers could implement reasonable AQM or
> >> even a smaller buffer at the head-end, or that the phone could
> >> implement AQM for the uplink. For that matter I'm not sure the details of
> the air interface (LTE vs.
> >> UMTS vs. 1xEV-DO) necessarily makes a difference here.
> >>
> >> But at present, at least with AT&T, Verizon, Sprint and T-Mobile in
> >> Eastern Massachusetts, the carrier is willing to queue and hold on to
> >> packets for >1 second. Even a single long-running TCP download (>15
> >> megabytes) is enough to tickle this problem.
> >>
> >> In the CCR paper, even flows >1 megabyte were almost nonexistent,
> >> which may be part of how these findings are compatible.
> >>
> >> On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S
> >> <ingemar.s.johansson@ericsson.com> wrote:
> >> > Hi
> >> >
> >> > Include Mark's original post (below) as it was scrubbed
> >> >
> >> > I don't have an data of bufferbloat for wireline access and the
> >> > fiber
> >> connection that I have at home shows little evidence of bufferbloat.
> >> >
> >> > Wireless access seems to be a different story though.
> >> > After reading the "Tackling Bufferbloat in 3G/4G Mobile Networks"
> >> > by Jiang et al. I decided to make a few measurements of my own
> >> > (hope that the attached png is not removed)
> >> >
> >> > The measurement setup was quite simple, a Laptop with Ubuntu 12.04
> >> with a 3G modem attached.
> >> > The throughput was computed from the wireshark logs and RTT was
> >> measured with ping (towards a webserver hosted by Akamai). The
> >> location is Luleå city centre, Sweden (fixed locations) and the
> >> measurement was made at lunchtime on Dec 6 2012 .
> >> >
> >> > During the measurement session I did some close to normal websurf,
> >> including watching embedded videoclips and youtube. In some cases the
> >> effects of bufferbloat was clearly noticeable.
> >> > Admit that this is just one sample, a more elaborate study with
> >> > more
> >> samples would be interesting to see.
> >> >
> >> > 3G has the interesting feature that packets are very seldom lost in
> >> downlink (data going to the terminal). I did not see a single packet
> >> loss in this test!. I wont elaborate on the reasons in this email.
> >> > I would however believe that LTE is better off in this respect as
> >> > long as
> >> AQM is implemented, mainly because LTE is a packet-switched
> architecture.
> >> >
> >> > /Ingemar
> >> >
> >> > Marks post.
> >> > ********
> >> > [I tried to post this in a couple places to ensure I hit folks who
> >> > would be interested. If you end up with multiple copies of the
> >> > email, my apologies. --allman]
> >> >
> >> > I know bufferbloat has been an interest of lots of folks recently.
> >> > So, I thought I'd flog a recent paper that presents a little data
> >> > on the topic ...
> >> >
> >> > Mark Allman. Comments on Bufferbloat, ACM SIGCOMM Computer
> >> > Communication Review, 43(1), January 2013.
> >> > http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
> >> >
> >> > Its an initial paper. I think more data would be great!
> >> >
> >> > allman
> >> >
> >> >
> >> > --
> >> > http://www.icir.org/mallman/
> >> >
> >> >
> >> >
> >> >
* Re: [Bloat] [e2e] bufferbloat paper
2013-01-08 7:35 [Bloat] " Ingemar Johansson S
2013-01-08 10:42 ` [Bloat] [e2e] " Keith Winstein
@ 2013-01-08 15:04 ` dpreed
1 sibling, 0 replies; 11+ messages in thread
From: dpreed @ 2013-01-08 15:04 UTC (permalink / raw)
To: Ingemar Johansson S; +Cc: end2end-interest, bloat
[This mail won't go to "end2end-interest" because I am blocked from posting there, but I leave the address on so that I don't narrow the "reply-to" list for those replying to me. I can receive but cannot send there.]
Looking at your graph, Ingemar, the problem is in the extreme cases, which are hardly rare. Note the scale is in *seconds* on RTT. This correlates with excess buffering creating stable, extremely long queues. I've been observing this for years on cellular networks - 3G, and now Verizon's deployment of LTE (data collection in process).
Regarding your not "experiencing it" on wired connections, I can only suggest this - perhaps you don't have any heavy-load traffic sources competing for the bottleneck link.
To demonstrate the bad effects of bufferbloat, I'd suggest using the "rrul" test developed by toke@toke.dk. It simulates the "Daddy, the Internet is broken" scenario - a really heavy upload source running while ping time is measured. I submit that the kind of times I've seen on DOCSIS cable modems is pretty consistently close to a second of latency on the uplink, even when the uplink is 2 Mb/sec or more.
The problem is that the latency due to bufferbloat is not "random" - it is "caused", and it *can* be fixed.
The first-order fix is to bound the delay through the bottleneck buffer to 20 msec or less. On a high-capacity wireless link, that's appropriate - more would only cause the endpoint TCP to open its window wider and wider.
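A toy sketch of that first-order fix in Python - timestamp each packet on arrival and drop at dequeue once its time in the buffer exceeds the bound (illustrative only; a real AQM such as CoDel is considerably more careful about when and how often to drop):

import time
from collections import deque

TARGET_S = 0.020                    # the 20 msec delay bound

queue = deque()                     # holds (enqueue_timestamp, packet) pairs

def enqueue(packet):
    queue.append((time.monotonic(), packet))

def dequeue():
    while queue:
        stamped, packet = queue.popleft()
        if time.monotonic() - stamped <= TARGET_S:
            return packet           # within the delay budget: transmit it
        # over budget: drop it, signalling congestion back to the sender
    return None

enqueue("pkt-1")
time.sleep(0.05)                    # pkt-1 sits in the buffer for ~50 ms
enqueue("pkt-2")
print(dequeue())                    # "pkt-2": pkt-1 was dropped (50 ms > 20 ms)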
-----Original Message-----
From: "Ingemar Johansson S" <ingemar.s.johansson@ericsson.com>
Sent: Tuesday, January 8, 2013 2:35am
To: "end2end-interest@postel.org" <end2end-interest@postel.org>, "bloat@lists.bufferbloat.net" <bloat@lists.bufferbloat.net>
Cc: "mallman@icir.org" <mallman@icir.org>
Subject: Re: [e2e] bufferbloat paper
Hi
Include Mark's original post (below) as it was scrubbed
I don't have any data on bufferbloat for wireline access, and the fiber connection that I have at home shows little evidence of bufferbloat.
Wireless access seems to be a different story though.
After reading the "Tackling Bufferbloat in 3G/4G Mobile Networks" by Jiang et al. I decided to make a few measurements of my own (hope that the attached png is not removed)
The measurement setup was quite simple: a laptop with Ubuntu 12.04 and a 3G modem attached.
The throughput was computed from the Wireshark logs, and RTT was measured with ping (towards a webserver hosted by Akamai). The location is Luleå city centre, Sweden (fixed location), and the measurement was made at lunchtime on Dec 6, 2012.
During the measurement session I did some close-to-normal websurfing, including watching embedded videoclips and YouTube. In some cases the effects of bufferbloat were clearly noticeable.
I admit that this is just one sample; a more elaborate study with more samples would be interesting to see.
3G has the interesting feature that packets are very seldom lost in the downlink (data going to the terminal). I did not see a single packet loss in this test! I won't elaborate on the reasons in this email.
I would, however, believe that LTE is better off in this respect, as long as AQM is implemented, mainly because LTE is a packet-switched architecture.
/Ingemar
Mark's post.
********
[I tried to post this in a couple places to ensure I hit folks who would
be interested. If you end up with multiple copies of the email, my
apologies. --allman]
I know bufferbloat has been an interest of lots of folks recently. So,
I thought I'd flog a recent paper that presents a little data on the
topic ...
Mark Allman. Comments on Bufferbloat, ACM SIGCOMM Computer
Communication Review, 43(1), January 2013.
http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
It's an initial paper. I think more data would be great!
allman
--
http://www.icir.org/mallman/
* Re: [Bloat] [e2e] bufferbloat paper
2013-01-08 12:44 ` Keith Winstein
@ 2013-01-08 13:19 ` Ingemar Johansson S
2013-01-08 15:29 ` dpreed
0 siblings, 1 reply; 11+ messages in thread
From: Ingemar Johansson S @ 2013-01-08 13:19 UTC (permalink / raw)
To: Keith Winstein; +Cc: end2end-interest, bloat
OK...
This likely means that AQM is not turned on in the eNodeB; I can't be 100% sure, but it seems so.
At least one company I know of offers AQM in the eNodeB. However, one problem seems to be that the only thing that counts is peak throughput; you have probably also seen these "up to X Mbps" slogans. Competition is fierce, and for this reason it could be tempting to turn off AQM, as it may reduce peak throughput slightly. I know, and most people on these mailing lists know, that peak throughput is the "megapixels" of the internet; one needs to address other aspects in the benchmarks.
/Ingemar
> -----Original Message-----
> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of Keith
> Winstein
> Sent: den 8 januari 2013 13:44
> To: Ingemar Johansson S
> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
> mallman@icir.org
> Subject: Re: [e2e] bufferbloat paper
>
> Hello Ingemar,
>
> Thanks for your feedback and your own graph.
>
> This is testing the LTE downlink, not the uplink. It was a TCP download.
>
> There was zero packet loss on the ICMP pings. I did not measure the TCP
> flow itself but I suspect packet loss was minimal if not also zero.
>
> Best,
> Keith
>
> On Tue, Jan 8, 2013 at 7:19 AM, Ingemar Johansson S
> <ingemar.s.johansson@ericsson.com> wrote:
> > Hi
> >
> > Interesting graph, thanks for sharing it.
> > It is likely that the delay is only limited by TCPs maximum congestion
> window, for instance at T=70 the thoughput is ~15Mbps and the RTT~0.8s,
> giving a congestion window of 1.5e7/8/0.8 = 2343750 bytes, recalculations at
> other time instants seems to give a similar figure.
> > Do you see any packet loss ?
> >
> > The easiest way to mitigate bufferbloat in LTE UL is AQM in the terminal as
> the packets are buffered there.
> > The eNodeB does not buffer up packets in UL* so I would in this particular
> case argue that the problem is best solved in the terminal.
> > Implementing AQM for UL in eNodeB is probably doable but AFAIK nothing
> that is standardized also I cannot tell how feasible it is.
> >
> > /Ingemar
> >
> > BTW... UL = uplink
> > * RLC-AM retransmissions can be said to cause delay in the eNodeB but
> then again the main problem is that packets are being queued up in the
> terminals sendbuffer. The MAC layer HARQ can too cause some delay but
> this is a necessity to get an optimal performance for LTE, moreover the
> added delay due to HARQ reTx is marginal in this context.
> >
> >> -----Original Message-----
> >> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of
> >> Keith Winstein
> >> Sent: den 8 januari 2013 11:42
> >> To: Ingemar Johansson S
> >> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
> >> mallman@icir.org
> >> Subject: Re: [e2e] bufferbloat paper
> >>
> >> I'm sorry to report that the problem is not (in practice) better on
> >> LTE, even though the standard may support features that could be used
> >> to mitigate the problem.
> >>
> >> Here is a plot (also at
> >> http://web.mit.edu/keithw/www/verizondown.png)
> >> from a computer tethered to a Samsung Galaxy Nexus running Android
> >> 4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.
> >>
> >> The phone was stationary during the test and had four bars (a full
> >> signal) of "4G" service. The computer ran a single full-throttle TCP
> >> CUBIC download from one well-connected but unremarkable Linux host
> >> (ssh hostname 'cat /dev/urandom') while pinging at 4 Hz across the
> >> same tethered LTE interface. There were zero lost pings during the
> >> entire test
> >> (606/606 delivered).
> >>
> >> The RTT grows to 1-2 seconds and stays stable in that region for most
> >> of the test, except for one 12-second period of >5 seconds RTT. We
> >> have also tried measuring only "one-way delay" (instead of RTT) by
> >> sending UDP datagrams out of the computer's Ethernet interface over
> >> the Internet, over LTE to the cell phone and back to the originating
> >> computer via USB tethering. This gives similar results to ICMP ping.
> >>
> >> I don't doubt that the carriers could implement reasonable AQM or
> >> even a smaller buffer at the head-end, or that the phone could
> >> implement AQM for the uplink. For that matter I'm not sure the details of
> the air interface (LTE vs.
> >> UMTS vs. 1xEV-DO) necessarily makes a difference here.
> >>
> >> But at present, at least with AT&T, Verizon, Sprint and T-Mobile in
> >> Eastern Massachusetts, the carrier is willing to queue and hold on to
> >> packets for >1 second. Even a single long-running TCP download (>15
> >> megabytes) is enough to tickle this problem.
> >>
> >> In the CCR paper, even flows >1 megabyte were almost nonexistent,
> >> which may be part of how these findings are compatible.
> >>
> >> On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S
> >> <ingemar.s.johansson@ericsson.com> wrote:
> >> > Hi
> >> >
> >> > Include Mark's original post (below) as it was scrubbed
> >> >
> >> > I don't have an data of bufferbloat for wireline access and the
> >> > fiber
> >> connection that I have at home shows little evidence of bufferbloat.
> >> >
> >> > Wireless access seems to be a different story though.
> >> > After reading the "Tackling Bufferbloat in 3G/4G Mobile Networks"
> >> > by Jiang et al. I decided to make a few measurements of my own
> >> > (hope that the attached png is not removed)
> >> >
> >> > The measurement setup was quite simple, a Laptop with Ubuntu 12.04
> >> with a 3G modem attached.
> >> > The throughput was computed from the wireshark logs and RTT was
> >> measured with ping (towards a webserver hosted by Akamai). The
> >> location is Luleå city centre, Sweden (fixed locations) and the
> >> measurement was made at lunchtime on Dec 6 2012 .
> >> >
> >> > During the measurement session I did some close to normal websurf,
> >> including watching embedded videoclips and youtube. In some cases the
> >> effects of bufferbloat was clearly noticeable.
> >> > Admit that this is just one sample, a more elaborate study with
> >> > more
> >> samples would be interesting to see.
> >> >
> >> > 3G has the interesting feature that packets are very seldom lost in
> >> downlink (data going to the terminal). I did not see a single packet
> >> loss in this test!. I wont elaborate on the reasons in this email.
> >> > I would however believe that LTE is better off in this respect as
> >> > long as
> >> AQM is implemented, mainly because LTE is a packet-switched
> architecture.
> >> >
> >> > /Ingemar
> >> >
> >> > Marks post.
> >> > ********
> >> > [I tried to post this in a couple places to ensure I hit folks who
> >> > would be interested. If you end up with multiple copies of the
> >> > email, my apologies. --allman]
> >> >
> >> > I know bufferbloat has been an interest of lots of folks recently.
> >> > So, I thought I'd flog a recent paper that presents a little data
> >> > on the topic ...
> >> >
> >> > Mark Allman. Comments on Bufferbloat, ACM SIGCOMM Computer
> >> > Communication Review, 43(1), January 2013.
> >> > http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
> >> >
> >> > Its an initial paper. I think more data would be great!
> >> >
> >> > allman
> >> >
> >> >
> >> > --
> >> > http://www.icir.org/mallman/
> >> >
> >> >
> >> >
> >> >
* Re: [Bloat] [e2e] bufferbloat paper
2013-01-08 12:19 ` Ingemar Johansson S
@ 2013-01-08 12:44 ` Keith Winstein
2013-01-08 13:19 ` Ingemar Johansson S
0 siblings, 1 reply; 11+ messages in thread
From: Keith Winstein @ 2013-01-08 12:44 UTC (permalink / raw)
To: Ingemar Johansson S; +Cc: end2end-interest, bloat
Hello Ingemar,
Thanks for your feedback and your own graph.
This is testing the LTE downlink, not the uplink. It was a TCP download.
There was zero packet loss on the ICMP pings. I did not measure the
TCP flow itself but I suspect packet loss was minimal if not also
zero.
Best,
Keith
On Tue, Jan 8, 2013 at 7:19 AM, Ingemar Johansson S
<ingemar.s.johansson@ericsson.com> wrote:
> Hi
>
> Interesting graph, thanks for sharing it.
> It is likely that the delay is only limited by TCPs maximum congestion window, for instance at T=70 the thoughput is ~15Mbps and the RTT~0.8s, giving a congestion window of 1.5e7/8/0.8 = 2343750 bytes, recalculations at other time instants seems to give a similar figure.
> Do you see any packet loss ?
>
> The easiest way to mitigate bufferbloat in LTE UL is AQM in the terminal as the packets are buffered there.
> The eNodeB does not buffer up packets in UL* so I would in this particular case argue that the problem is best solved in the terminal.
> Implementing AQM for UL in eNodeB is probably doable but AFAIK nothing that is standardized also I cannot tell how feasible it is.
>
> /Ingemar
>
> BTW... UL = uplink
> * RLC-AM retransmissions can be said to cause delay in the eNodeB but then again the main problem is that packets are being queued up in the terminals sendbuffer. The MAC layer HARQ can too cause some delay but this is a necessity to get an optimal performance for LTE, moreover the added delay due to HARQ reTx is marginal in this context.
>
>> -----Original Message-----
>> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of Keith
>> Winstein
>> Sent: den 8 januari 2013 11:42
>> To: Ingemar Johansson S
>> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
>> mallman@icir.org
>> Subject: Re: [e2e] bufferbloat paper
>>
>> I'm sorry to report that the problem is not (in practice) better on LTE, even
>> though the standard may support features that could be used to mitigate the
>> problem.
>>
>> Here is a plot (also at http://web.mit.edu/keithw/www/verizondown.png)
>> from a computer tethered to a Samsung Galaxy Nexus running Android
>> 4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.
>>
>> The phone was stationary during the test and had four bars (a full
>> signal) of "4G" service. The computer ran a single full-throttle TCP CUBIC
>> download from one well-connected but unremarkable Linux host (ssh
>> hostname 'cat /dev/urandom') while pinging at 4 Hz across the same
>> tethered LTE interface. There were zero lost pings during the entire test
>> (606/606 delivered).
>>
>> The RTT grows to 1-2 seconds and stays stable in that region for most of the
>> test, except for one 12-second period of >5 seconds RTT. We have also tried
>> measuring only "one-way delay" (instead of RTT) by sending UDP datagrams
>> out of the computer's Ethernet interface over the Internet, over LTE to the
>> cell phone and back to the originating computer via USB tethering. This gives
>> similar results to ICMP ping.
>>
>> I don't doubt that the carriers could implement reasonable AQM or even a
>> smaller buffer at the head-end, or that the phone could implement AQM for
>> the uplink. For that matter I'm not sure the details of the air interface (LTE vs.
>> UMTS vs. 1xEV-DO) necessarily makes a difference here.
>>
>> But at present, at least with AT&T, Verizon, Sprint and T-Mobile in Eastern
>> Massachusetts, the carrier is willing to queue and hold on to packets for >1
>> second. Even a single long-running TCP download (>15
>> megabytes) is enough to tickle this problem.
>>
>> In the CCR paper, even flows >1 megabyte were almost nonexistent, which
>> may be part of how these findings are compatible.
>>
>> On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S
>> <ingemar.s.johansson@ericsson.com> wrote:
>> > Hi
>> >
>> > Include Mark's original post (below) as it was scrubbed
>> >
>> > I don't have an data of bufferbloat for wireline access and the fiber
>> connection that I have at home shows little evidence of bufferbloat.
>> >
>> > Wireless access seems to be a different story though.
>> > After reading the "Tackling Bufferbloat in 3G/4G Mobile Networks" by
>> > Jiang et al. I decided to make a few measurements of my own (hope that
>> > the attached png is not removed)
>> >
>> > The measurement setup was quite simple, a Laptop with Ubuntu 12.04
>> with a 3G modem attached.
>> > The throughput was computed from the wireshark logs and RTT was
>> measured with ping (towards a webserver hosted by Akamai). The location is
>> Luleå city centre, Sweden (fixed locations) and the measurement was made
>> at lunchtime on Dec 6 2012 .
>> >
>> > During the measurement session I did some close to normal websurf,
>> including watching embedded videoclips and youtube. In some cases the
>> effects of bufferbloat was clearly noticeable.
>> > Admit that this is just one sample, a more elaborate study with more
>> samples would be interesting to see.
>> >
>> > 3G has the interesting feature that packets are very seldom lost in
>> downlink (data going to the terminal). I did not see a single packet loss in this
>> test!. I wont elaborate on the reasons in this email.
>> > I would however believe that LTE is better off in this respect as long as
>> AQM is implemented, mainly because LTE is a packet-switched architecture.
>> >
>> > /Ingemar
>> >
>> > Marks post.
>> > ********
>> > [I tried to post this in a couple places to ensure I hit folks who
>> > would be interested. If you end up with multiple copies of the
>> > email, my apologies. --allman]
>> >
>> > I know bufferbloat has been an interest of lots of folks recently.
>> > So, I thought I'd flog a recent paper that presents a little data on
>> > the topic ...
>> >
>> > Mark Allman. Comments on Bufferbloat, ACM SIGCOMM Computer
>> > Communication Review, 43(1), January 2013.
>> > http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>> >
>> > Its an initial paper. I think more data would be great!
>> >
>> > allman
>> >
>> >
>> > --
>> > http://www.icir.org/mallman/
>> >
>> >
>> >
>> >
* Re: [Bloat] [e2e] bufferbloat paper
2013-01-08 10:42 ` [Bloat] [e2e] " Keith Winstein
@ 2013-01-08 12:19 ` Ingemar Johansson S
2013-01-08 12:44 ` Keith Winstein
2013-01-09 14:07 ` Michael Richardson
1 sibling, 1 reply; 11+ messages in thread
From: Ingemar Johansson S @ 2013-01-08 12:19 UTC (permalink / raw)
To: Keith Winstein; +Cc: mallman, end2end-interest, bloat
Hi
Interesting graph, thanks for sharing it.
It is likely that the delay is only limited by TCP's maximum congestion window: for instance, at T=70 the throughput is ~15 Mbps and the RTT ~0.8 s, giving a congestion window of 1.5e7/8 * 0.8 = 1500000 bytes; recalculations at other time instants seem to give a similar figure.
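A quick sanity check of that arithmetic, with the two values read off the graph:

throughput_bps = 15e6    # ~15 Mbps at T=70
rtt_s = 0.8              # ~0.8 s RTT at the same instant

# bytes in flight = rate (bytes/s) x RTT, i.e. the standing window
window_bytes = throughput_bps / 8 * rtt_s
print(f"~{window_bytes:,.0f} bytes in flight")   # ~1,500,000 bytes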
Do you see any packet loss?
The easiest way to mitigate bufferbloat in the LTE UL is AQM in the terminal, as the packets are buffered there.
The eNodeB does not buffer up packets in the UL*, so I would in this particular case argue that the problem is best solved in the terminal.
Implementing AQM for the UL in the eNodeB is probably doable, but AFAIK nothing is standardized, and I cannot tell how feasible it is.
/Ingemar
BTW... UL = uplink
* RLC-AM retransmissions can be said to cause delay in the eNodeB, but then again the main problem is that packets are being queued up in the terminal's send buffer. The MAC-layer HARQ can also cause some delay, but this is a necessity for optimal LTE performance; moreover, the added delay due to HARQ reTx is marginal in this context.
> -----Original Message-----
> From: winstein@gmail.com [mailto:winstein@gmail.com] On Behalf Of Keith
> Winstein
> Sent: den 8 januari 2013 11:42
> To: Ingemar Johansson S
> Cc: end2end-interest@postel.org; bloat@lists.bufferbloat.net;
> mallman@icir.org
> Subject: Re: [e2e] bufferbloat paper
>
> I'm sorry to report that the problem is not (in practice) better on LTE, even
> though the standard may support features that could be used to mitigate the
> problem.
>
> Here is a plot (also at http://web.mit.edu/keithw/www/verizondown.png)
> from a computer tethered to a Samsung Galaxy Nexus running Android
> 4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.
>
> The phone was stationary during the test and had four bars (a full
> signal) of "4G" service. The computer ran a single full-throttle TCP CUBIC
> download from one well-connected but unremarkable Linux host (ssh
> hostname 'cat /dev/urandom') while pinging at 4 Hz across the same
> tethered LTE interface. There were zero lost pings during the entire test
> (606/606 delivered).
>
> The RTT grows to 1-2 seconds and stays stable in that region for most of the
> test, except for one 12-second period of >5 seconds RTT. We have also tried
> measuring only "one-way delay" (instead of RTT) by sending UDP datagrams
> out of the computer's Ethernet interface over the Internet, over LTE to the
> cell phone and back to the originating computer via USB tethering. This gives
> similar results to ICMP ping.
>
> I don't doubt that the carriers could implement reasonable AQM or even a
> smaller buffer at the head-end, or that the phone could implement AQM for
> the uplink. For that matter I'm not sure the details of the air interface (LTE vs.
> UMTS vs. 1xEV-DO) necessarily makes a difference here.
>
> But at present, at least with AT&T, Verizon, Sprint and T-Mobile in Eastern
> Massachusetts, the carrier is willing to queue and hold on to packets for >1
> second. Even a single long-running TCP download (>15
> megabytes) is enough to tickle this problem.
>
> In the CCR paper, even flows >1 megabyte were almost nonexistent, which
> may be part of how these findings are compatible.
>
> On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S
> <ingemar.s.johansson@ericsson.com> wrote:
> > Hi
> >
> > Include Mark's original post (below) as it was scrubbed
> >
> > I don't have an data of bufferbloat for wireline access and the fiber
> connection that I have at home shows little evidence of bufferbloat.
> >
> > Wireless access seems to be a different story though.
> > After reading the "Tackling Bufferbloat in 3G/4G Mobile Networks" by
> > Jiang et al. I decided to make a few measurements of my own (hope that
> > the attached png is not removed)
> >
> > The measurement setup was quite simple, a Laptop with Ubuntu 12.04
> with a 3G modem attached.
> > The throughput was computed from the wireshark logs and RTT was
> measured with ping (towards a webserver hosted by Akamai). The location is
> Luleå city centre, Sweden (fixed locations) and the measurement was made
> at lunchtime on Dec 6 2012 .
> >
> > During the measurement session I did some close to normal websurf,
> including watching embedded videoclips and youtube. In some cases the
> effects of bufferbloat was clearly noticeable.
> > Admit that this is just one sample, a more elaborate study with more
> samples would be interesting to see.
> >
> > 3G has the interesting feature that packets are very seldom lost in
> downlink (data going to the terminal). I did not see a single packet loss in this
> test!. I wont elaborate on the reasons in this email.
> > I would however believe that LTE is better off in this respect as long as
> AQM is implemented, mainly because LTE is a packet-switched architecture.
> >
> > /Ingemar
> >
> > Marks post.
> > ********
> > [I tried to post this in a couple places to ensure I hit folks who
> > would be interested. If you end up with multiple copies of the
> > email, my apologies. --allman]
> >
> > I know bufferbloat has been an interest of lots of folks recently.
> > So, I thought I'd flog a recent paper that presents a little data on
> > the topic ...
> >
> > Mark Allman. Comments on Bufferbloat, ACM SIGCOMM Computer
> > Communication Review, 43(1), January 2013.
> > http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
> >
> > Its an initial paper. I think more data would be great!
> >
> > allman
> >
> >
> > --
> > http://www.icir.org/mallman/
> >
> >
> >
> >
* Re: [Bloat] [e2e] bufferbloat paper
2013-01-08 7:35 [Bloat] " Ingemar Johansson S
@ 2013-01-08 10:42 ` Keith Winstein
2013-01-08 12:19 ` Ingemar Johansson S
2013-01-09 14:07 ` Michael Richardson
2013-01-08 15:04 ` dpreed
1 sibling, 2 replies; 11+ messages in thread
From: Keith Winstein @ 2013-01-08 10:42 UTC (permalink / raw)
To: Ingemar Johansson S; +Cc: mallman, end2end-interest, bloat
I'm sorry to report that the problem is not (in practice) better on
LTE, even though the standard may support features that could be used
to mitigate the problem.
Here is a plot (also at http://web.mit.edu/keithw/www/verizondown.png)
from a computer tethered to a Samsung Galaxy Nexus running Android
4.0.4 on Verizon LTE service, taken just now in Cambridge, Mass.
The phone was stationary during the test and had four bars (a full
signal) of "4G" service. The computer ran a single full-throttle TCP
CUBIC download from one well-connected but unremarkable Linux host
(ssh hostname 'cat /dev/urandom') while pinging at 4 Hz across the
same tethered LTE interface. There were zero lost pings during the
entire test (606/606 delivered).
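For reference, a minimal sketch of this kind of harness in Python (the hostname is a placeholder, not the host used here):

import subprocess

HOST = "well-connected-host.example.com"    # placeholder

# Full-throttle TCP download, discarded locally (the 'cat /dev/urandom' trick).
download = subprocess.Popen(["ssh", HOST, "cat", "/dev/urandom"],
                            stdout=subprocess.DEVNULL)

# 600 pings at 4 Hz (~150 s) across the same interface while the link is loaded.
ping = subprocess.run(["ping", "-i", "0.25", "-c", "600", HOST],
                      capture_output=True, text=True)
download.terminate()

print("\n".join(ping.stdout.splitlines()[-2:]))  # loss and rtt min/avg/max lines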
The RTT grows to 1-2 seconds and stays stable in that region for most
of the test, except for one 12-second period of >5 seconds RTT. We
have also tried measuring only "one-way delay" (instead of RTT) by
sending UDP datagrams out of the computer's Ethernet interface over
the Internet, over LTE to the cell phone and back to the originating
computer via USB tethering. This gives similar results to ICMP ping.
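A sketch of that one-way-delay variant: because the datagram leaves over Ethernet and comes back to the very same machine over USB tethering, one clock timestamps both ends and no clock synchronization is needed. The address is a placeholder, and a trivial UDP forwarder is assumed to be running on the phone, relaying each datagram back over the tether:

import socket, struct, time

PHONE = ("203.0.113.55", 9999)   # placeholder for the phone's LTE-side address

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("0.0.0.0", 9998))     # return leg arrives via the USB interface
recv.settimeout(2.0)
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

for seq in range(600):
    # outbound leg goes via the default (Ethernet) route toward the phone
    send.sendto(struct.pack("!Id", seq, time.monotonic()), PHONE)
    try:
        data, _ = recv.recvfrom(64)
        rseq, t0 = struct.unpack("!Id", data)
        print(f"seq {rseq}: downlink-path delay {(time.monotonic()-t0)*1e3:.1f} ms")
    except socket.timeout:
        print(f"seq {seq}: lost")
    time.sleep(0.25)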
I don't doubt that the carriers could implement reasonable AQM or even
a smaller buffer at the head-end, or that the phone could implement
AQM for the uplink. For that matter I'm not sure the details of the
air interface (LTE vs. UMTS vs. 1xEV-DO) necessarily makes a
difference here.
But at present, at least with AT&T, Verizon, Sprint and T-Mobile in
Eastern Massachusetts, the carrier is willing to queue and hold on to
packets for >1 second. Even a single long-running TCP download (>15
megabytes) is enough to tickle this problem.
In the CCR paper, even flows >1 megabyte were almost nonexistent,
which may be part of how these findings are compatible.
On Tue, Jan 8, 2013 at 2:35 AM, Ingemar Johansson S
<ingemar.s.johansson@ericsson.com> wrote:
> Hi
>
> Include Mark's original post (below) as it was scrubbed
>
> I don't have an data of bufferbloat for wireline access and the fiber connection that I have at home shows little evidence of bufferbloat.
>
> Wireless access seems to be a different story though.
> After reading the "Tackling Bufferbloat in 3G/4G Mobile Networks" by Jiang et al. I decided to make a few measurements of my own (hope that the attached png is not removed)
>
> The measurement setup was quite simple, a Laptop with Ubuntu 12.04 with a 3G modem attached.
> The throughput was computed from the wireshark logs and RTT was measured with ping (towards a webserver hosted by Akamai). The location is Luleå city centre, Sweden (fixed locations) and the measurement was made at lunchtime on Dec 6 2012 .
>
> During the measurement session I did some close to normal websurf, including watching embedded videoclips and youtube. In some cases the effects of bufferbloat was clearly noticeable.
> Admit that this is just one sample, a more elaborate study with more samples would be interesting to see.
>
> 3G has the interesting feature that packets are very seldom lost in downlink (data going to the terminal). I did not see a single packet loss in this test!. I wont elaborate on the reasons in this email.
> I would however believe that LTE is better off in this respect as long as AQM is implemented, mainly because LTE is a packet-switched architecture.
>
> /Ingemar
>
> Marks post.
> ********
> [I tried to post this in a couple places to ensure I hit folks who would
> be interested. If you end up with multiple copies of the email, my
> apologies. --allman]
>
> I know bufferbloat has been an interest of lots of folks recently. So,
> I thought I'd flog a recent paper that presents a little data on the
> topic ...
>
> Mark Allman. Comments on Bufferbloat, ACM SIGCOMM Computer
> Communication Review, 43(1), January 2013.
> http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>
> Its an initial paper. I think more data would be great!
>
> allman
>
>
> --
> http://www.icir.org/mallman/
>
>
>
>
[-- Attachment #2: verizondown.png --]
[-- Type: image/png, Size: 17545 bytes --]
end of thread, other threads:[~2013-01-10 13:48 UTC | newest]
Thread overview: 11+ messages
2013-01-10 13:48 [Bloat] [e2e] bufferbloat paper dpreed
-- strict thread matches above, loose matches on Subject: below --
2013-01-08 7:35 [Bloat] " Ingemar Johansson S
2013-01-08 10:42 ` [Bloat] [e2e] " Keith Winstein
2013-01-08 12:19 ` Ingemar Johansson S
2013-01-08 12:44 ` Keith Winstein
2013-01-08 13:19 ` Ingemar Johansson S
2013-01-08 15:29 ` dpreed
2013-01-08 16:40 ` Mark Allman
2013-01-09 14:07 ` Michael Richardson
2013-01-10 7:37 ` Keith Winstein
2013-01-10 13:46 ` Michael Richardson
2013-01-08 15:04 ` dpreed