* [Bloat] Bufferbloat Paper
@ 2013-01-07 23:37 Hagen Paul Pfeifer
2013-01-08 0:33 ` Dave Taht
` (2 more replies)
0 siblings, 3 replies; 33+ messages in thread
From: Hagen Paul Pfeifer @ 2013-01-07 23:37 UTC (permalink / raw)
To: bloat
FYI: "Comments on Bufferbloat" paper from Mark Allman
http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
Cheers, Hagen
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] Bufferbloat Paper
2013-01-07 23:37 [Bloat] Bufferbloat Paper Hagen Paul Pfeifer
@ 2013-01-08 0:33 ` Dave Taht
2013-01-08 0:40 ` David Lang
2013-01-08 2:04 ` Mark Watson
2013-01-08 1:54 ` Stephen Hemminger
2013-01-09 20:05 ` Michael Richardson
2 siblings, 2 replies; 33+ messages in thread
From: Dave Taht @ 2013-01-08 0:33 UTC (permalink / raw)
To: Hagen Paul Pfeifer; +Cc: bloat
"We use a packet trace collection taken from the Case Con-
nection Zone (CCZ) [1] experimental fiber-to-the-home net-
work which connects roughly 90 homes adjacent to Case
Western Reserve University’s campus with **bi-directional 1 Gbps
links**. "
Aside from their dataset having absolutely no reflection on the
reality of the 99.999% of home users running at speeds two or three or
*more* orders of magnitude below that speed, it seems like a nice
paper.
On Mon, Jan 7, 2013 at 3:37 PM, Hagen Paul Pfeifer <hagen@jauu.net> wrote:
>
> FYI: "Comments on Bufferbloat" paper from Mark Allman
>
>
> http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>
>
> Cheers, Hagen
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
* Re: [Bloat] Bufferbloat Paper
2013-01-08 0:33 ` Dave Taht
@ 2013-01-08 0:40 ` David Lang
2013-01-08 2:04 ` Mark Watson
1 sibling, 0 replies; 33+ messages in thread
From: David Lang @ 2013-01-08 0:40 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
When your connections are that fast, there's very little buffering going on,
because your WAN is just as fast as your LAN.
Queuing takes place when the next hop has less bandwidth available than the
prior hop.
However, it would be interesting to see if someone could take the tools they
used, put them in a datacenter somewhere and analyse the results.
David Lang
On Mon, 7 Jan 2013, Dave Taht wrote:
> "We use a packet trace collection taken from the Case Con-
> nection Zone (CCZ) [1] experimental fiber-to-the-home net-
> work which connects roughly 90 homes adjacent to Case
> Western Reserve University’s campus with **bi-directional 1 Gbps
> links**. "
>
> Aside from their dataset having absolutely no reflection on the
> reality of the 99.999% of home users running at speeds two or three or
> *more* orders of magnitude below that speed, it seems like a nice
> paper.
>
>
> On Mon, Jan 7, 2013 at 3:37 PM, Hagen Paul Pfeifer <hagen@jauu.net> wrote:
>>
>> FYI: "Comments on Bufferbloat" paper from Mark Allman
>>
>>
>> http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>>
>>
>> Cheers, Hagen
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
* Re: [Bloat] Bufferbloat Paper
2013-01-08 0:33 ` Dave Taht
2013-01-08 0:40 ` David Lang
@ 2013-01-08 2:04 ` Mark Watson
2013-01-08 2:24 ` David Lang
2013-01-08 4:52 ` Mark Watson
1 sibling, 2 replies; 33+ messages in thread
From: Mark Watson @ 2013-01-08 2:04 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
On Jan 7, 2013, at 4:33 PM, Dave Taht wrote:
> "We use a packet trace collection taken from the Case Con-
> nection Zone (CCZ) [1] experimental fiber-to-the-home net-
> work which connects roughly 90 homes adjacent to Case
> Western Reserve University’s campus with **bi-directional 1 Gbps
> links**. "
>
> Aside from their dataset having absolutely no reflection on the
> reality of the 99.999% of home users running at speeds two or three or
> *more* orders of magnitude below that speed, it seems like a nice
> paper.
Actually they analyze the delay between the measurement point in CCZ and the *remote* peer, splitting out residential and non-residential peers. 57% of the peers are residential. Sounds like a lot of the traffic is p2p. You could argue that the remote, residential p2p peers are not on "typical" connections and that this traffic doesn't follow the time-of-day usage patterns expected for applications with a live human in front of them.
...Mark
>
>
> On Mon, Jan 7, 2013 at 3:37 PM, Hagen Paul Pfeifer <hagen@jauu.net> wrote:
>>
>> FYI: "Comments on Bufferbloat" paper from Mark Allman
>>
>>
>> http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>>
>>
>> Cheers, Hagen
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
* Re: [Bloat] Bufferbloat Paper
2013-01-08 2:04 ` Mark Watson
@ 2013-01-08 2:24 ` David Lang
2013-01-09 20:08 ` Michael Richardson
2013-01-08 4:52 ` Mark Watson
1 sibling, 1 reply; 33+ messages in thread
From: David Lang @ 2013-01-08 2:24 UTC (permalink / raw)
To: Mark Watson; +Cc: bloat
On Tue, 8 Jan 2013, Mark Watson wrote:
> On Jan 7, 2013, at 4:33 PM, Dave Taht wrote:
>
>> "We use a packet trace collection taken from the Case Con-
>> nection Zone (CCZ) [1] experimental fiber-to-the-home net-
>> work which connects roughly 90 homes adjacent to Case
>> Western Reserve University’s campus with **bi-directional 1 Gbps
>> links**. "
>>
>> Aside from their dataset having absolutely no reflection on the
>> reality of the 99.999% of home users running at speeds two or three or
>> *more* orders of magnitude below that speed, it seems like a nice
>> paper.
>
> Actually they analyze the delay between the measurement point in CCZ and the
> *remote* peer, splitting out residential and non-residential peers. 57% of the
> peers are residential. Sounds like a lot of the traffic is p2p. You could
> argue that the remote, residential p2p peers are not on "typical" connections
> and that this traffic doesn't follow the time-of-day usage patterns expected
> for applications with a live human in front of them.
But if the "remote peer" is on a 1Gbps link, that hardly reflects normal
conditions.
Typical conditions are:

              1G             1M
    desktop ----- firewall ---- Internet

It's this transition from 1G to 1M that causes data to be buffered. If you have
1G on both sides of the home firewall, then it's unlikely that very much data is
going to be buffered there.
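As a rough back-of-envelope (my own sketch, not something from the thread or the paper), the rate at which a fast-to-slow transition forces buffering falls straight out of the bandwidth mismatch:

```python
# Illustration of why the 1G -> 1M hop is where the queue forms: any
# input rate above the output rate must be absorbed by the buffer
# (ignoring drops). All numbers here are hypothetical.

def queue_growth_bytes(in_bps: float, out_bps: float, seconds: float) -> float:
    """Bytes queued after `seconds` of sustained input at `in_bps`
    draining at `out_bps`; zero when the output link is faster."""
    surplus_bps = max(in_bps - out_bps, 0.0)
    return surplus_bps / 8.0 * seconds

# 1 Gbps in, 1 Mbps out: roughly 1.25 MB must be buffered in just 10 ms.
print(queue_growth_bytes(1e9, 1e6, 0.01))

# 1 Gbps on both sides of the firewall: nothing queues there.
print(queue_growth_bytes(1e9, 1e9, 0.01))  # 0.0
```

The second call is the CCZ case David describes: with symmetric gigabit links the home gateway has no rate mismatch to absorb, so the interesting queues are elsewhere on the path.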
David Lang
* Re: [Bloat] Bufferbloat Paper
2013-01-08 2:04 ` Mark Watson
2013-01-08 2:24 ` David Lang
@ 2013-01-08 4:52 ` Mark Watson
1 sibling, 0 replies; 33+ messages in thread
From: Mark Watson @ 2013-01-08 4:52 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
On Jan 7, 2013, at 4:33 PM, Dave Taht wrote:
> "We use a packet trace collection taken from the Case Con-
> nection Zone (CCZ) [1] experimental fiber-to-the-home net-
> work which connects roughly 90 homes adjacent to Case
> Western Reserve University’s campus with **bi-directional 1 Gbps
> links**. "
>
> Aside from their dataset having absolutely no reflection on the
> reality of the 99.999% of home users running at speeds two or three or
> *more* orders of magnitude below that speed, it seems like a nice
> paper.
Actually they analyze the delay between the measurement point in CCZ and the *remote* peer, splitting out residential and non-residential peers. 57% of the peers are residential. Sounds like a lot of the traffic is p2p. You could argue that the remote, residential p2p peers are not on "typical" connections and that this traffic doesn't follow the time-of-day usage patterns expected for applications with a live human in front of them.
...Mark
>
>
> On Mon, Jan 7, 2013 at 3:37 PM, Hagen Paul Pfeifer <hagen@jauu.net> wrote:
>>
>> FYI: "Comments on Bufferbloat" paper from Mark Allman
>>
>>
>> http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>>
>>
>> Cheers, Hagen
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
* Re: [Bloat] Bufferbloat Paper
2013-01-07 23:37 [Bloat] Bufferbloat Paper Hagen Paul Pfeifer
2013-01-08 0:33 ` Dave Taht
@ 2013-01-08 1:54 ` Stephen Hemminger
2013-01-08 2:15 ` Oliver Hohlfeld
` (2 more replies)
2013-01-09 20:05 ` Michael Richardson
2 siblings, 3 replies; 33+ messages in thread
From: Stephen Hemminger @ 2013-01-08 1:54 UTC (permalink / raw)
To: Hagen Paul Pfeifer; +Cc: bloat
The tone of the paper is a bit of "if academics don't analyze it to death
it must not exist". The facts are interesting, but the interpretation ignores
the human element. If humans perceive delay ("Daddy, the Internet is slow"), then
they will change their behavior to avoid the problem: "it hurts when I download,
so I will do it later".
* Re: [Bloat] Bufferbloat Paper
2013-01-08 1:54 ` Stephen Hemminger
@ 2013-01-08 2:15 ` Oliver Hohlfeld
2013-01-08 12:44 ` Toke Høiland-Jørgensen
2013-01-08 17:22 ` Dave Taht
2 siblings, 0 replies; 33+ messages in thread
From: Oliver Hohlfeld @ 2013-01-08 2:15 UTC (permalink / raw)
To: bloat
On Mon, Jan 07, 2013 at 05:54:17PM -0800, Stephen Hemminger wrote:
> The tone of the paper is a bit of "if academics don't analyze it to death
> it must not exist".
This does not reflect statements made in the paper; the paper
does acknowledge the /existence/ of the problem.
What the paper discusses is the frequency and extent of the problem.
Using data representing residential users in multiple countries,
I can basically confirm the paper's statement that high RTTs are not
widely observed. The causes of high RTTs are manifold and include
more than just bufferbloat. My data also suggests that it is
a problem that does not occur frequently, one reason being that
users do not often saturate their uplink.
> The facts are interesting, but the interpretation ignores
> the human element.
Indeed.
> If human's perceive delay "Daddy the Internet is slow", then
> they will change their behavior to avoid the problem: "it hurts when I download,
> so I will do it later".
Speculative, but one interpretation. Chances that downloads hurt
are small.
Oliver
* Re: [Bloat] Bufferbloat Paper
2013-01-08 1:54 ` Stephen Hemminger
2013-01-08 2:15 ` Oliver Hohlfeld
@ 2013-01-08 12:44 ` Toke Høiland-Jørgensen
2013-01-08 13:55 ` Mark Allman
2013-01-08 14:04 ` Mark Allman
2013-01-08 17:22 ` Dave Taht
2 siblings, 2 replies; 33+ messages in thread
From: Toke Høiland-Jørgensen @ 2013-01-08 12:44 UTC (permalink / raw)
To: bloat
Stephen Hemminger <shemminger@vyatta.com>
writes:
> The tone of the paper is a bit of "if academics don't analyze it to
> death it must not exist". The facts are interesting, but the
> interpretation ignores the human element. If human's perceive delay
> "Daddy the Internet is slow", then they will change their behavior to
> avoid the problem: "it hurts when I download, so I will do it later".
Well, severe latency spikes caused by bufferbloat are relatively
transient in nature. If connections were constantly severely bloated,
the internet would be unusable and the problem would probably
(hopefully?) have been spotted and fixed long ago. As far as I can tell
from their graphs, ~5% of connections to "residential" hosts exhibit
added delays of >=400 milliseconds, a delay that is certainly noticeable
and would make interactive applications (gaming, voip etc.) pretty much
unusable.
Now, I may be jumping to conclusions here, but I couldn't find anything
about how their samples were distributed. However, assuming the worst,
if these are 5% of all connections to all peers, each peer will have a
latency spike of at least 400 milliseconds for one second every 20
seconds (on average). Which is certainly enough to make a phone call
choppy, or get you killed in a fast-paced FPS.
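The arithmetic behind that "one second every 20 seconds" figure can be made explicit (my illustration; the one-second spike duration and the even distribution are exactly the assumptions Toke states above):

```python
# Back-of-envelope for Toke's estimate: if a fraction of time is spent
# in >=400 ms latency spikes and each spike lasts about one second, how
# often does a peer hit one on average? Assumed inputs, not measured data.

def spike_interval_s(spike_fraction: float, spike_duration_s: float) -> float:
    """Average seconds between spike onsets, given the fraction of time
    spent in spikes and the duration of a single spike."""
    return spike_duration_s / spike_fraction

# 5% of samples bloated, ~1 s per spike -> one spike every 20 s on average.
print(spike_interval_s(0.05, 1.0))  # 20.0
```

Of course, if the bloated samples cluster on a minority of paths or times of day, most peers would see spikes far less often than this average suggests, which is exactly the distribution question Toke raises next.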
It would be interesting if a large-scale test like this could flush out
how big a percentage of hosts do occasionally experience bufferbloat,
and how many never do.
-Toke
--
Toke Høiland-Jørgensen
toke@toke.dk
* Re: [Bloat] Bufferbloat Paper
2013-01-08 12:44 ` Toke Høiland-Jørgensen
@ 2013-01-08 13:55 ` Mark Allman
2013-01-09 0:03 ` David Lang
2013-01-09 20:14 ` Michael Richardson
2013-01-08 14:04 ` Mark Allman
1 sibling, 2 replies; 33+ messages in thread
From: Mark Allman @ 2013-01-08 13:55 UTC (permalink / raw)
To: bloat
Let me make a few general comments here ...
(0) The goal is to bring *some* *data* to the conversation. To
understand the size and scope of the bufferbloat problem it seems to me
we need data.
(1) My goal is to make some observations of the queuing (/delay
variation) in the non-FTTH portion of the network path. As folks
have pointed out, it's unlikely bufferbloat is much of a problem in
the 1Gbps portion of the network I monitor.
(2) The network I am monitoring looks like this ...
LEH -> IHR -> SW -> Internet -> REH
where, "LEH" is the local end host and "IHR" is the in-home router
provided by the FTTH project. The connection between the LEH and
the IHR can either be wired (at up to 1Gbps) or wireless (at much
less than 1Gbps, but I forget the actual wireless technology used on
the IHR). The IHRs are all run into a switch (SW) at 1Gbps. The
switch connects to the Internet via a 1Gbps link (so, this is a
theoretical bottleneck right here ...). The "REH" is the remote end
host. We monitor via mirroring on SW.
The delay we measure is from SW to REH and back. So, the fact that
this is a 1Gbps environment for local users is really not material.
The REHs are whatever the local users decide to talk to. I have no
idea what the edge bandwidth on the remote side is, but I presume it
is generally not a Gbps (especially for the residential set).
So, if you wrote off the paper after the sentence that noted the
data was collected within an FTTH project, I'd invite you to read
further.
(3) This data is not ideal. Ideally I'd like to directly measure queues
in a bazillion places. That'd be fabulous. But, I am working with
what I have. I have traces that offer windows into the actual queue
occupancy when the local users I monitor engage particular remote
endpoints. Is this representative of the delays I'd find when the
local users are not engaging the remote end system? I have no
idea. I'd certainly like to know. But, the data doesn't tell me.
I am reporting what I have. It is something. And, it is more than
I have seen reported anywhere else. Folks should go collect more
data.
(And, note, this is not a knock on the folks---some of them my
colleagues---who have quite soundly assessed potential queue sizes
by trying to jam as much into the queue as possible and measuring
the worst case delays. That is well and good. It establishes a
bound and that there is the potential for problems. But, it does
not speak to what queue occupancy actually looks like. This latter
is what I am after.)
allman
* Re: [Bloat] Bufferbloat Paper
2013-01-08 13:55 ` Mark Allman
@ 2013-01-09 0:03 ` David Lang
2013-01-10 13:01 ` Mark Allman
2013-01-09 20:14 ` Michael Richardson
1 sibling, 1 reply; 33+ messages in thread
From: David Lang @ 2013-01-09 0:03 UTC (permalink / raw)
To: bloat
On Tue, 08 Jan 2013 08:55:10 -0500, Mark Allman wrote:
> Let me make a few general comments here ...
>
> (0) The goal is to bring *some* *data* to the conversation. To
> understand the size and scope of the bufferbloat problem it seems
> to me we need data.
no disagreement here.
> (1) My goal is to make some observations of the queuing (/delay
> variation) in the non-FTTH portion of the network path. As folks
> have pointed out, it's unlikely bufferbloat is much of a problem
> in the 1Gbps portion of the network I monitor.
> (2) The network I am monitoring looks like this ...
>
> LEH -> IHR -> SW -> Internet -> REH
>
> where, "LEH" is the local end host and "IHR" is the in-home router
> provided by the FTTH project. The connection between the LEH and
> the IHR can either be wired (at up to 1Gbps) or wireless (at much
> less than 1Gbps, but I forget the actual wireless technology used
> on the IHR). The IHRs are all run into a switch (SW) at 1Gbps. The
> switch connects to the Internet via a 1Gbps link (so, this is a
> theoretical bottleneck right here ...). The "REH" is the remote
> end host. We monitor via mirroring on SW.
>
> The delay we measure is from SW to REH and back. So, the fact that
> this is a 1Gbps environment for local users is really not material.
> The REHs are whatever the local users decide to talk to. I have no
> idea what the edge bandwidth on the remote side is, but I presume
> it is generally not a Gbps (especially for the residential set).
>
> So, if you wrote off the paper after the sentence that noted the
> data was collected within an FTTH project, I'd invite you to read
> further.
The issue is that if the home user has a 1G uplink to you, and then you
have a 1G uplink to the Internet, there is not going to be very much if
any congestion in place. The only place where you are going to have any
buffering is in your 1G uplink to the Internet (and only if there is
enough traffic to cause congestion there).
In the 'normal' residential situation, the LEH -> THR connection is
probably 1G if wired, but the THR -> SW connection is likely to be <1M.
Therefore the THR ends up buffering the outbound traffic.
> (3) This data is not ideal. Ideally I'd like to directly measure
> queues in a bazillion places. That'd be fabulous. But, I am
> working with what I have. I have traces that offer windows into
> the actual queue occupancy when the local users I monitor engage
> particular remote endpoints. Is this representative of the delays
> I'd find when the local users are not engaging the remote end
> system? I have no idea. I'd certainly like to know. But, the
> data doesn't tell me. I am reporting what I have. It is
> something. And, it is more than I have seen reported anywhere
> else. Folks should go collect more data.
>
> (And, note, this is not a knock on the folks---some of them my
> colleagues---who have quite soundly assessed potential queue sizes
> by trying to jam as much into the queue as possible and measuring
> the worst case delays. That is well and good. It establishes a
> bound and that there is the potential for problems. But, it does
> not speak to what queue occupancy actually looks like. This
> latter is what I am after.)
The biggest problem I had with the paper was that it seemed to be
taking the tone "we measured and didn't find anything in this network,
so bufferbloat is not a real problem".
It may not be a problem in your network, but your network is very
unusual due to the high speed links to the end-users.
Even there, the 400ms delays that you found could be indications of the
problem, though how bad their impact is is hard to say. If 5% of the
packets have 400ms latency, that would seem to me to be rather
significant. It's not the collapse that other people have been
reporting, but given your high bandwidth, I wouldn't expect to see that
sort of collapse take place.
David Lang
* Re: [Bloat] Bufferbloat Paper
2013-01-09 0:03 ` David Lang
@ 2013-01-10 13:01 ` Mark Allman
0 siblings, 0 replies; 33+ messages in thread
From: Mark Allman @ 2013-01-10 13:01 UTC (permalink / raw)
To: David Lang; +Cc: bloat
> > (2) The network I am monitoring looks like this ...
> >
> > LEH -> IHR -> SW -> Internet -> REH
> >
> > where, "LEH" is the local end host and "IHR" is the in-home
> > router provided by the FTTH project. The connection between the
> > LEH and the IHR can either be wired (at up to 1Gbps) or wireless
> > (at much less than 1Gbps, but I forget the actual wireless
> > technology used on the IHR). The IHRs are all run into a switch
> > (SW) at 1Gbps. The switch connects to the Internet via a 1Gbps
> > link (so, this is a theoretical bottleneck right here ...). The
> > "REH" is the remote end host. We monitor via mirroring on SW.
> >
> > The delay we measure is from SW to REH and back.
>
> The issue is that if the home user has a 1G uplink to you, and then
> you hae a 1G uplink to the Internet, there is not going to be very
> much if any congestion in place. The only place where you are going
> to have any buffering is in your 1G uplink to the Internet (and only
> if there is enough traffic to cause congestion here)
>
> In the 'normal' residential situation, the LEH -> THR connection is
> probably 1G if wired, but the THR -> SW connection is likely to be
> <1M. Therefor the THR ends up buffering the outbound traffic.
(I assume 'THR' is what I called 'IHR'.)
You are too focused on the local side of the network that produced the
traffic and you are not understanding what was actually measured. As I
say above, the delays are measured from SW to REH and back. That *does
not* involve the LEH or the IHR in any way. Look at the picture and
think about it for a minute. Read my email. Read the email from others
who have also tried to clarify the issue.
Let me try one more time to be as absolutely explicit as I can be. Say,
...
- LEH sends a data packet D that is destined for REH at time t0.
- D is forwarded by IHR at time t1.
- D is both forwarded by SW and recorded in my packet trace at time
t2.
- D traverses the wide area Internet and arrives at REH (which is
whatever LEH happens to be talking to; not something I control or
can monitor) at time t3.
- At time t4 the REH will transmit an ACK A for data packet D.
- A will go back across the wide-area Internet and eventually hit SW
at time t5. A will be both forwarded to IHR and recorded in my
packet trace at this time.
- A will be forwarded by IHR at time t6.
- A will arrive at LEH at time t7.
The RTT sample I will take from this exchange is t5-t2. Your discussion
focuses on t7-t5 (the downlink) and t2-t0 (the uplink). In other
words, you are talking about something different from what is presented
in the paper. If you want to comment on the paper, that is fine. But,
you should comment on what the paper says or what the paper is lacking
and not try to distort what the paper presents.
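Mark's timeline boils down to pairing each data packet seen at SW (t2) with the first ACK covering it that returns through SW (t5). A minimal sketch of that pairing (my illustration with hypothetical names and integer millisecond timestamps; this is not the paper's actual tooling):

```python
# Sketch of the t5 - t2 RTT sampling at the mirror point SW: each data
# packet is matched with the first cumulative ACK that covers it, so the
# sample excludes the LEH<->IHR<->SW segment entirely.

def rtt_samples(data_pkts, acks):
    """data_pkts: list of (t2_ms, end_seq) seen leaving SW, in time order.
    acks: list of (t5_ms, ack_no) seen returning through SW, in time order.
    Returns t5 - t2 for each data packet's first covering ACK."""
    samples = []
    ai = 0  # cumulative ACKs never regress, so we scan them once
    for t2, end_seq in data_pkts:
        while ai < len(acks) and acks[ai][1] < end_seq:
            ai += 1
        if ai == len(acks):
            break  # no ACK covering this packet in the trace
        samples.append(acks[ai][0] - t2)
    return samples

# Data seen at SW at 0 ms and 100 ms; covering ACKs back at 250 ms and 370 ms.
print(rtt_samples([(0, 1000), (100, 2000)],
                  [(250, 1000), (370, 2000)]))  # [250, 270]
```

The point the sketch makes concrete: t0, t1, t6, and t7 never appear, so the gigabit FTTH segment contributes nothing to the samples, only the wide-area path from SW to the remote peer does.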
I fully understand that these FTTH links to the Internet are abnormal.
And, as such, if I were looking for buffers on the FTTH side of things
that'd be bogus. But, I am not doing that, regardless of how many
times you say it. The measurements are not of 90 homes, but of 118K
remote peers that the 90 homes happened to communicate with (and the
networks used to reach those 118K peers). If it helps, you can think of
the 90 homes as just 90 homes connected to the Internet by whatever
technology (DSL, cable, fiber, wireless ...). The measurements are not
concerned with the networks inside of, or connecting, these 90 homes.
Look, I could complain about my own paper all day long and twice as much
on Sunday. There are plenty of ways it is lacking. Others could no
doubt do the same. I have tried hard to use the right perspective and
to say what this data does and does not show. I have done that on this
list and in the paper itself. E.g., these are the first two bullets in
the Future Work section:
\item Bringing datasets from additional vantage points to bear on
the questions surrounding bufferbloat is unquestionably useful.
While we study bufferbloat related to 118K peers for some modest
period of time (up to one week), following more peers and over the
course of a longer period would be useful.
\item While we are able to assess 118K peers, we are only able to do
so opportunistically when a host on the network we monitor
communicates with those peers. A vantage point that provides a
more comprehensive view of residential peers' behavior would be
useful.
So, complain away if you'd like. I don't mind at all. But, at least
complain about what is in the paper and what is actually measured.
Please.
allman
* Re: [Bloat] Bufferbloat Paper
2013-01-08 13:55 ` Mark Allman
2013-01-09 0:03 ` David Lang
@ 2013-01-09 20:14 ` Michael Richardson
2013-01-09 20:19 ` Mark Allman
1 sibling, 1 reply; 33+ messages in thread
From: Michael Richardson @ 2013-01-09 20:14 UTC (permalink / raw)
To: mallman; +Cc: bloat
>>>>> "Mark" == Mark Allman <mallman@icir.org> writes:
Mark> less than 1Gbps, but I forget the actual wireless technology used on
Mark> the IHR). The IHRs are all run into a switch (SW) at 1Gbps. The
Mark> switch connects to the Internet via a 1Gbps link (so, this is a
Mark> theoretical bottleneck right here ...). The "REH" is the remote end
Mark> host. We monitor via mirroring on SW.
1) do you max out your 1Gb/s uplink at all?
2) have you investigated bufferbloat on that port of the switch?
(and do you have congestion issues on your mirror port?
I guess that the point of the loss analysis...)
Mark> (3) This data is not ideal. Ideally I'd like to directly
Mark> measure queues
Mark> in a bazillion places. That'd be fabulous. But, I am working with
Mark> what I have. I have traces that offer windows into the actual queue
Mark> occupancy when the local users I monitor engage particular remote
Mark> endpoints. Is this representative of the delays I'd find when the
Mark> local users are not engaging the remote end system? I have no
Mark> idea. I'd certainly like to know. But, the data doesn't tell me.
Mark> I am reporting what I have. It is something. And, it is more than
Mark> I have seen reported anywhere else. Folks should go collect more
Mark> data.
Thank you for this.
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | network architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
* Re: [Bloat] Bufferbloat Paper
2013-01-09 20:14 ` Michael Richardson
@ 2013-01-09 20:19 ` Mark Allman
2013-01-09 20:31 ` Michael Richardson
0 siblings, 1 reply; 33+ messages in thread
From: Mark Allman @ 2013-01-09 20:19 UTC (permalink / raw)
To: Michael Richardson; +Cc: bloat
> >>>>> "Mark" == Mark Allman <mallman@icir.org> writes:
> Mark> less than 1Gbps, but I forget the actual wireless technology
> Mark> used on
> Mark> the IHR). The IHRs are all run into a switch (SW) at 1Gbps. The
> Mark> switch connects to the Internet via a 1Gbps link (so, this is a
> Mark> theoretical bottleneck right here ...). The "REH" is the
> Mark> remote end
> Mark> host. We monitor via mirroring on SW.
>
> 1) do you max out your 1Gb/s uplink at all?
No. I do not believe we have seen peaks anywhere close to 1Gbps.
> 2) have you investigated bufferbloat on that port of the switch?
> (and do you have congestion issues on your mirror port?
> I guess that the point of the loss analysis...)
Correct - that is the point of the measurement loss analysis. We
believe we are losing very few packets during the measurement process.
Therefore, we believe the traces to be a faithful representation of what
has happened on-the-wire.
allman
* Re: [Bloat] Bufferbloat Paper
2013-01-09 20:19 ` Mark Allman
@ 2013-01-09 20:31 ` Michael Richardson
2013-01-10 18:05 ` Mark Allman
0 siblings, 1 reply; 33+ messages in thread
From: Michael Richardson @ 2013-01-09 20:31 UTC (permalink / raw)
To: mallman; +Cc: bloat
>>>>> "Mark" == Mark Allman <mallman@icir.org> writes:
>> 1) do you max out your 1Gb/s uplink at all?
Mark> No. I do not believe we have seen peaks anywhere close to 1Gbps.
Nice ... what amount of oversubscription does this represent?
What is the layer-3 architecture for the CCZ? Does traffic between
residences come through the "head end" the way it does in cable, or does
it cut across at layer 2 such that, perhaps, you cannot see it?
Assuming that you can see it..
Have you considered isolating the data samples which are from CCZ and to
CCZ, and then trying to predict which flows might involve 802.11b or g
final hops?
You mentioned that CCZ provides a home router... do you know anything
about that?
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | network architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
* Re: [Bloat] Bufferbloat Paper
2013-01-09 20:31 ` Michael Richardson
@ 2013-01-10 18:05 ` Mark Allman
0 siblings, 0 replies; 33+ messages in thread
From: Mark Allman @ 2013-01-10 18:05 UTC (permalink / raw)
To: Michael Richardson; +Cc: bloat
> >>>>> "Mark" == Mark Allman <mallman@icir.org> writes:
> >> 1) do you max out your 1Gb/s uplink at all?
>
> Mark> No. I do not believe we have seen peaks anywhere close to 1Gbps.
>
> nice.. what amount of oversubscription does this represent?
Um, something like "a metric shitload". :-)
The overall FTTH experiment is basically posing the question "what could
we use networks for if we took away the capacity limits?".
> What is the layer-3 architecture for the CCZ? Does traffic between
> residences come through the "head end" the way it does in cable, or
> does it cut-across at layer-2, and perhaps, you can not see it?
The homes run into a switch. Traffic between homes is taken care of
without going further. There is a 1Gbps link out of the switch to the
ISP. We mirror the port that connects the switch to the outside world.
So, we have no visibility into traffic that stays within the CCZ.
> Have you considered isolating the data samples which are from CCZ and
> to CCZ, and then tried to predict which flows might involved 802.11b
> or g final hops?
We have not done that.
allman
* Re: [Bloat] Bufferbloat Paper
2013-01-08 12:44 ` Toke Høiland-Jørgensen
2013-01-08 13:55 ` Mark Allman
@ 2013-01-08 14:04 ` Mark Allman
1 sibling, 0 replies; 33+ messages in thread
From: Mark Allman @ 2013-01-08 14:04 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat
> graphs, ~5% of connections to "residential" hosts exhibit added delays
> of >=400 milliseconds, a delay that is certainly noticeable and would
> make interactive applications (gaming, voip etc) pretty much unusable.
Note the paper does not work in units of *connections* in section 2, but
rather in terms of *RTT samples*. So, nearly 5% of the RTT samples add
>= 400msec to the base delay measured for the given remote (in the
"residential" case).
(I am not disagreeing that 400msec of added delay would be noticeable.
I am simply stating what the data actually shows.)
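To make the units concrete, here is a toy sketch (not the paper's actual analysis code; the data layout and numbers are purely illustrative) of counting *RTT samples* whose delay over the per-remote base exceeds a threshold:

```python
# Toy sketch, not the paper's pipeline: the per-remote base RTT is taken
# as the minimum observed sample, "added delay" is each sample minus that
# base, and the reported fraction is over samples, not connections.
from collections import defaultdict

def added_delay_fraction(samples, threshold=0.400):
    """samples: list of (remote, rtt_seconds) tuples; returns the fraction
    of RTT samples whose delay over the per-remote minimum >= threshold."""
    by_remote = defaultdict(list)
    for remote, rtt in samples:
        by_remote[remote].append(rtt)
    over = total = 0
    for rtts in by_remote.values():
        base = min(rtts)  # proxy for the uncongested path RTT
        for rtt in rtts:
            total += 1
            if rtt - base >= threshold:
                over += 1
    return over / total

samples = [("a", 0.050), ("a", 0.470), ("a", 0.060), ("b", 0.100)]
print(added_delay_fraction(samples))  # 0.25: 1 of 4 samples adds >= 400ms
```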
> Now, I may be jumping to conclusions here, but I couldn't find anything
> about how their samples were distributed.
(I don't follow this comment ... distributed in what fashion?)
> It would be interesting if a large-scale test like this could flush
> out how big a percentage of hosts do occasionally experience
> bufferbloat, and how many never do.
I agree and this could be done with our data. (In general, we could go
much deeper into the data on hand ... the paper is an initial foray.)
allman
[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] Bufferbloat Paper
2013-01-08 1:54 ` Stephen Hemminger
2013-01-08 2:15 ` Oliver Hohlfeld
2013-01-08 12:44 ` Toke Høiland-Jørgensen
@ 2013-01-08 17:22 ` Dave Taht
2 siblings, 0 replies; 33+ messages in thread
From: Dave Taht @ 2013-01-08 17:22 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: bloat
Hey, guys, chill.
I'm sorry if my first comment on the paper's dataset sounded overly
sarcastic. I was equally sincere in calling it a "good paper", as the
analysis of the dataset seemed largely sound at first glance... but I
have to think about it for a while longer, and hopefully
suggest additional/further lines of research.
I'm glad the Q/A session is taking place here, but I'm terribly behind
on my email in general....
On Mon, Jan 7, 2013 at 5:54 PM, Stephen Hemminger <shemminger@vyatta.com> wrote:
> The tone of the paper is a bit of "if academics don't analyze it to death
> it must not exist". The facts are interesting, but the interpretation ignores
> the human element. If humans perceive delay ("Daddy the Internet is slow"), then
> they will change their behavior to avoid the problem: "it hurts when I download,
> so I will do it later".
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] Bufferbloat Paper
2013-01-07 23:37 [Bloat] Bufferbloat Paper Hagen Paul Pfeifer
2013-01-08 0:33 ` Dave Taht
2013-01-08 1:54 ` Stephen Hemminger
@ 2013-01-09 20:05 ` Michael Richardson
2013-01-09 20:14 ` Mark Allman
2 siblings, 1 reply; 33+ messages in thread
From: Michael Richardson @ 2013-01-09 20:05 UTC (permalink / raw)
To: bloat
[-- Attachment #1: Type: text/plain, Size: 2023 bytes --]
(not having read the thread yet on purpose)
Reading the paper, my initial thought was: big queues can only happen
where there is a bottleneck, and on the 1Gb/s CCZ links, that's unlikely
to be the case. Then I understood that he is measuring there because
(he can and) really he is measuring the delay to other peers, from a
place that can easily fill the various network queues.
I don't understand the analysis of RTT increases/decreases.
It seems to me that for a given host pair there is some theoretical
minimum RTT, which represents a completely empty network; perhaps one
can observe something close to it in the samples, and maybe a periodic
ICMP ping would have been in order, particularly when the host pair was
not observed to have any traffic flowing.
The question of queuing delay then can be answered by how much higher
the RTT is over some minimum. (And only then, does one begin to ask
questions about GeoIP and speed of photons and speed of modulated
electron wavefronts).
Maybe that's the point of the RTT increase/decrease discussion in
section 2.2.
This paper seems to really be about increasing IW.
The conclusion that only 7-20% of connections would even benefit from an
increase in IW, and that long-lived connections would open their windows
anyway, removes the question of bufferbloat from the IW debate, for me.
The conclusion that bloat is <100ms for 50% of samples, and <250ms
for 94% of samples, is useful: as the network architect for a commercial,
enterprise-focused VoIP provider, those numbers are terrifying. I think
the situation is worse, but even if it's as good as reported, we cannot
afford an additional 250ms of delay in the circuits :-)
now, to read the thread.
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | network architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
[-- Attachment #2: Type: application/pgp-signature, Size: 307 bytes --]
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] bufferbloat paper
@ 2013-01-08 7:35 Ingemar Johansson S
2013-01-18 22:00 ` Haiqing Jiang
0 siblings, 1 reply; 33+ messages in thread
From: Ingemar Johansson S @ 2013-01-08 7:35 UTC (permalink / raw)
To: end2end-interest, bloat
[-- Attachment #1: Type: text/plain, Size: 2083 bytes --]
Hi
I'm including Mark's original post (below), as it was scrubbed.
I don't have any data on bufferbloat for wireline access, and the fiber connection I have at home shows little evidence of bufferbloat.
Wireless access seems to be a different story though.
After reading "Tackling Bufferbloat in 3G/4G Mobile Networks" by Jiang et al. I decided to make a few measurements of my own (hope that the attached png is not removed).
The measurement setup was quite simple: a laptop running Ubuntu 12.04 with a 3G modem attached.
The throughput was computed from the Wireshark logs and RTT was measured with ping (towards a webserver hosted by Akamai). The location is Luleå city centre, Sweden (fixed location), and the measurement was made at lunchtime on Dec 6, 2012.
During the measurement session I did some close-to-normal web surfing, including watching embedded video clips and YouTube. In some cases the effects of bufferbloat were clearly noticeable.
Admittedly, this is just one sample; a more elaborate study with more samples would be interesting to see.
3G has the interesting feature that packets are very seldom lost in the downlink (data going to the terminal). I did not see a single packet loss in this test! I won't elaborate on the reasons in this email.
I would however believe that LTE is better off in this respect as long as AQM is implemented, mainly because LTE is a packet-switched architecture.
/Ingemar
Marks post.
********
[I tried to post this in a couple places to ensure I hit folks who would
be interested. If you end up with multiple copies of the email, my
apologies. --allman]
I know bufferbloat has been an interest of lots of folks recently. So,
I thought I'd flog a recent paper that presents a little data on the
topic ...
Mark Allman. Comments on Bufferbloat, ACM SIGCOMM Computer
Communication Review, 43(1), January 2013.
http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
Its an initial paper. I think more data would be great!
allman
--
http://www.icir.org/mallman/
[-- Attachment #2: Bufferbloat-3G.png --]
[-- Type: image/png, Size: 267208 bytes --]
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] bufferbloat paper
2013-01-08 7:35 [Bloat] bufferbloat paper Ingemar Johansson S
@ 2013-01-18 22:00 ` Haiqing Jiang
0 siblings, 0 replies; 33+ messages in thread
From: Haiqing Jiang @ 2013-01-18 22:00 UTC (permalink / raw)
To: Ingemar Johansson S; +Cc: end2end-interest, bloat
[-- Attachment #1: Type: text/plain, Size: 3310 bytes --]
Hi
I'm really happy to see that you are verifying the problem I pointed out
in my paper. In my opinion, bufferbloat in cellular networks urgently
needs more attention.
However, because I lacked contacts inside the carriers (AT&T, Verizon,
etc.), my work ran into limitations in finding fundamental answers to:
1) where exactly the buffers are; 2) how the buffers build up as they
interact with the LTE/HSPA/EVDO protocols; and 3) how commonly the
problem degrades user experience in large-scale daily usage. I hope to
see deeper discussion of all these problems on this mailing list.
Thanks.
Best,
Haiqing Jiang
On Mon, Jan 7, 2013 at 11:35 PM, Ingemar Johansson S <
ingemar.s.johansson@ericsson.com> wrote:
> Hi
>
> I'm including Mark's original post (below), as it was scrubbed.
>
> I don't have any data on bufferbloat for wireline access, and the fiber
> connection I have at home shows little evidence of bufferbloat.
>
> Wireless access seems to be a different story though.
> After reading "Tackling Bufferbloat in 3G/4G Mobile Networks" by Jiang
> et al. I decided to make a few measurements of my own (hope that the
> attached png is not removed).
>
> The measurement setup was quite simple: a laptop running Ubuntu 12.04
> with a 3G modem attached.
> The throughput was computed from the Wireshark logs and RTT was measured
> with ping (towards a webserver hosted by Akamai). The location is Luleå
> city centre, Sweden (fixed location), and the measurement was made at
> lunchtime on Dec 6, 2012.
>
> During the measurement session I did some close-to-normal web surfing,
> including watching embedded video clips and YouTube. In some cases the
> effects of bufferbloat were clearly noticeable.
> Admittedly, this is just one sample; a more elaborate study with more
> samples would be interesting to see.
>
> 3G has the interesting feature that packets are very seldom lost in the
> downlink (data going to the terminal). I did not see a single packet
> loss in this test! I won't elaborate on the reasons in this email.
> I would however believe that LTE is better off in this respect as long
> as AQM is implemented, mainly because LTE is a packet-switched
> architecture.
>
> /Ingemar
>
> Marks post.
> ********
> [I tried to post this in a couple places to ensure I hit folks who would
> be interested. If you end up with multiple copies of the email, my
> apologies. --allman]
>
> I know bufferbloat has been an interest of lots of folks recently. So,
> I thought I'd flog a recent paper that presents a little data on the
> topic ...
>
> Mark Allman. Comments on Bufferbloat, ACM SIGCOMM Computer
> Communication Review, 43(1), January 2013.
> http://www.icir.org/mallman/papers/bufferbloat-ccr13.pdf
>
> Its an initial paper. I think more data would be great!
>
> allman
>
>
> --
> http://www.icir.org/mallman/
>
>
>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
--
-----------------------------------
Haiqing Jiang,
Computer Science Department, North Carolina State University
Homepage: https://sites.google.com/site/hqjiang1988/
[-- Attachment #2: Type: text/html, Size: 4170 bytes --]
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] bufferbloat paper
@ 2013-01-08 19:03 Hal Murray
2013-01-08 20:28 ` Jonathan Morton
2013-01-09 0:12 ` David Lang
0 siblings, 2 replies; 33+ messages in thread
From: Hal Murray @ 2013-01-08 19:03 UTC (permalink / raw)
To: bloat; +Cc: Hal Murray
> Aside from their dataset having absolutely no reflection on the reality of
> the 99.999% of home users running at speeds two or three or *more* orders of
> magnitude below that speed, it seems like a nice paper.
Did any of their 90 homes contain laptops connected over WiFi?
> Here is a plot (also at http://web.mit.edu/keithw/www/verizondown.png) from
> a computer tethered to a Samsung Galaxy Nexus running Android 4.0.4 on
> Verizon LTE service, taken just now in Cambridge, Mass.
Neat. Thanks.
Any ideas on what happened at 120 seconds? Is that a pattern I should
recognize?
Is there an event that triggers it? Is it something as simple as a single
lost packet?
--
These are my opinions. I hate spam.
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] bufferbloat paper
2013-01-08 19:03 Hal Murray
@ 2013-01-08 20:28 ` Jonathan Morton
2013-01-09 0:12 ` David Lang
1 sibling, 0 replies; 33+ messages in thread
From: Jonathan Morton @ 2013-01-08 20:28 UTC (permalink / raw)
To: Hal Murray; +Cc: bloat
On 8 Jan, 2013, at 9:03 pm, Hal Murray wrote:
> Any ideas on what happened at 120 seconds? Is that a pattern I should
> recognize?
That looks to me like the link changed to a slower speed for a few seconds. That can happen pretty much at random in a wireless environment, possibly in response to a statistical fluke on the BER, which in turn might be triggered by a lightning strike a thousand miles away.
- Jonathan Morton
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] bufferbloat paper
2013-01-08 19:03 Hal Murray
2013-01-08 20:28 ` Jonathan Morton
@ 2013-01-09 0:12 ` David Lang
2013-01-09 1:59 ` Mark Allman
1 sibling, 1 reply; 33+ messages in thread
From: David Lang @ 2013-01-09 0:12 UTC (permalink / raw)
To: Hal Murray; +Cc: bloat
On Tue, 8 Jan 2013, Hal Murray wrote:
>> Aside from their dataset having absolutely no reflection on the reality of
>> the 99.999% of home users running at speeds two or three or *more* orders of
>> magnitude below that speed, it seems like a nice paper.
>
> Did any of their 90 homes contain laptops connected over WiFi?
Almost certainly, but if the connection from the laptop to the AP is 54M and the
connection from the AP to the Internet is 1G, you are not going to have a lot of
buffering taking place. You will have no buffering on the uplink side, and while
you will have some buffering on the downlink side, 54M is your slowest
connection and it takes a significantly large amount of data in flight to fill
that for seconds.
If your 54M wireless link is connected to a 768K DSL uplink (a much more typical
connection), then it's very easy for the uplink side to generate many seconds
worth of queueing delays, both from the high disparity in speeds and from the
fact that the uplink is so slow.
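As a back-of-the-envelope illustration (the buffer sizes here are hypothetical, not measured values), the delay a full buffer adds is just its size divided by the drain rate, which is why a slow uplink hurts so much more:

```python
# Rough illustration with hypothetical buffer sizes: queueing delay added
# by a standing queue is the buffered bytes divided by the link's drain rate.
def queue_delay_seconds(buffer_bytes, link_bps):
    return buffer_bytes * 8 / link_bps

# The same 256 KB of buffered data drains quickly at 54 Mbps ...
print(round(queue_delay_seconds(256 * 1024, 54_000_000), 3))  # 0.039
# ... but takes several seconds at a 768 kbps DSL uplink.
print(round(queue_delay_seconds(256 * 1024, 768_000), 3))     # 2.731
```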
David Lang
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] bufferbloat paper
2013-01-09 0:12 ` David Lang
@ 2013-01-09 1:59 ` Mark Allman
2013-01-09 4:53 ` David Lang
0 siblings, 1 reply; 33+ messages in thread
From: Mark Allman @ 2013-01-09 1:59 UTC (permalink / raw)
To: David Lang; +Cc: Hal Murray, bloat
[-- Attachment #1: Type: text/plain, Size: 1244 bytes --]
> > Did any of their 90 homes contain laptops connected over WiFi?
>
> Almost certainly,
Yeah - they nearly for sure did. (See the note I sent to bloat@ this
morning.)
> but if the connection from the laptop to the AP is 54M and the
> connection from the AP to the Internet is 1G, you are not going to
> have a lot of buffering taking place. You will have no buffering on
> the uplink side, and while you will have some buffering on the
> downlink side, 54M is your slowest connection and it takes a
> significantly large amount of data in flight to fill that for seconds.
54Mbps *might* be your slowest link. It also could be somewhere before
incoming traffic gets anywhere close to any of the CCZ gear. E.g., if
the traffic is from my DSL line the bottleneck will be < 1Mbps and on my
end of the connection.
But, regardless, none of this matters for the results presented in the
paper because our measurements factor out the local residences. Again,
see the paper and the note I sent this morning. The measurements are
taken between our monitor (which is outside the local homes) and the
remote host somewhere out across the Internet. We are measuring
wide-area and remote-side networks, not the local FTTH network.
allman
[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] bufferbloat paper
2013-01-09 1:59 ` Mark Allman
@ 2013-01-09 4:53 ` David Lang
2013-01-09 5:13 ` Jonathan Morton
2013-01-09 5:32 ` Mark Allman
0 siblings, 2 replies; 33+ messages in thread
From: David Lang @ 2013-01-09 4:53 UTC (permalink / raw)
To: Mark Allman; +Cc: Hal Murray, bloat
On Tue, 8 Jan 2013, Mark Allman wrote:
>>> Did any of their 90 homes contain laptops connected over WiFi?
>>
>> Almost certainly,
>
> Yeah - they nearly for sure did. (See the note I sent to bloat@ this
> morning.)
>
>> but if the connection from the laptop to the AP is 54M and the
>> connection from the AP to the Internet is 1G, you are not going to
>> have a lot of buffering taking place. You will have no buffering on
>> the uplink side, and while you will have some buffering on the
>> downlink side, 54M is your slowest connection and it takes a
>> significantly large amount of data in flight to fill that for seconds.
>
> 54Mbps *might* be your slowest link. It also could be somewhere before
> incoming traffic gets anywhere close to any of the CCZ gear. E.g., if
> the traffic is from my DSL line the bottleneck will be < 1Mbps and on my
> end of the connection.
Wait a min here, from everything prior to this it was sounding like you were in
a fiber-to-the-home experimental area that had 1G all the way to the houses, no
DSL involved.
Are we all misunderstanding this?
David Lang
> But, regardless, none of this matters for the results presented in the
> paper because our measurements factor out the local residences. Again,
> see the paper and the note I sent this morning. The measurements are
> taken between our monitor (which is outside the local homes) and the
> remote host somewhere out across the Internet. We are measuring
> wide-area and remote-side networks, not the local FTTH network.
>
> allman
>
>
>
>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] bufferbloat paper
2013-01-09 4:53 ` David Lang
@ 2013-01-09 5:13 ` Jonathan Morton
2013-01-09 5:32 ` Mark Allman
1 sibling, 0 replies; 33+ messages in thread
From: Jonathan Morton @ 2013-01-09 5:13 UTC (permalink / raw)
To: David Lang; +Cc: Hal Murray, bloat
[-- Attachment #1: Type: text/plain, Size: 1988 bytes --]
I think the point being made here was that the FTTH homes were talking to
DSL hosts via P2P a lot.
- Jonathan Morton
On Jan 9, 2013 6:54 AM, "David Lang" <david@lang.hm> wrote:
> On Tue, 8 Jan 2013, Mark Allman wrote:
>
>>>> Did any of their 90 homes contain laptops connected over WiFi?
>>>
>>> Almost certainly,
>>
>> Yeah - they nearly for sure did. (See the note I sent to bloat@ this
>> morning.)
>>
>>> but if the connection from the laptop to the AP is 54M and the
>>> connection from the AP to the Internet is 1G, you are not going to
>>> have a lot of buffering taking place. You will have no buffering on
>>> the uplink side, and while you will have some buffering on the
>>> downlink side, 54M is your slowest connection and it takes a
>>> significantly large amount of data in flight to fill that for seconds.
>>
>> 54Mbps *might* be your slowest link. It also could be somewhere before
>> incoming traffic gets anywhere close to any of the CCZ gear. E.g., if
>> the traffic is from my DSL line the bottleneck will be < 1Mbps and on my
>> end of the connection.
>
> Wait a min here, from everything prior to this it was sounding like you
> were in a fiber-to-the-home experimental area that had 1G all the way to
> the houses, no DSL involved.
>
> Are we all misunderstanding this?
>
> David Lang
>
>> But, regardless, none of this matters for the results presented in the
>> paper because our measurements factor out the local residences. Again,
>> see the paper and the note I sent this morning. The measurements are
>> taken between our monitor (which is outside the local homes) and the
>> remote host somewhere out across the Internet. We are measuring
>> wide-area and remote-side networks, not the local FTTH network.
>>
>> allman
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 3013 bytes --]
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] bufferbloat paper
2013-01-09 4:53 ` David Lang
2013-01-09 5:13 ` Jonathan Morton
@ 2013-01-09 5:32 ` Mark Allman
1 sibling, 0 replies; 33+ messages in thread
From: Mark Allman @ 2013-01-09 5:32 UTC (permalink / raw)
To: David Lang; +Cc: Hal Murray, bloat
[-- Attachment #1: Type: text/plain, Size: 1650 bytes --]
> >> but if the connection from the laptop to the AP is 54M and the
> >> connection from the AP to the Internet is 1G, you are not going to
> >> have a lot of buffering taking place. You will have no buffering on
> >> the uplink side, and while you will have some buffering on the
> >> downlink side, 54M is your slowest connection and it takes a
> >> significantly large amount of data in flight to fill that for seconds.
> >
> > 54Mbps *might* be your slowest link. It also could be somewhere before
> > incoming traffic gets anywhere close to any of the CCZ gear. E.g., if
> > the traffic is from my DSL line the bottleneck will be < 1Mbps and on my
> > end of the connection.
>
> Wait a min here, from everything prior to this it was sounding like
> you were in a fiber-to-the-home experimental area that had 1G all
> the way to the houses, no DSL involved.
You noted that traffic in the downlink direction (i.e., traffic
originating at some arbitrary place in the network that is *outside* the
FTTH network) would be bottlenecked not by the 1Gbps fiber that runs to
the house, but rather by the final 54Mbps wireless hop. All I am saying
is that you
are only half right. We know the bottleneck will not be the 1Gbps
fiber. It *might* be the 54Mbps wireless. Or, it *might* be some other
link at some other point in the Internet before the traffic reaches the
1Gbps fiber that connects the house.
My example is if I originated some traffic at my house (outside the FTTH
network) that was destined for some host on the FTTH network. I can
pump traffic from my house at < 1Mbps. So, that last hop of 54Mbps
cannot be the bottleneck.
allman
[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]
^ permalink raw reply [flat|nested] 33+ messages in thread
[parent not found: <87r4lvgss4.fsf@toke.dk>]
* Re: [Bloat] Bufferbloat Paper
[not found] <87r4lvgss4.fsf@toke.dk>
@ 2013-01-09 3:39 ` Mark Allman
2013-01-09 5:02 ` David Lang
0 siblings, 1 reply; 33+ messages in thread
From: Mark Allman @ 2013-01-09 3:39 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 1102 bytes --]
> > Note the paper does not work in units of *connections* in section 2, but
> > rather in terms of *RTT samples*. So, nearly 5% of the RTT samples add
> >>= 400msec to the base delay measured for the given remote (in the
> > "residential" case).
>
> Hmm, yes, I was wondering about this and was unable to fully grok it:
> what, exactly, is an RTT sample? :)
One RTT measurement between the CCZ monitoring point and the remote end
host.
> Incidentally, are the data extraction scripts available somewhere?
> Might be worthwhile to distribute them as some kind of tool that
> people with interesting vantage points could apply to get useful data?
Well, they are not readily available. I used a non-public extension to
Bro (that is not mine) to get the RTT samples. So, that is a sticking
point. And, then there is a ball of goop to analyze those. If folks
have a place they can monitor and are interested in doing so, please
contact me. I can probably get this in shape enough to give you. But,
I doubt I'll be able to somehow package this for general consumption any
time soon.
allman
[-- Attachment #2: Type: application/pgp-signature, Size: 194 bytes --]
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] Bufferbloat Paper
2013-01-09 3:39 ` [Bloat] Bufferbloat Paper Mark Allman
@ 2013-01-09 5:02 ` David Lang
2013-01-18 1:23 ` grenville armitage
0 siblings, 1 reply; 33+ messages in thread
From: David Lang @ 2013-01-09 5:02 UTC (permalink / raw)
To: bloat
On Tue, 08 Jan 2013 22:39:20 -0500, Mark Allman wrote:
> > > Note the paper does not work in units of *connections* in section 2, but
> > > rather in terms of *RTT samples*. So, nearly 5% of the RTT samples add
> > > >= 400msec to the base delay measured for the given remote (in the
> > > "residential" case).
> >
> > Hmm, yes, I was wondering about this and was unable to fully grok it:
> > what, exactly, is an RTT sample? :)
>
> One RTT measurement between the CCZ monitoring point and the remote end
> host.
>
> > Incidentally, are the data extraction scripts available somewhere?
> > Might be worthwhile to distribute them as some kind of tool that
> > people with interesting vantage points could apply to get useful data?
>
> Well, they are not readily available. I used a non-public extension to
> Bro (that is not mine) to get the RTT samples. So, that is a sticking
> point. And, then there is a ball of goop to analyze those. If folks
> have a place they can monitor and are interested in doing so, please
> contact me. I can probably get this in shape enough to give you. But,
> I doubt I'll be able to somehow package this for general consumption any
> time soon.
I really like the idea of trying to measure latency by sniffing the
network and watching the time for responses.
If this can work then I think a lot of people would be willing to put a
sniffer inline in their datacenter to measure this.
How specialized is what you are running? Can it be made into a
single-use tool that just measures and reports latency?
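A minimal sketch of the idea, assuming we only time TCP handshakes (this is a hypothetical illustration, not the Bro extension used for the paper): pair each SYN seen at the capture point with the matching SYN-ACK, and treat the gap as one RTT sample toward the remote side.

```python
# Hypothetical sketch of passive RTT sampling at a capture point (not the
# paper's Bro extension): the gap between an outbound SYN and the matching
# inbound SYN-ACK is one RTT sample between the monitor and the remote.
def passive_rtts(packets):
    """packets: (timestamp, src, dst, flags, seq, ack) tuples in capture
    order; returns {(local, remote): [rtt_seconds, ...]}."""
    pending = {}  # (src, dst, seq) -> SYN timestamp
    rtts = {}
    for ts, src, dst, flags, seq, ack in packets:
        if flags == "SYN":
            pending[(src, dst, seq)] = ts
        elif flags == "SYN-ACK":
            key = (dst, src, ack - 1)  # a SYN-ACK acknowledges seq + 1
            if key in pending:
                rtts.setdefault((dst, src), []).append(ts - pending.pop(key))
    return rtts

pkts = [
    (0.000, "10.0.0.1", "192.0.2.9", "SYN", 100, 0),
    (0.048, "192.0.2.9", "10.0.0.1", "SYN-ACK", 500, 101),
]
print(passive_rtts(pkts))  # {('10.0.0.1', '192.0.2.9'): [0.048]}
```

A real tool would feed this from a pcap and would also need to handle retransmitted SYNs, but the core matching logic is this small.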
David Lang
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [Bloat] Bufferbloat Paper
2013-01-09 5:02 ` David Lang
@ 2013-01-18 1:23 ` grenville armitage
0 siblings, 0 replies; 33+ messages in thread
From: grenville armitage @ 2013-01-18 1:23 UTC (permalink / raw)
To: bloat
On 01/09/2013 16:02, David Lang wrote:
[...]
> I really like the idea of trying to measure latency by sniffing the network and watching the time for responses.
Probably tangential, but http://caia.swin.edu.au/tools/spp/ has proven useful to our group for measuring RTT between two arbitrary packet capture points, for symmetric or asymmetric UDP or TCP traffic flows.
Even more tangential, http://dx.doi.org/10.1109/LCN.2005.101 ("Passive TCP Stream Estimation of RTT and Jitter Parameters") might be an interesting algorithm to implement for estimating RTT of TCP flows seen at a single capture point. (This 2005 paper describes an extension of a technique used by tstat at the time.)
cheers,
gja
^ permalink raw reply [flat|nested] 33+ messages in thread
end of thread, other threads:[~2013-01-18 22:00 UTC | newest]
Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-01-07 23:37 [Bloat] Bufferbloat Paper Hagen Paul Pfeifer
2013-01-08 0:33 ` Dave Taht
2013-01-08 0:40 ` David Lang
2013-01-08 2:04 ` Mark Watson
2013-01-08 2:24 ` David Lang
2013-01-09 20:08 ` Michael Richardson
2013-01-08 4:52 ` Mark Watson
2013-01-08 1:54 ` Stephen Hemminger
2013-01-08 2:15 ` Oliver Hohlfeld
2013-01-08 12:44 ` Toke Høiland-Jørgensen
2013-01-08 13:55 ` Mark Allman
2013-01-09 0:03 ` David Lang
2013-01-10 13:01 ` Mark Allman
2013-01-09 20:14 ` Michael Richardson
2013-01-09 20:19 ` Mark Allman
2013-01-09 20:31 ` Michael Richardson
2013-01-10 18:05 ` Mark Allman
2013-01-08 14:04 ` Mark Allman
2013-01-08 17:22 ` Dave Taht
2013-01-09 20:05 ` Michael Richardson
2013-01-09 20:14 ` Mark Allman
2013-01-08 7:35 [Bloat] bufferbloat paper Ingemar Johansson S
2013-01-18 22:00 ` Haiqing Jiang
2013-01-08 19:03 Hal Murray
2013-01-08 20:28 ` Jonathan Morton
2013-01-09 0:12 ` David Lang
2013-01-09 1:59 ` Mark Allman
2013-01-09 4:53 ` David Lang
2013-01-09 5:13 ` Jonathan Morton
2013-01-09 5:32 ` Mark Allman
[not found] <87r4lvgss4.fsf@toke.dk>
2013-01-09 3:39 ` [Bloat] Bufferbloat Paper Mark Allman
2013-01-09 5:02 ` David Lang
2013-01-18 1:23 ` grenville armitage