[Bloat] Bufferbloat Paper
David Lang
david at lang.hm
Tue Jan 8 19:03:57 EST 2013
On Tue, 08 Jan 2013 08:55:10 -0500, Mark Allman wrote:
> Let me make a few general comments here ...
>
> (0) The goal is to bring *some* *data* to the conversation. To
>     understand the size and scope of the bufferbloat problem it
>     seems to me we need data.
no disagreement here.
> (1) My goal is to make some observations of the queuing (/delay
>     variation) in the non-FTTH portion of the network path. As folks
>     have pointed out, it's unlikely bufferbloat is much of a problem
>     in the 1Gbps portion of the network I monitor.
>
> (2) The network I am monitoring looks like this ...
>
>         LEH -> IHR -> SW -> Internet -> REH
>
>     where "LEH" is the local end host and "IHR" is the in-home
>     router provided by the FTTH project. The connection between the
>     LEH and the IHR can either be wired (at up to 1Gbps) or wireless
>     (at much less than 1Gbps, but I forget the actual wireless
>     technology used on the IHR). The IHRs are all run into a switch
>     (SW) at 1Gbps. The switch connects to the Internet via a 1Gbps
>     link (so, this is a theoretical bottleneck right here ...). The
>     "REH" is the remote end host. We monitor via mirroring on SW.
>
>     The delay we measure is from SW to REH and back. So, the fact
>     that this is a 1Gbps environment for local users is really not
>     material. The REHs are whatever the local users decide to talk
>     to. I have no idea what the edge bandwidth on the remote side
>     is, but I presume it is generally not a Gbps (especially for the
>     residential set).
>
> So, if you wrote off the paper after the sentence that noted the
> data was collected within an FTTH project, I'd invite you to read
> further.
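Measuring delay from a mirror port, as described above, amounts to matching each outgoing TCP segment against the ACK that later covers it. A minimal sketch of that matching, assuming an illustrative pre-parsed packet format (the tuple layout and names here are mine, not from the paper):

```python
# Hypothetical sketch of passive RTT estimation from a mirrored switch
# port: record when a sequence number is seen leaving SW toward the REH,
# and subtract that from the time the covering ACK comes back.

def rtt_samples(packets):
    """packets: list of (timestamp, direction, kind, number) tuples.
    direction is 'out' (SW -> REH) or 'in' (REH -> SW); kind is 'seq'
    for a data segment's sequence number or 'ack' for an ACK number."""
    pending = {}   # seq number -> timestamp of first transmission
    samples = []
    for ts, direction, kind, num in packets:
        if direction == 'out' and kind == 'seq':
            # keep only the first transmission, so retransmitted
            # segments don't produce bogus (too-short) samples
            pending.setdefault(num, ts)
        elif direction == 'in' and kind == 'ack':
            # the ACK covers every outstanding seq below its number
            for seq in sorted(s for s in pending if s < num):
                samples.append(ts - pending.pop(seq))
    return samples
```

Samples taken this way measure SW-to-REH-and-back, which is exactly why the 1Gbps local segment drops out of the picture.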
The issue is that if the home user has a 1G uplink to you, and then you
have a 1G uplink to the Internet, there is not going to be much, if
any, congestion in place. The only place where you are going to have
any buffering is in your 1G uplink to the Internet (and only if there
is enough traffic to cause congestion there).

In the 'normal' residential situation, the LEH -> IHR connection is
probably 1G if wired, but the IHR -> SW connection is likely to be
<1Mbps. Therefore the IHR ends up buffering the outbound traffic.
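The arithmetic behind that buffering is worth spelling out: the worst-case delay a full buffer adds is its size divided by the drain rate of the bottleneck link. A back-of-the-envelope sketch (the buffer size below is an illustrative assumption, not a measurement):

```python
# Worst-case added delay = buffer size / bottleneck link rate.
# The 256 KB buffer here is illustrative, not from any measurement.

def drain_time_s(buffer_bytes, link_bps):
    """Seconds needed to drain a full buffer onto the link."""
    return buffer_bytes * 8 / link_bps

# The same buffer that is harmless at 1Gbps is crippling at 1Mbps:
print(drain_time_s(256 * 1024, 1_000_000))      # ~2.1 s at 1Mbps
print(drain_time_s(256 * 1024, 1_000_000_000))  # ~2.1 ms at 1Gbps
```

This is why the bloat shows up at the slow residential uplink and not inside a 1Gbps FTTH network.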
> (3) This data is not ideal. Ideally I'd like to directly measure
>     queues in a bazillion places. That'd be fabulous. But, I am
>     working with what I have. I have traces that offer windows into
>     the actual queue occupancy when the local users I monitor engage
>     particular remote endpoints. Is this representative of the
>     delays I'd find when the local users are not engaging the remote
>     end system? I have no idea. I'd certainly like to know. But,
>     the data doesn't tell me. I am reporting what I have. It is
>     something. And, it is more than I have seen reported anywhere
>     else. Folks should go collect more data.
>
>     (And, note, this is not a knock on the folks---some of them my
>     colleagues---who have quite soundly assessed potential queue
>     sizes by trying to jam as much into the queue as possible and
>     measuring the worst case delays. That is well and good. It
>     establishes a bound and that there is the potential for
>     problems. But, it does not speak to what queue occupancy
>     actually looks like. This latter is what I am after.)
The biggest problem I had with the paper was that it seemed to be
taking the tone "we measured and didn't find anything in this network,
so bufferbloat is not a real problem."
It may not be a problem in your network, but your network is very
unusual due to the high speed links to the end-users.
Even there, the 400ms delays that you found could be indications of
the problem (it is hard to say how bad their impact is). If 5% of the
packets have 400ms latency, that would seem to me to be rather
significant. It's not the collapse that other people have been
reporting, but given your high bandwidth, I wouldn't expect to see
that sort of collapse take place.
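One way to make that "5% at 400ms" concrete is to look at the tail of the measured RTT distribution rather than the mean. A small sketch (the sample values below are made up purely for illustration):

```python
# Quantify the latency tail: what fraction of RTT samples sit at or
# above a threshold. The sample data here is invented for illustration.

def tail_fraction(rtts_ms, threshold_ms):
    """Fraction of RTT samples at or above threshold_ms."""
    return sum(1 for r in rtts_ms if r >= threshold_ms) / len(rtts_ms)

rtts = [20] * 95 + [400] * 5      # 95 samples at 20 ms, 5 at 400 ms
print(tail_fraction(rtts, 400))   # 0.05
```

A mean over those samples would look benign (~39 ms), which is exactly why the tail, not the average, is the number that matters for interactive traffic.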
David Lang