[Cerowrt-devel] Ideas on how to simplify and popularize bufferbloat control for consideration.

Jim Gettys jg at freedesktop.org
Wed May 21 13:47:06 EDT 2014


On Wed, May 21, 2014 at 12:03 PM, <dpreed at reed.com> wrote:

> In reality we don't disagree on this:
>
> On Wednesday, May 21, 2014 11:19am, "Dave Taht" <dave.taht at gmail.com>
> said:
>
> > Well, I disagree somewhat. The downstream shaper we use works quite
> > well until we run out of CPU at 50 Mbit/s. Testing on the Ubiquiti
> > EdgeRouter has had the inbound shaper work up to a little past
> > 100 Mbit/s. So there is no need (theoretically) to upgrade the big
> > fat head-ends if your CPE is powerful enough to do the job. It would
> > be better if the head-ends did it, of course....
>
> There is an advantage to the head-end doing it, to the extent that
> each edge device has no visibility into what is happening with all the
> other CPE sharing that head-end. When there is bloat in the head-end,
> even if all CPEs sharing an upward path shape themselves to the "up
> to" speed the provider sells, they can go into serious congestion if
> the head-end queues can grow to a second or more of sustained queueing
> delay. My understanding is that head-end queues hold more than that.
> They certainly do in LTE access networks.
>

I have measured 200 ms on a 28 Mbit/s LTE quadrant to a single
station. This was using the simplest possible test on an idle cell.
It is easy to see how that can grow into the second range.

Similarly, Dave Taht and I recently took data showing a large
downstream buffer at the CMTS end (line card?); IIRC, it was something
like 0.25 MB, measured with a UDP flooding tool.
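
Back of the envelope, those two measurements translate as follows (a
minimal Python sketch; the 28 Mbit/s rate and 0.25 MB buffer are from
the measurements above, while the 20 Mbit/s downstream rate in the CMTS
case is purely an assumed figure for illustration):

    def queue_bytes(delay_s, rate_bps):
        """Bytes a bottleneck must hold to sustain a given queueing delay."""
        return delay_s * rate_bps / 8

    def queue_delay(buf_bytes, rate_bps):
        """Queueing delay (seconds) from a full buffer draining at a given rate."""
        return buf_bytes * 8 / rate_bps

    # 200 ms of delay on a 28 Mbit/s link means ~700 kB sitting in the queue.
    print(queue_bytes(0.200, 28e6))   # 700000.0 bytes

    # A 0.25 MB buffer at an assumed 20 Mbit/s downstream adds 100 ms when full.
    print(queue_delay(0.25e6, 20e6))  # 0.1 seconds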

As always, there may be multiple different buffers lurking in these
complex devices, which may only come into play when different parts of
them "bottleneck", just as we found many different buffering locations
inside Linux. In fact, some of these devices include Linux boxes
(though I do not know whether they are on the packet-forwarding path or
not).

Bandwidth shaping downstream of those bottlenecks can help, but only to
a degree, and I believe primarily for "well behaved", long-lived
elephant flows. Offload engines on servers and ACK coalescing in
various equipment limit how much it helps: transient behavior, such as
opening a bunch of TCP connections simultaneously to download the
elements of a web page, is likely to put large bursts of packets into
these queues, causing transient poor latency. I think we'll get a bit
of help out of the packet pacing code that recently went into Linux
(for well-behaved servers) as it deploys. Thanks to Eric Dumazet for
that work! Ironically, servers get updated much more frequently than
these middle boxes, as far as I can tell.
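
For the curious, the pacing knob is visible from user space as a socket
option. Here is a minimal sketch, assuming a Linux 3.13 or newer kernel
with the "fq" qdisc on the egress interface; Python's socket module
does not export the constant, so it is defined by hand:

    import socket

    # SO_MAX_PACING_RATE, Linux 3.13+; value 47 on most architectures.
    # The rate is in bytes per second and is enforced by the "fq" qdisc.
    SO_MAX_PACING_RATE = 47

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Pace this socket's transmissions at ~10 Mbit/s, smoothing out the
    # line-rate bursts that offload engines otherwise emit.
    s.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, 10 * 1000 * 1000 // 8)

With pacing in effect, a server stops dumping a whole congestion window
into the head-end queue as a single line-rate burst.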

Somehow we gotta get the bottlenecks in these devices (broadband &
cellular) to behave better.
                                      - Jim

