[Bloat] Bechtolsheim

Matt Mathis mattmathis at google.com
Fri Jul 2 15:46:05 EDT 2021


The argument is absolutely correct for Reno, CUBIC, and all
other self-clocked protocols.  One of the core assumptions in Jacobson88
was that the clock for the entire system comes from packets draining
through the bottleneck queue.  In this world, the clock is intrinsically
brittle if the buffers are too small.  The drain time needs to be a
substantial fraction of the RTT.
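
To put rough numbers on that, here is a back-of-the-envelope sketch in
Python (the link rate and RTT are illustrative assumptions, and sizing the
buffer to one bandwidth-delay product is just the classic rule of thumb):

def drain_time_s(buffer_bytes, link_bps):
    # Time to drain a full buffer through the bottleneck link.
    return buffer_bytes * 8.0 / link_bps

link_bps = 10e9                     # assumed 10 Gb/s bottleneck
rtt_s = 0.05                        # assumed 50 ms round-trip time
buf_bytes = link_bps * rtt_s / 8.0  # one bandwidth-delay product of buffering
print(f"buffer: {buf_bytes / 1e6:.1f} MB, "
      f"drain time: {drain_time_s(buf_bytes, link_bps) * 1e3:.0f} ms")

With a one-BDP buffer the drain time equals one RTT (here 62.5 MB and 50
ms), which is what keeps the ACK clock stable.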

However, we have reached the point where we need to discard that
requirement.  One of the side points of BBR is that in many environments it
is cheaper to burn serving CPU to pace into short-queue networks than it is
to "right size" the network queues.
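
As a rough illustration of what pacing buys you (a sketch only, not BBR's
actual logic; the rate estimate and 1500 B packet size are assumptions):
instead of letting ACK arrivals clock out bursts, the sender spaces packets
at an estimated bottleneck rate, so a short-queue switch never sees a
line-rate burst.

def inter_packet_gap_s(pacing_rate_bps, packet_bytes=1500):
    # Gap between back-to-back transmissions at the chosen pacing rate.
    return packet_bytes * 8.0 / pacing_rate_bps

btl_bw_bps = 1e9   # assumed estimate of the bottleneck bandwidth
print(f"pace one 1500 B packet every "
      f"{inter_packet_gap_s(btl_bw_bps) * 1e6:.1f} us")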

The fundamental problem with the old way is that in some contexts the
buffer memory has to beat Moore's law: to maintain a constant drain
time, the memory size and memory BW both have to scale with the link (laser) BW.
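
To illustrate (figures are assumptions, not vendor data): holding the drain
time constant at, say, 10 ms, buffer capacity grows linearly with link
rate, and the buffer memory also has to be written and read at line rate,
so its bandwidth grows right along with it.

drain_s = 0.01                          # assumed constant drain-time target: 10 ms
for link_gbps in (10, 100, 400, 800):
    buf_mb = link_gbps * 1e9 * drain_s / 8.0 / 1e6
    mem_bw_gbps = 2 * link_gbps         # buffer is written and read at line rate
    print(f"{link_gbps:4d} Gb/s link -> {buf_mb:6.1f} MB of buffer, "
          f"~{mem_bw_gbps} Gb/s of memory bandwidth")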

See the slides I gave at the Stanford Buffer Sizing workshop, December
2019: "Buffer Sizing: Position Paper"
<https://docs.google.com/presentation/d/1VyBlYQJqWvPuGnQpxW4S46asHMmiA-OeMbewxo_r3Cc/edit#slide=id.g791555f04c_0_5>


Note that we are talking about the DC and Internet core.  At the edge, BW is
low enough that memory is relatively cheap.   In some sense, bufferbloat came
about because memory is too cheap in these environments.

Thanks,
--MM--
The best way to predict the future is to create it.  - Alan Kay

We must not tolerate intolerance;
       however our response must be carefully measured:
            too strong would be hypocritical and risks spiraling out of control;
            too weak risks being mistaken for tacit approval.


On Fri, Jul 2, 2021 at 9:59 AM Stephen Hemminger <stephen at networkplumber.org>
wrote:

> On Fri, 2 Jul 2021 09:42:24 -0700
> Dave Taht <dave.taht at gmail.com> wrote:
>
> > "Debunking Bechtolsheim credibly would get a lot of attention to the
> > bufferbloat cause, I suspect." - dpreed
> >
> > "Why Big Data Needs Big Buffer Switches" -
> >
> > http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf
> >
>
> Also, a lot depends on the TCP congestion control algorithm being used.
> They are using NewReno, which in real life only researchers use.
>
> Even TCP Cubic has gone through several revisions. In my experience, the
> NS-2 models don't correlate well with real-world behavior.
>
> In real-world tests, TCP Cubic will consume any buffer it sees at a
> congested link. Maybe that is what they mean by the capture effect.
>
> There is also a weird oscillation effect with multiple streams, where one
> flow will take the buffer, then see a packet loss and back off, and the
> other flow will take over the buffer until it sees a loss.
>
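
To make that oscillation concrete, here is a toy Python sketch (an
illustration only, with made-up numbers; not a validated model of CUBIC):
two loss-based flows share a drop-tail buffer, and whichever flow holds
more of the queue takes the next loss, halves, and leaves room for the
other flow to grow.

BDP = 50           # assumed path capacity, in packets
BUF = 50           # assumed bottleneck buffer, in packets
w = [40.0, 20.0]   # per-flow congestion windows, in packets

for rtt in range(60):
    if sum(w) - BDP > BUF:
        # Overflow: the flow occupying more of the queue sees the loss and
        # halves; the other flow keeps growing into the space it frees.
        loser = 0 if w[0] > w[1] else 1
        print(f"RTT {rtt:2d}: overflow, flow {loser} backs off "
              f"(w0={w[0]:.0f}, w1={w[1]:.0f})")
        w[loser] /= 2.0
    w[0] += 1.0      # additive increase: one packet per RTT per flow
    w[1] += 1.0

Run for a minute of simulated RTTs and the backoffs alternate between the
two flows, which is the hand-off behavior described above.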