[Bloat] TCP BBR paper is now generally available

Neal Cardwell ncardwell at google.com
Thu Dec 8 16:29:14 EST 2016

Hi Mikael,

Thanks for your questions. Yes, we do care about how BBR behaves in mixed
environments, particularly environments shared with Reno and CUBIC, and we
are actively working on this and related areas.

For the ACM Queue article we faced tight word-count constraints, so
unfortunately we were not able to go into as much detail as we wanted for
the "Competition with Loss-Based Congestion Control" section.

In our recent talk at the ICCRG session at IETF 97 we were able to go into
more detail on the question of sharing paths with loss-based CC like Reno
and CUBIC (in particular please see slides 22-25):


There is also a video; the BBR section starts around 57:35:

In summary, with the initial BBR release:

o BBR and CUBIC end up with roughly equal shares when there is around 1-2x
BDP of FIFO buffer.
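To make the 1-2x BDP threshold concrete, here is a quick sketch of computing the bandwidth-delay product; the 100 Mbit/s and 40 ms figures are illustrative values of my own, not numbers from the slides:

```python
def bdp_bytes(bottleneck_bps, rtt_s):
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return bottleneck_bps / 8 * rtt_s

# Illustrative path: 100 Mbit/s bottleneck, 40 ms RTT
bdp = bdp_bytes(100e6, 0.040)   # 500000 bytes (500 kB)
# A FIFO buffer of roughly 1-2x this size (500 kB - 1 MB here) is the
# regime where BBR and CUBIC end up sharing roughly equally.
```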

o When a FIFO buffer is deeper than that, as everyone on this list well
knows, CUBIC/Reno will dump excess packets into the queue; in such
bufferbloated cases BBR will get a slightly lower share of throughput than
CUBIC (see slides 23-24). I say "slightly" because BBR's throughput drops
off only very gradually, as you can see. And that's because of the dynamic
in the passage from the ACM Queue paper you cited: "Even as loss-based
congestion control fills the available buffer, ProbeBW still robustly moves
the BtlBw estimate toward the flow's fair share, and ProbeRTT finds an
RTProp estimate just high enough for tit-for-tat convergence to a fair
share." (I guess that last "to" should probably have been "toward".)

o When a buffer is shallower than 1-2x BDP, or has an AQM targeting less
than 1-2 BDP of queue, then BBR will tend to end up with a higher share of
bandwidth than CUBIC or Reno (I think the tests you were referencing fall
into that category). Sometimes that is because the buffer is so shallow
that the multiplicative backoff of CUBIC/Reno causes the bottleneck to be
underutilized; in such cases BBR is merely using the spare
bandwidth, and its higher share is a good thing. In more moderately sized
buffers in the 0-2x BDP range (or AQM-managed buffers), our active work
under way right now (see slide 22) should improve things, based on our
experiments in the lab and on YouTube. Basically the two approaches we are
currently experimenting with are (1) explicitly draining the queue more
fully and more often, to get much closer to inflight==BDP on each gain
cycle, and (2) estimating the buffer available to our flow and modulating
the probing magnitude/frequency.
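Approach (1) can be illustrated with a tiny sketch. This is my own simplified rendering of the idea of returning inflight to one BDP each gain cycle, not Google's actual patch; the gain values are the ProbeBW pacing-gain cycle from the paper:

```python
def inflight_target(btlbw_bps, rtprop_s, gain):
    """Hypothetical sketch (not BBR's real code): scale the inflight
    target by the pacing gain, so the drain phase (gain < 1) pulls
    inflight back toward 1.0 * BDP, emptying the queue that the
    probe phase (gain > 1) created."""
    bdp_bytes = btlbw_bps / 8 * rtprop_s   # bandwidth-delay product
    return gain * bdp_bytes

# ProbeBW-style gain cycle from the paper: probe up, drain, then cruise.
probe_bw_gains = (1.25, 0.75, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0)
targets = [inflight_target(100e6, 0.040, g) for g in probe_bw_gains]
# The probe phase overshoots BDP by 25%, the drain phase undershoots to
# pull the queue back down, and the cruise phases hold inflight at one BDP.
```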

In short, our #1 priority for BBR right now is reducing queue pressure, in
order to reduce delay and packet loss and to improve fairness when sharing
paths with loss-based congestion control like CUBIC/Reno.


On Thu, Dec 8, 2016 at 9:01 AM, Mikael Abrahamsson <swmike at swm.pp.se> wrote:

> On Thu, 8 Dec 2016, Dave Täht wrote:
>> drop tail works better than any single queue aqm in this scenario.
>
> *confused*
>
> I see nothing in the BBR paper about how it interoperates with other TCP
> algorithms. Your text above didn't help me at all.
>
> How is BBR going to be deployed? Is nobody interested how it behaves in a
> mixed environment?
> --
> Mikael Abrahamsson    email: swmike at swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat