[Bloat] TCP BBR paper is now generally available

Klatsky, Carl Carl_Klatsky at comcast.com
Fri Dec 9 09:52:37 EST 2016


Regarding DOCSIS PIE in upcoming DOCSIS 3.1 equipment: single-queue PIE will first be deployed on the D3.1 cable modem, governing the upstream direction. So a test of BBR and other CCs in a mixed environment would be run with the BBR and other CC sending sources on a server/compute node behind the cable modem, sending data upstream to some receiver. I do not see DOCSIS PIE on the CMTS in the downstream direction in the near term.

Carl Klatsky

From: Bloat [mailto:bloat-bounces at lists.bufferbloat.net] On Behalf Of Yuchung Cheng
Sent: Thursday, December 08, 2016 5:31 PM
To: Neal Cardwell <ncardwell at google.com>; Mikael Abrahamsson <swmike at swm.pp.se>
Cc: bloat <bloat at lists.bufferbloat.net>
Subject: Re: [Bloat] TCP BBR paper is now generally available

Also, we are aware that DOCSIS PIE is going to be deployed, and we'll specifically test that scenario. With fq this issue is much smaller, but we understand fq is not the preferred setting in some AQMs, for other good reasons.

But to set expectations right: we are not going to make BBR perfectly flow-level fair with CUBIC or Reno. I am happy to argue why that makes no sense. We do want to avoid starvation of either.

On Thu, Dec 8, 2016, 1:29 PM Neal Cardwell <ncardwell at google.com> wrote:
Hi Mikael,

Thanks for your questions. Yes, we do care about how BBR behaves in mixed environments, particularly in mixed environments with Reno and CUBIC, and we are actively working on this and related areas.

For the ACM Queue article we faced very hard and tight word count constraints, so unfortunately we were not able to go into as much detail as we wanted for the "Competition with Loss-Based Congestion Control" section.

In our recent talk at the ICCRG session at IETF 97 we were able to go into more detail on the question of sharing paths with loss-based CC like Reno and CUBIC (in particular please see slides 22-25):


There is also a video; the BBR section starts around 57:35:

In summary, with the initial BBR release:

o BBR and CUBIC end up with roughly equal shares when there is around 1-2x BDP of FIFO buffer.

o When a FIFO buffer is deeper than that, as everyone on this list well knows, CUBIC/Reno will dump excessive packets into the queue; in such bufferbloated cases BBR will get a slightly lower share of throughput than CUBIC (see slides 23-24). I say "slightly" because, as you can see, BBR's throughput drops off only very gradually. That's because of the dynamic in the passage from the ACM Queue paper you cited: "Even as loss-based congestion control fills the available buffer, ProbeBW still robustly moves the BtlBw estimate toward the flow's fair share, and ProbeRTT finds an RTProp estimate just high enough for tit-for-tat convergence to a fair share." (I guess that last "to" should probably have been "toward".)
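To illustrate why the BtlBw estimate is robust in that passage: BBR's bandwidth estimate is a windowed max of delivery-rate samples, so a transient dip in measured rate (e.g. while a competing loss-based flow fills the buffer) does not immediately drag the estimate down. Here is a minimal, illustrative sketch of that max-filter idea; the window length, sample values, and class name are placeholders of mine, not BBR's actual parameters or code:

```python
from collections import deque

class WindowedMaxFilter:
    """Track the max of a series over a sliding window of rounds.

    Illustrative sketch of the idea behind BBR's BtlBw estimator
    (a windowed max of delivery-rate samples). Parameters here are
    made up for the example, not BBR's real values.
    """
    def __init__(self, window_rounds=10):
        self.window = window_rounds
        self.samples = deque()  # (round, value), newest at the right

    def update(self, rnd, value):
        self.samples.append((rnd, value))
        # Drop samples that have aged out of the window.
        while self.samples and self.samples[0][0] <= rnd - self.window:
            self.samples.popleft()

    def get(self):
        return max(v for _, v in self.samples) if self.samples else 0.0

f = WindowedMaxFilter(window_rounds=10)
# A ProbeBW cycle briefly probes above the current rate; the max
# filter latches the highest recent delivery-rate sample even as
# later samples dip while a competitor fills the queue.
for rnd, rate in enumerate([100, 100, 125, 90, 90, 90]):
    f.update(rnd, rate)
print(f.get())  # 125: the probe sample is still inside the window
```

The filter only forgets a high sample once it ages past the window, which is what lets periodic probing keep pulling the estimate toward the flow's share.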

o When a buffer is shallower than 1-2x BDP, or has an AQM targeting less than 1-2x BDP of queue, then BBR will tend to end up with a higher share of bandwidth than CUBIC or Reno (I think the tests you were referencing fall into that category). Sometimes that is because the buffer is so shallow that the multiplicative backoff of CUBIC/Reno causes the bottleneck to be underutilized; in such cases BBR is merely using underutilized bandwidth, and its higher share is a good thing. In more moderately sized buffers in the 0-2x BDP range (or AQM-managed buffers), our active work under way right now (see slide 22) should improve things, based on our experiments in the lab and on YouTube. Basically, the two approaches we are currently experimenting with are (1) explicitly trying to more fully drain the queue more often, to get much closer to inflight==BDP each gain cycle, and (2) estimating the buffer available to our flow and modulating the probing magnitude/frequency.
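A minimal sketch of approach (1) above, under stated assumptions: cap inflight per gain-cycle phase so that the queue built while probing is drained back toward inflight==BDP. The phase names and gain values below are simplified placeholders of mine, not the exact values or structure used by Linux TCP BBR:

```python
def inflight_allowed(phase, btlbw_bps, rtprop_s):
    """Illustrative inflight cap per ProbeBW gain-cycle phase.

    Sketches approach (1) from the discussion above: during the
    drain phase, cap inflight below the estimated BDP so the queue
    built during the probe phase is emptied each cycle. Gains and
    phase names are hypothetical, for illustration only.
    """
    bdp_bytes = btlbw_bps / 8.0 * rtprop_s  # bandwidth-delay product
    gains = {
        "probe": 1.25,   # briefly send above the estimated rate
        "drain": 0.75,   # then below it, to empty the built-up queue
        "cruise": 1.0,   # otherwise pace at the estimated rate
    }
    return int(gains[phase] * bdp_bytes)

# 100 Mbit/s bottleneck, 40 ms min RTT -> BDP = 500000 bytes.
print(inflight_allowed("drain", 100e6, 0.040))  # 375000
```

The point of the drain cap is that each cycle ends with inflight at or below the BDP, so standing queue from the probe phase is not left behind to inflate delay.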

To sum up, our #1 priority for BBR right now is reducing queue pressure, in order to reduce delay and packet loss and to improve fairness when sharing paths with loss-based congestion control like CUBIC/Reno.


On Thu, Dec 8, 2016 at 9:01 AM, Mikael Abrahamsson <swmike at swm.pp.se> wrote:
On Thu, 8 Dec 2016, Dave Täht wrote:
> drop tail works better than any single queue aqm in this scenario.


I see nothing in the BBR paper about how it interoperates with other TCP algorithms. Your text above didn't help me at all.

How is BBR going to be deployed? Is nobody interested how it behaves in a mixed environment?

Mikael Abrahamsson    email: swmike at swm.pp.se

Bloat mailing list
Bloat at lists.bufferbloat.net

