[Bloat] "BBR" TCP patches submitted to linux kernel

Bernd Paysan bernd.paysan at gmail.com
Fri Nov 25 07:51:05 EST 2016


On Thursday, 27 October 2016 at 20:14:42 UTC+2, Dave Taht wrote:
>
> On Thu, Oct 27, 2016 at 10:57 AM, Yuchung Cheng <ych... at google.com> wrote: 
> Well, against cubic on the same link in single queue mode, even 
> without ecn, life looks like this: 
>
>
> http://blog.cerowrt.org/flent/bbr-ecncaps/bandwidth-share-creaming-cubic-flowblind-aqm.svg 
>
> but fq_codel is fine, so long as there is no ecn vs nonecn collision 
>
> http://blog.cerowrt.org/flent/bbr-ecncaps/bandwidth-share-ecn-fq.png 

 
Looks like you are on the right track. I've done some work on 
flow/congestion control for my net2o protocol, which is not limited by 
TCP's constraints, and I think you should look at what I did. See my 31c3 
presentation.

The flow/congestion control discussion starts at 46:00 in the video, page 79 in the slides.

https://fossil.net2o.de/net2o/doc/trunk/wiki/31c3.md

The primary algorithm is really simple and straightforward: I send out a 
short burst and measure the bandwidth achieved on the receiver side; that 
measurement is used to calculate the delay between two bursts, which sets 
the average sending rate.  In TCP, you would instead measure the ACKs back 
at the sender and calculate the achieved bandwidth from the delays between 
the ACKs; that should be a bit less precise, but good enough.  These bursts 
allow adapting to changing loads quickly, and there is no need to fill the 
buffer at all (you don't need to increase the round-trip delay by more than 
the time those few packets in the burst take).  This primary algorithm is 
ignorant of the buffer fill state, so it also works alongside a buffer 
filled up by normal TCP congestion control.
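
To make that concrete, here is a minimal sketch in Python (hypothetical 
names and constants, not the actual net2o code): the receiver computes the 
achieved bandwidth from the dispersion of a single burst, and the sender 
turns that measurement into the delay between burst starts.

BURST_PACKETS = 8      # packets sent back-to-back per burst (assumed value)
PACKET_BYTES = 1280    # payload bytes per packet (assumed value)

def measured_bandwidth(arrival_times):
    """Receiver side: achieved bandwidth of one burst.

    arrival_times are the receive timestamps (in seconds) of the packets
    of one burst; the dispersion from first to last arrival shows how
    fast the bottleneck drained the burst, i.e. the available bandwidth.
    """
    span = arrival_times[-1] - arrival_times[0]
    bytes_timed = (len(arrival_times) - 1) * PACKET_BYTES
    return bytes_timed / span    # bytes per second

def inter_burst_delay(bandwidth):
    """Sender side: delay between burst starts for a given rate.

    Spacing whole bursts at this interval makes the long-term sending
    rate equal to the measured bandwidth, so no more than one burst's
    worth of packets ever sits in the bottleneck queue.
    """
    return (BURST_PACKETS * PACKET_BYTES) / bandwidth    # seconds

For example, at a measured 10e6 bytes/s, inter_burst_delay(10e6) paces a 
new 8-packet burst roughly every millisecond.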

I put a great deal of work into providing fairness even without a fair 
queuing policy (a much more complicated second-order regulation; I'm still 
not happy with the results and am trying to figure out something better); 
therefore, FQ really is necessary, everywhere, including on the last-mile 
routers (see the timings of 4 concurrent streams on page 108 without FQ 
and page 109 with net2o's own FQ).  The spikes in the last diagrams result 
from the receivers regularly writing the data out to disk and, while they 
do so, briefly stopping the transmission of their stream.  That shows how 
fast the bursts can react to changed bandwidth.  With just one client, I 
can hide that delay with a second thread, but with 4 clients on one CPU, 
it just shows up.
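
To illustrate the isolation FQ provides, here is a toy per-flow 
round-robin scheduler in Python (hypothetical, and fair per packet rather 
than per byte; real schedulers such as fq_codel use deficit round robin 
plus a per-queue AQM):

from collections import OrderedDict, deque

class ToyFQ:
    """One FIFO per flow, served round-robin.

    Each flow gets its own queue, so a flow that fills its queue only
    delays its own packets, never those of a competing flow; fairness
    comes from the scheduler instead of second-order regulation in
    the senders.
    """

    def __init__(self):
        self.queues = OrderedDict()    # flow_id -> deque of packets

    def enqueue(self, flow_id, packet):
        self.queues.setdefault(flow_id, deque()).append(packet)

    def dequeue(self):
        for flow_id in list(self.queues):         # flows in rotation order
            queue = self.queues[flow_id]
            if queue:
                self.queues.move_to_end(flow_id)  # back of the rotation
                return queue.popleft()
        return None                               # all queues are empty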