On Tue, Dec 6, 2016 at 12:20 PM, Steinar H. Gunderson <sgunderson@bigfoot.com> wrote:
> On Sat, Dec 03, 2016 at 03:24:28PM -0800, Eric Dumazet wrote:
>> Wait a minute. If you use fq on the receiver, then maybe your old Debian
>> kernel did not backport:
>>
>> https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=9878196578286c5ed494778ada01da094377a686
>
> I upgraded to 4.7.0 (newest backport available). I can get up to ~45 MB/sec,
> but it seems to hover more around ~22 MB/sec in this test:
>
>   http://storage.sesse.net/bbr-4.7.0.pcap

Thanks for the report, Steinar. Can you please clarify whether the BBR behavior you are seeing is a regression vs CUBIC's behavior, or is just mysterious?

It's hard to tell from a receiver-side trace, but this looks to me like a send buffer limitation. The RTT looks like about 50 ms, and the bandwidth is a little over 500 Mbps, so the BDP is a little over 3 MBytes. It looks like most RTTs carry a flight of about 2 MBytes of data, followed by a silence, suggesting that perhaps the sender ran out of buffered data to send. (Screen shot attached.)
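
For reference, the rough arithmetic behind that BDP estimate, using the approximate numbers above:

  BDP ~= bandwidth * RTT ~= 500 Mbit/s * 0.050 s = 25 Mbit ~= 3.1 MBytes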

What are your net.core.wmem_max and net.ipv4.tcp_wmem settings on the server sending the data?
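
If it's easier, something like the following (assuming a standard sysctl binary) should print the current values:

  sysctl net.core.wmem_max net.ipv4.tcp_wmem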

What happens if you try a bigger wmem cap, like 16 MBytes:

  sysctl -w net.core.wmem_max=16777216 net.ipv4.tcp_wmem='4096 16384 16777216'

If you happen to have access, it would be great to get a sender-side tcpdump trace for both BBR and CUBIC.
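
Something along these lines on the sender side would do; the interface name and port here are just placeholders for whatever your setup uses, and the -s 128 snaplen keeps only the headers so the capture stays small:

  tcpdump -i eth0 -s 128 -w bbr-sender-side.pcap 'tcp port 5001'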

Thanks for all your test reports!

cheers,
neal