* [Bloat] TCP BBR paper is now generally available
@ 2016-12-02 15:52 Dave Taht
2016-12-02 19:15 ` Aaron Wood
2016-12-08 8:24 ` Mikael Abrahamsson
0 siblings, 2 replies; 37+ messages in thread
From: Dave Taht @ 2016-12-02 15:52 UTC (permalink / raw)
To: bloat, aqm
http://queue.acm.org/detail.cfm?id=3022184
--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-02 15:52 [Bloat] TCP BBR paper is now generally available Dave Taht
@ 2016-12-02 19:15 ` Aaron Wood
2016-12-02 20:32 ` Jonathan Morton
2016-12-08 8:24 ` Mikael Abrahamsson
1 sibling, 1 reply; 37+ messages in thread
From: Aaron Wood @ 2016-12-02 19:15 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat, aqm
This is really fascinating reading.
The following made me stop for a second, though:
"The bucket is typically full at connection startup so BBR learns the
underlying network's BtlBw, but once the bucket empties, all packets sent
faster than the (much lower than BtlBw) bucket fill rate are dropped. BBR
eventually learns this new delivery rate, but the ProbeBW gain cycle
results in continuous moderate losses. To minimize the upstream bandwidth
waste and application latency increase from these losses, we added policer
detection and an explicit policer model to BBR."
So, how is this likely to be playing with our qos_scripts and with cake?
Given we have people from both Google and qos_scripts/cake development
here, do we need to compare some notes on how these interact? Are there
settings in the HTB setup used by qos_scripts that will make it play more
nicely with BBR (smaller quantums, smaller burst sizes, etc)?
-Aaron
On Fri, Dec 2, 2016 at 7:52 AM, Dave Taht <dave.taht@gmail.com> wrote:
> http://queue.acm.org/detail.cfm?id=3022184
>
> --
> Dave Täht
> Let's go make home routers and wifi faster! With better software!
> http://blog.cerowrt.org
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-02 19:15 ` Aaron Wood
@ 2016-12-02 20:32 ` Jonathan Morton
2016-12-02 22:22 ` Neal Cardwell
0 siblings, 1 reply; 37+ messages in thread
From: Jonathan Morton @ 2016-12-02 20:32 UTC (permalink / raw)
To: Aaron Wood; +Cc: Dave Taht, aqm, bloat
> On 2 Dec, 2016, at 21:15, Aaron Wood <woody77@gmail.com> wrote:
>
> So, how is this likely to be playing with our qos_scripts and with cake?
Cake’s deficit-mode shaper behaves much like an ideal constant-throughput link, which is what BBR is supposedly designed for. I haven’t read that far in the paper yet, but it shouldn’t trigger any “bucket detection” algorithms, because it doesn’t have a “bucket”. It is capable of bursting, but only to the minimum extent required to reconcile the required throughput with timer resolution and scheduling latency; I’ve tested it with millisecond timers.
The older schemes involving HTB and HFSC *do* have token-bucket behaviour, with an explicitly configured burst size (excess traffic sent during a burst will collect in downstream buffers). However, these are shapers, not policers, so when the bucket is empty they start delaying packets (leaving them in child qdiscs) rather than simply dropping them.
The interaction with AQM-related marking and dropping will be interesting to read about, though. It’s not a priori obvious how much a shaper-AQM combination looks like a policer.
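By contrast with the HTB/HFSC schemes, cake exposes no burst or bucket knob at all; a shaped deployment is just a single line (interface and rate illustrative):

```shell
# Cake as a deficit-mode shaper: only the target rate is configured,
# there is no token bucket to size (values illustrative):
tc qdisc replace dev eth0 root cake bandwidth 100mbit
```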
- Jonathan Morton
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-02 20:32 ` Jonathan Morton
@ 2016-12-02 22:22 ` Neal Cardwell
2016-12-02 22:40 ` Steinar H. Gunderson
0 siblings, 1 reply; 37+ messages in thread
From: Neal Cardwell @ 2016-12-02 22:22 UTC (permalink / raw)
To: Jonathan Morton; +Cc: Aaron Wood, bloat, aqm
On Fri, Dec 2, 2016 at 3:32 PM, Jonathan Morton <chromatix99@gmail.com>
wrote:
>
> > On 2 Dec, 2016, at 21:15, Aaron Wood <woody77@gmail.com> wrote:
> >
> > So, how is this likely to be playing with our qos_scripts and with cake?
>
> Cake’s deficit-mode shaper behaves fairly closely like an ideal
> constant-throughput link, which is what BBR is supposedly designed for.
Great. Yes, that's right: BBR's favorite case is a constant-throughput link
or shaper, since that's the easiest to model.
> I haven’t read that far in the paper yet, but it shouldn’t trigger any
> “bucket detection” algorithms, because it doesn’t have a “bucket”. It is
> capable of bursting, but only to the minimum extent required to reconcile
> required throughput with timer resolution and scheduling latency; I’ve
> tested it with millisecond timers.
>
That's also good to hear. If it doesn't have a "bucket" or allow
unsustainable bursts, then it should work well with BBR, and shouldn't
trigger the long-term/policer model.
Of course, if we find important use cases that don't work with BBR, we will
see what we can do to make BBR work well with them.
cheers,
neal
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-02 22:22 ` Neal Cardwell
@ 2016-12-02 22:40 ` Steinar H. Gunderson
2016-12-02 23:31 ` Eric Dumazet
0 siblings, 1 reply; 37+ messages in thread
From: Steinar H. Gunderson @ 2016-12-02 22:40 UTC (permalink / raw)
To: Neal Cardwell; +Cc: Jonathan Morton, aqm, bloat
On Fri, Dec 02, 2016 at 05:22:23PM -0500, Neal Cardwell wrote:
> Of course, if we find important use cases that don't work with BBR, we will
> see what we can do to make BBR work well with them.
I have one thing that I _wonder_ if could be BBR's fault: I run backup over
SSH. (That would be tar + gzip + ssh.) The first full backup after I rolled
out BBR on the server (the one sending the data) suddenly was very slow
(~50 Mbit/sec); there was plenty of free I/O, and neither tar nor gzip
(well, pigz) used a full core. My only remaining explanation would be that
somehow, BBR didn't deal well with the irregular stream of data coming from
tar. (A wget between the same machines at the same time gave 600-700 Mbit/sec.)
I will not really blame BBR here, since I didn't take a tcpdump or have time
to otherwise debug properly (short of eliminating the other things I already
mentioned); most likely, it's something else. But if you've ever heard of
others with similar issues, consider this a second report. :-)
/* Steinar */
--
Homepage: https://www.sesse.net/
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-02 22:40 ` Steinar H. Gunderson
@ 2016-12-02 23:31 ` Eric Dumazet
2016-12-03 13:03 ` Neal Cardwell
0 siblings, 1 reply; 37+ messages in thread
From: Eric Dumazet @ 2016-12-02 23:31 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: Neal Cardwell, Jonathan Morton, aqm, bloat
On Fri, 2016-12-02 at 23:40 +0100, Steinar H. Gunderson wrote:
> On Fri, Dec 02, 2016 at 05:22:23PM -0500, Neal Cardwell wrote:
> > Of course, if we find important use cases that don't work with BBR, we will
> > see what we can do to make BBR work well with them.
>
> I have one thing that I _wonder_ if could be BBR's fault: I run backup over
> SSH. (That would be tar + gzip + ssh.) The first full backup after I rolled
> out BBR on the server (the one sending the data) suddenly was very slow
> (~50 Mbit/sec); there was plenty of free I/O, and neither tar nor gzip
> (well, pigz) used a full core. My only remaining explanation would be that
> somehow, BBR didn't deal well with the irregular stream of data coming from
> tar. (A wget between the same machines at the same time gave 6-700 Mbit/sec.)
>
> I will not really blame BBR here, since I didn't take a tcpdump or have time
> to otherwise debug properly (short of eliminating the other things I already
> mentioned); most likely, it's something else. But if you've ever heard of
> others with similar issues, consider this a second report. :-)
>
> /* Steinar */
It would be interesting to get the chrono stats for the TCP flow, with
an updated ss/iproute2 command and these kernel patches:
efd90174167530c67a54273fd5d8369c87f9bd32 tcp: export sender limits chronographs to TCP_INFO
b0f71bd3e190df827d25d7f19bf09037567f14b7 tcp: instrument how long TCP is limited by insufficient send buffer
5615f88614a47d2b802e1d14d31b623696109276 tcp: instrument how long TCP is limited by receive window
0f87230d1a6c253681550c6064715d06a32be73d tcp: instrument how long TCP is busy sending
05b055e89121394058c75dc354e9a46e1e765579 tcp: instrument tcp sender limits chronographs
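With those patches plus a matching iproute2, the stats should show up in the extended socket info; something along these lines (the filter address is illustrative):

```shell
# Dump extended TCP info for flows to the backup target; with the
# chrono patches applied, look for the busy / rwnd_limited /
# sndbuf_limited timers in the output.
ss -tin dst 192.0.2.1
```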
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-02 23:31 ` Eric Dumazet
@ 2016-12-03 13:03 ` Neal Cardwell
2016-12-03 19:13 ` Steinar H. Gunderson
0 siblings, 1 reply; 37+ messages in thread
From: Neal Cardwell @ 2016-12-03 13:03 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Steinar H. Gunderson, Jonathan Morton, aqm, bloat
Thanks for the report, Steinar. This is the first report we've had
like this, but it would be interesting to find out what's going on.
Even if you don't have time to apply the patches Eric mentions, it
would be hugely useful if the next time you have a slow transfer like
that you could post a link to a tcpdump packet capture (headers only
is best, say -s 120). Ideally the trace would capture a whole
connection, so we can see the wscale on the SYN exchange.
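A capture invocation along the lines requested might look like this (interface and filter are illustrative; start it before opening the ssh session so the SYN exchange is included):

```shell
# Headers-only capture (-s 120) of the whole backup connection:
tcpdump -i eth0 -s 120 -w bbr-slow.pcap 'tcp port 22'
```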
thanks,
neal
On Fri, Dec 2, 2016 at 6:31 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Fri, 2016-12-02 at 23:40 +0100, Steinar H. Gunderson wrote:
>> On Fri, Dec 02, 2016 at 05:22:23PM -0500, Neal Cardwell wrote:
>> > Of course, if we find important use cases that don't work with BBR, we will
>> > see what we can do to make BBR work well with them.
>>
>> I have one thing that I _wonder_ if could be BBR's fault: I run backup over
>> SSH. (That would be tar + gzip + ssh.) The first full backup after I rolled
>> out BBR on the server (the one sending the data) suddenly was very slow
>> (~50 Mbit/sec); there was plenty of free I/O, and neither tar nor gzip
>> (well, pigz) used a full core. My only remaining explanation would be that
>> somehow, BBR didn't deal well with the irregular stream of data coming from
>> tar. (A wget between the same machines at the same time gave 6-700 Mbit/sec.)
>>
>> I will not really blame BBR here, since I didn't take a tcpdump or have time
>> to otherwise debug properly (short of eliminating the other things I already
>> mentioned); most likely, it's something else. But if you've ever heard of
>> others with similar issues, consider this a second report. :-)
>>
>> /* Steinar */
>
> It would be interesting to get the chrono stats for the TCP flow, with
> an updated ss/iproute2 command and the kernel patches :
>
> efd90174167530c67a54273fd5d8369c87f9bd32 tcp: export sender limits chronographs to TCP_INFO
> b0f71bd3e190df827d25d7f19bf09037567f14b7 tcp: instrument how long TCP is limited by insufficient send buffer
> 5615f88614a47d2b802e1d14d31b623696109276 tcp: instrument how long TCP is limited by receive window
> 0f87230d1a6c253681550c6064715d06a32be73d tcp: instrument how long TCP is busy sending
> 05b055e89121394058c75dc354e9a46e1e765579 tcp: instrument tcp sender limits chronographs
>
>
>
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 13:03 ` Neal Cardwell
@ 2016-12-03 19:13 ` Steinar H. Gunderson
2016-12-03 20:20 ` Eric Dumazet
2016-12-07 16:28 ` Alan Jenkins
0 siblings, 2 replies; 37+ messages in thread
From: Steinar H. Gunderson @ 2016-12-03 19:13 UTC (permalink / raw)
To: Neal Cardwell; +Cc: Eric Dumazet, Jonathan Morton, aqm, bloat
On Sat, Dec 03, 2016 at 08:03:50AM -0500, Neal Cardwell wrote:
> Thanks for the report, Steinar. This is the first report we've had
> like this, but it would be interesting to find out what's going on.
>
> Even if you don't have time to apply the patches Eric mentions, it
> would be hugely useful if the next time you have a slow transfer like
> that you could post a link to a tcpdump packet capture (headers only
> is best, say -s 120). Ideally the trace would capture a whole
> connection, so we can see the wscale on the SYN exchange.
I tried reproducing it now. I can't get it as far down as 50 Mbit/sec,
but it stopped around 100 Mbit/sec, still without any clear bottlenecks.
cubic was just as bad, though.
I've taken two tcpdumps as requested; I can't reboot this server easily
right now, unfortunately. They are:
http://storage.sesse.net/bbr.pcap -- ssh+tar+gnupg
http://storage.sesse.net/bbr2.pcap -- wget between same hosts
/* Steinar */
--
Homepage: https://www.sesse.net/
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 19:13 ` Steinar H. Gunderson
@ 2016-12-03 20:20 ` Eric Dumazet
2016-12-03 20:26 ` Jonathan Morton
2016-12-03 21:33 ` Steinar H. Gunderson
2016-12-07 16:28 ` Alan Jenkins
1 sibling, 2 replies; 37+ messages in thread
From: Eric Dumazet @ 2016-12-03 20:20 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: Neal Cardwell, Jonathan Morton, aqm, bloat
On Sat, 2016-12-03 at 20:13 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 08:03:50AM -0500, Neal Cardwell wrote:
> > Thanks for the report, Steinar. This is the first report we've had
> > like this, but it would be interesting to find out what's going on.
> >
> > Even if you don't have time to apply the patches Eric mentions, it
> > would be hugely useful if the next time you have a slow transfer like
> > that you could post a link to a tcpdump packet capture (headers only
> > is best, say -s 120). Ideally the trace would capture a whole
> > connection, so we can see the wscale on the SYN exchange.
>
> I tried reproducing it now. I can't get as far down as 50 Mbit/sec,
> but it stopped around 100 Mbit/sec, still without any clear bottlenecks.
> cubic was just as bad, though.
>
> I've taken two tcpdumps as requested; I can't reboot this server easily
> right now, unfortunately. They are:
>
> http://storage.sesse.net/bbr.pcap -- ssh+tar+gnupg
> http://storage.sesse.net/bbr2.pcap -- wget between same hosts
>
> /* Steinar */
Hi Steinar
Huge ACK decimation it seems.
11:13:28.762772 IP6 S > C: Flags [.], seq 777672:779100, ack 3432, win 179, options [nop,nop,TS val 3498278784 ecr 864447699], length 1428
11:13:28.763145 IP6 S > C: Flags [.], seq 779100:783384, ack 3432, win 179, options [nop,nop,TS val 3498278784 ecr 864447699], length 4284
11:13:28.763190 IP6 S > C: Flags [.], seq 783384:789096, ack 3432, win 179, options [nop,nop,TS val 3498278784 ecr 864447699], length 5712
11:13:28.763239 IP6 S > C: Flags [.], seq 789096:790524, ack 3432, win 179, options [nop,nop,TS val 3498278784 ecr 864447699], length 1428
11:13:28.763334 IP6 S > C: Flags [.], seq 790524:791952, ack 3432, win 179, options [nop,nop,TS val 3498278784 ecr 864447699], length 1428
11:13:28.764109 IP6 S > C: Flags [.], seq 791952:794808, ack 3432, win 179, options [nop,nop,TS val 3498278785 ecr 864447699], length 2856
11:13:28.764138 IP6 S > C: Flags [.], seq 794808:800520, ack 3432, win 179, options [nop,nop,TS val 3498278785 ecr 864447699], length 5712
11:13:28.764189 IP6 S > C: Flags [.], seq 800520:804804, ack 3432, win 179, options [nop,nop,TS val 3498278785 ecr 864447699], length 4284
11:13:28.764980 IP6 S > C: Flags [.], seq 804804:806232, ack 3432, win 179, options [nop,nop,TS val 3498278787 ecr 864447700], length 1428
11:13:28.765034 IP6 S > C: Flags [.], seq 806232:811944, ack 3432, win 179, options [nop,nop,TS val 3498278787 ecr 864447700], length 5712
11:13:28.765086 IP6 S > C: Flags [.], seq 811944:817656, ack 3432, win 179, options [nop,nop,TS val 3498278787 ecr 864447700], length 5712
11:13:28.765905 IP6 S > C: Flags [.], seq 817656:823368, ack 3432, win 179, options [nop,nop,TS val 3498278787 ecr 864447700], length 5712
11:13:28.765956 IP6 S > C: Flags [.], seq 823368:829080, ack 3432, win 179, options [nop,nop,TS val 3498278787 ecr 864447700], length 5712
11:13:28.766005 IP6 S > C: Flags [.], seq 829080:831936, ack 3432, win 179, options [nop,nop,TS val 3498278787 ecr 864447700], length 2856
11:13:28.766869 IP6 S > C: Flags [.], seq 831936:834792, ack 3460, win 179, options [nop,nop,TS val 3498278789 ecr 864447700], length 2856
11:13:28.766898 IP6 S > C: Flags [.], seq 834792:840504, ack 3460, win 179, options [nop,nop,TS val 3498278789 ecr 864447700], length 5712
11:13:28.766947 IP6 S > C: Flags [.], seq 840504:841932, ack 3460, win 179, options [nop,nop,TS val 3498278789 ecr 864447700], length 1428
11:13:28.766997 IP6 S > C: Flags [.], seq 841932:843360, ack 3460, win 179, options [nop,nop,TS val 3498278789 ecr 864447700], length 1428
11:13:28.767532 IP6 C > S: Flags [.], ack 783384, win 4106, options [nop,nop,TS val 864447710 ecr 3498278784], length 0
11:13:28.767541 IP6 C > S: Flags [.], ack 789096, win 4151, options [nop,nop,TS val 864447710 ecr 3498278784], length 0
11:13:28.767543 IP6 C > S: Flags [.], ack 791952, win 4173, options [nop,nop,TS val 864447710 ecr 3498278784], length 0
11:13:28.767544 IP6 C > S: Flags [.], ack 794808, win 4195, options [nop,nop,TS val 864447710 ecr 3498278785], length 0
11:13:28.767546 IP6 C > S: Flags [.], ack 800520, win 4240, options [nop,nop,TS val 864447710 ecr 3498278785], length 0
11:13:28.767547 IP6 C > S: Flags [.], ack 804804, win 4273, options [nop,nop,TS val 864447710 ecr 3498278785], length 0
11:13:28.767549 IP6 C > S: Flags [.], ack 811944, win 4329, options [nop,nop,TS val 864447710 ecr 3498278787], length 0
11:13:28.767550 IP6 C > S: Flags [.], ack 817656, win 4374, options [nop,nop,TS val 864447710 ecr 3498278787], length 0
11:13:28.767552 IP6 C > S: Flags [.], ack 823368, win 4418, options [nop,nop,TS val 864447711 ecr 3498278787], length 0
11:13:28.767553 IP6 C > S: Flags [.], ack 829080, win 4463, options [nop,nop,TS val 864447711 ecr 3498278787], length 0
11:13:28.767554 IP6 C > S: Flags [.], ack 831936, win 4485, options [nop,nop,TS val 864447711 ecr 3498278787], length 0
11:13:28.767556 IP6 C > S: Flags [.], ack 834792, win 4508, options [nop,nop,TS val 864447711 ecr 3498278789], length 0
11:13:28.767557 IP6 C > S: Flags [.], ack 840504, win 4552, options [nop,nop,TS val 864447711 ecr 3498278789], length 0
11:13:28.767559 IP6 C > S: Flags [.], ack 843360, win 4575, options [nop,nop,TS val 864447711 ecr 3498278789], length 0
Just to be clear, what is the kernel version at the sender?
Thanks!
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 20:20 ` Eric Dumazet
@ 2016-12-03 20:26 ` Jonathan Morton
2016-12-03 21:07 ` Eric Dumazet
2016-12-03 21:33 ` Steinar H. Gunderson
1 sibling, 1 reply; 37+ messages in thread
From: Jonathan Morton @ 2016-12-03 20:26 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Steinar H. Gunderson, Neal Cardwell, aqm, bloat
> On 3 Dec, 2016, at 22:20, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> Huge ACK decimation it seems.
That extract does not show ACK decimation. It shows either jumbo frames or offload aggregation in a send burst, and ordinary delayed-acks each covering at most two packets received. Nothing particularly weird or unusual appears to be happening in the network.
- Jonathan Morton
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 20:26 ` Jonathan Morton
@ 2016-12-03 21:07 ` Eric Dumazet
2016-12-03 21:34 ` Steinar H. Gunderson
2016-12-03 21:38 ` Jonathan Morton
0 siblings, 2 replies; 37+ messages in thread
From: Eric Dumazet @ 2016-12-03 21:07 UTC (permalink / raw)
To: Jonathan Morton; +Cc: Steinar H. Gunderson, Neal Cardwell, aqm, bloat
On Sat, 2016-12-03 at 22:26 +0200, Jonathan Morton wrote:
> > On 3 Dec, 2016, at 22:20, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> >
> > Huge ACK decimation it seems.
>
> That extract does not show ACK decimation. It shows either jumbo
> frames or offload aggregation in a send burst, and ordinary
> delayed-acks each covering at most two packets received. Nothing
> particularly weird or unusual appears to be happening in the network.
I do not attend IETF meetings so maybe my words are not exact.
What I meant was that we receive ACKS in bursts, with huge gaps between
them.
Look at tsval/tsecr: all the ACKs are received within a 20 usec time
window, covering data that was sent over a 5 ms window.
It looks like the reverse path is severely congested, or a device is
doing ack filtering.
11:14:05.468643 IP6 C > S: Flags [.], ack 364280308, win 23430, options [nop,nop,TS val 864456877 ecr 3498315454], length 0
11:14:05.468645 IP6 C > S: Flags [.], ack 364288528, win 23407, options [nop,nop,TS val 864456877 ecr 3498315454], length 0
11:14:05.468646 IP6 C > S: Flags [.], ack 364296748, win 23415, options [nop,nop,TS val 864456877 ecr 3498315455], length 0
11:14:05.468648 IP6 C > S: Flags [.], ack 364302460, win 23400, options [nop,nop,TS val 864456877 ecr 3498315455], length 0
11:14:05.468650 IP6 C > S: Flags [.], ack 364304968, win 23430, options [nop,nop,TS val 864456877 ecr 3498315455], length 0
<huge gap>
11:14:05.510870 IP6 C > S: Flags [.], ack 364313188, win 23407, options [nop,nop,TS val 864456877 ecr 3498315455], length 0
11:14:05.510885 IP6 C > S: Flags [.], ack 364321408, win 23415, options [nop,nop,TS val 864456877 ecr 3498315455], length 0
11:14:05.510888 IP6 C > S: Flags [.], ack 364328548, win 23396, options [nop,nop,TS val 864456877 ecr 3498315455], length 0
11:14:05.510890 IP6 C > S: Flags [.], ack 364335340, win 23419, options [nop,nop,TS val 864456877 ecr 3498315455], length 0
11:14:05.510892 IP6 C > S: Flags [.], ack 364337848, win 23430, options [nop,nop,TS val 864456877 ecr 3498315456], length 0
11:14:05.510894 IP6 C > S: Flags [.], ack 364353208, win 23389, options [nop,nop,TS val 864456877 ecr 3498315456], length 0
11:14:05.510895 IP6 C > S: Flags [.], ack 364362508, win 23411, options [nop,nop,TS val 864456877 ecr 3498315456], length 0
11:14:05.510897 IP6 C > S: Flags [.], ack 364370728, win 23415, options [nop,nop,TS val 864456877 ecr 3498315456], length 0
11:14:05.510899 IP6 C > S: Flags [.], ack 364378948, win 23419, options [nop,nop,TS val 864456877 ecr 3498315456], length 0
11:14:05.510900 IP6 C > S: Flags [.], ack 364384660, win 23404, options [nop,nop,TS val 864456877 ecr 3498315457], length 0
11:14:05.510902 IP6 C > S: Flags [.], ack 364387168, win 23430, options [nop,nop,TS val 864456877 ecr 3498315457], length 0
11:14:05.510904 IP6 C > S: Flags [.], ack 364394308, win 23411, options [nop,nop,TS val 864456877 ecr 3498315457], length 0
11:14:05.510905 IP6 C > S: Flags [.], ack 364403608, win 23411, options [nop,nop,TS val 864456877 ecr 3498315457], length 0
11:14:05.510907 IP6 C > S: Flags [.], ack 364411828, win 23419, options [nop,nop,TS val 864456878 ecr 3498315457], length 0
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 20:20 ` Eric Dumazet
2016-12-03 20:26 ` Jonathan Morton
@ 2016-12-03 21:33 ` Steinar H. Gunderson
1 sibling, 0 replies; 37+ messages in thread
From: Steinar H. Gunderson @ 2016-12-03 21:33 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Neal Cardwell, Jonathan Morton, aqm, bloat
On Sat, Dec 03, 2016 at 12:20:15PM -0800, Eric Dumazet wrote:
> Just to be clear,what is the kernel version at the sender ?
4.9.0-rc2.
/* Steinar */
--
Homepage: https://www.sesse.net/
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 21:07 ` Eric Dumazet
@ 2016-12-03 21:34 ` Steinar H. Gunderson
2016-12-03 21:50 ` Eric Dumazet
2016-12-03 21:38 ` Jonathan Morton
1 sibling, 1 reply; 37+ messages in thread
From: Steinar H. Gunderson @ 2016-12-03 21:34 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sat, Dec 03, 2016 at 01:07:40PM -0800, Eric Dumazet wrote:
> What I meant was that we receive ACKS in bursts, with huge gaps between
> them.
Note, the tcpdump is done at the receiver. I don't know if this changes the
analysis.
/* Steinar */
--
Homepage: https://www.sesse.net/
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 21:07 ` Eric Dumazet
2016-12-03 21:34 ` Steinar H. Gunderson
@ 2016-12-03 21:38 ` Jonathan Morton
1 sibling, 0 replies; 37+ messages in thread
From: Jonathan Morton @ 2016-12-03 21:38 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Steinar H. Gunderson, Neal Cardwell, aqm, bloat
> On 3 Dec, 2016, at 23:07, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> I do not attend IETF meetings so maybe my words are not exact.
>
> What I meant was that we receive ACKS in bursts, with huge gaps between
> them.
>
> Look at tsval/tsecr, and that all ACKS are received in a 20 usec time
> window, covering data that was sent in a 5 ms window.
Nevertheless, there are no acks obviously *missing*. Decimation would be aggregation of acks such that all but the last (over some interval) are dropped. Here, they are merely delivered in rapid succession.
It could be that the receiver has poor receive-interrupt latency, or that it is aggregating the reverse send path too heavily. The variable spacing within the ack burst (in particular, in both examples the gap between first and second is larger than the rest) favours the former explanation.
- Jonathan Morton
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 21:34 ` Steinar H. Gunderson
@ 2016-12-03 21:50 ` Eric Dumazet
2016-12-03 22:13 ` Steinar H. Gunderson
0 siblings, 1 reply; 37+ messages in thread
From: Eric Dumazet @ 2016-12-03 21:50 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sat, 2016-12-03 at 22:34 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 01:07:40PM -0800, Eric Dumazet wrote:
> > What I meant was that we receive ACKS in bursts, with huge gaps between
> > them.
>
> Note, the tcpdump is done at the receiver. I don't know if this changes the
> analysis.
If you have access to the receiver, I would be interested to know
which NIC/driver is used there.
Thanks
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 21:50 ` Eric Dumazet
@ 2016-12-03 22:13 ` Steinar H. Gunderson
2016-12-03 22:55 ` Eric Dumazet
0 siblings, 1 reply; 37+ messages in thread
From: Steinar H. Gunderson @ 2016-12-03 22:13 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sat, Dec 03, 2016 at 01:50:37PM -0800, Eric Dumazet wrote:
>> Note, the tcpdump is done at the receiver. I don't know if this changes the
>> analysis.
> If you have access to the receiver, I would be interested to know
> NIC/driver used there ?
root@blackhole:~# lspci | grep Ethernet
01:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
root@blackhole:~# lspci -n | grep 01:00.0
01:00.0 0200: 8086:10d3
root@blackhole:~# ls -l /sys/class/net/eth0/device/driver
lrwxrwxrwx 1 root root 0 des. 3 23:17 /sys/class/net/eth0/device/driver -> ../../../../bus/pci/drivers/e1000e
root@blackhole:~# uname -a
Linux blackhole 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux
Do note, both eth0 and eth1 are in a bridge, although only one of the cards
is actually connected to anything (it's just so that remote hands can connect
to either port and things will come up fine).
/* Steinar */
--
Homepage: https://www.sesse.net/
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 22:13 ` Steinar H. Gunderson
@ 2016-12-03 22:55 ` Eric Dumazet
2016-12-03 23:02 ` Eric Dumazet
2016-12-03 23:03 ` Steinar H. Gunderson
0 siblings, 2 replies; 37+ messages in thread
From: Eric Dumazet @ 2016-12-03 22:55 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sat, 2016-12-03 at 23:13 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 01:50:37PM -0800, Eric Dumazet wrote:
> >> Note, the tcpdump is done at the receiver. I don't know if this changes the
> >> analysis.
> > If you have access to the receiver, I would be interested to know
> > NIC/driver used there ?
>
> root@blackhole:~# lspci | grep Ethernet
> 01:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
> 02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
>
> root@blackhole:~# lspci -n | grep 01:00.0
> 01:00.0 0200: 8086:10d3
>
> root@blackhole:~# ls -l /sys/class/net/eth0/device/driver
> lrwxrwxrwx 1 root root 0 des. 3 23:17 /sys/class/net/eth0/device/driver -> ../../../../bus/pci/drivers/e1000e
>
> root@blackhole:~# uname -a
> Linux blackhole 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux
>
> Do note, both eth0 and eth1 are a bridge, although only one of the cards are
> actually connected to anything (it's just so the remote hands can connect to
> any port and things will come up fine).
Perfect.
Note that starting from linux-4.4, e1000e gained gro_flush_timeout, which
would help this precise workload:
https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=32b3e08fff60494cd1d281a39b51583edfd2b18f
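On such a kernel the knob is a per-device sysfs file holding a timeout in nanoseconds, for example (interface name and value illustrative):

```shell
# Hold GRO flushes for up to 20 us so back-to-back ACKs can coalesce
# (requires root; e1000e needs linux >= 4.4 for this to take effect):
echo 20000 > /sys/class/net/eth0/gro_flush_timeout
```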
Maybe you can redo the experiment in ~5 years when the distro catches up ;)
Thanks.
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 22:55 ` Eric Dumazet
@ 2016-12-03 23:02 ` Eric Dumazet
2016-12-03 23:09 ` Eric Dumazet
2016-12-03 23:03 ` Steinar H. Gunderson
1 sibling, 1 reply; 37+ messages in thread
From: Eric Dumazet @ 2016-12-03 23:02 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sat, 2016-12-03 at 14:55 -0800, Eric Dumazet wrote:
> Perfect.
>
> Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> would help this precise workload
Also, it appears the sender uses a lot of relatively small segments (8220
bytes at a time) with PSH set, so GRO won't be able to help.
I wonder how these PSH flags are forced.
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 22:55 ` Eric Dumazet
2016-12-03 23:02 ` Eric Dumazet
@ 2016-12-03 23:03 ` Steinar H. Gunderson
2016-12-03 23:15 ` Eric Dumazet
1 sibling, 1 reply; 37+ messages in thread
From: Steinar H. Gunderson @ 2016-12-03 23:03 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sat, Dec 03, 2016 at 02:55:37PM -0800, Eric Dumazet wrote:
> Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> would help this precise workload
>
> https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=32b3e08fff60494cd1d281a39b51583edfd2b18f
>
> Maybe you can redo the experiment in ~5 years when distro catches up ;)
I can always find a backport, assuming the IPMI is still working. But it
really wasn't like this earlier :-) Perhaps something changed on the path,
unrelated to BBR.
(PS: stretch currently ships with 4.8, and is slated for freeze in February.
So perhaps not _all_ of five years ;-) )
/* Steinar */
--
Homepage: https://www.sesse.net/
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 23:02 ` Eric Dumazet
@ 2016-12-03 23:09 ` Eric Dumazet
0 siblings, 0 replies; 37+ messages in thread
From: Eric Dumazet @ 2016-12-03 23:09 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sat, 2016-12-03 at 15:02 -0800, Eric Dumazet wrote:
> On Sat, 2016-12-03 at 14:55 -0800, Eric Dumazet wrote:
>
> > Perfect.
> >
> > Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> > would help this precise workload
>
> Also it appears the sender uses a lot of relatively small segments (8220
> bytes at a time), with PSH, so GRO won't be able to help.
>
> I wonder how these PSH are forced.
One possibility would be that the sender is not using fq/pacing,
or some driver is using skb_orphan(), killing TCP Small queue.
-> No pressure on qdisc, auto corking does not trigger,
and application does small writes.
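If someone wants to check Eric's theory on the sending host, a rough sketch
(the interface name eth0 is an assumption on my part; these are standard
iproute2/sysctl commands):

```shell
# Is the root qdisc fq?  Without fq there is no qdisc-level pacing on
# these kernels, and TSQ/autocorking back-pressure behaves differently.
tc qdisc show dev eth0

# Install fq (pacing is on by default) as the root qdisc:
tc qdisc replace dev eth0 root fq

# Autocorking (default 1) lets small application writes coalesce while
# the qdisc/NIC queues still hold data, reducing small PSH segments:
sysctl net.ipv4.tcp_autocorking
```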
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 23:03 ` Steinar H. Gunderson
@ 2016-12-03 23:15 ` Eric Dumazet
2016-12-03 23:24 ` Eric Dumazet
0 siblings, 1 reply; 37+ messages in thread
From: Eric Dumazet @ 2016-12-03 23:15 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sun, 2016-12-04 at 00:03 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 02:55:37PM -0800, Eric Dumazet wrote:
> > Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> > would help this precise workload
> >
> > https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=32b3e08fff60494cd1d281a39b51583edfd2b18f
> >
> > Maybe you can redo the experiment in ~5 years when distro catches up ;)
>
> I can always find a backport, assuming the IPMI is still working. But it
> really wasn't like this earlier :-) Perhaps something changed on the path,
> unrelated to BBR.
If the tcpdump is taken on the receiver, how would you explain these gaps?
Like the NIC was frozen for 40 ms!
11:14:06.023821 IP6 C > S: Flags [.], ack 368888520, win 23422, options [nop,nop,TS val 864457007 ecr 3498315976], length 0
11:14:06.023823 IP6 C > S: Flags [.], ack 368891028, win 23430, options [nop,nop,TS val 864457007 ecr 3498315976], length 0
11:14:06.023829 IP6 C > S: Flags [.], ack 368899248, win 23407, options [nop,nop,TS val 864457007 ecr 3498315976], length 0
11:14:06.023834 IP6 C > S: Flags [.], ack 368907468, win 23415, options [nop,nop,TS val 864457007 ecr 3498315976], length 0
11:14:06.023835 IP6 C > S: Flags [.], ack 368915688, win 23392, options [nop,nop,TS val 864457007 ecr 3498315976], length 0
<gap : Note tsval is still 864457007 >
11:14:06.062751 IP6 C > S: Flags [.], ack 368923908, win 23415, options [nop,nop,TS val 864457007 ecr 3498315977], length 0
11:14:06.062767 IP6 C > S: Flags [.], ack 368929620, win 23400, options [nop,nop,TS val 864457007 ecr 3498315977], length 0
11:14:06.062770 IP6 C > S: Flags [.], ack 368932128, win 23430, options [nop,nop,TS val 864457007 ecr 3498315977], length 0
11:14:06.062771 IP6 C > S: Flags [.], ack 368943204, win 23400, options [nop,nop,TS val 864457007 ecr 3498315977], length 0
11:14:06.062773 IP6 C > S: Flags [.], ack 368948568, win 23422, options [nop,nop,TS val 864457008 ecr 3498315977], length 0
11:14:06.062775 IP6 C > S: Flags [.], ack 368956788, win 23400, options [nop,nop,TS val 864457008 ecr 3498315977], length 0
11:14:06.062776 IP6 C > S: Flags [.], ack 368965008, win 23415, options [nop,nop,TS val 864457008 ecr 3498315977], length 0
11:14:06.062778 IP6 C > S: Flags [.], ack 368976084, win 23385, options [nop,nop,TS val 864457008 ecr 3498315977], length 0
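For what it's worth, the gap Eric points at can be read straight off the
capture timestamps; a throwaway sketch with the values copied from the trace
above (a real check would parse tcpdump output instead):

```shell
# Timestamps (seconds) of the ACKs above; the jump from .023835 to
# .062751 is the ~39 ms hole while TSval stays at 864457007.
gap_ms=$(printf '%s\n' 6.023821 6.023823 6.023829 6.023834 6.023835 \
                       6.062751 6.062767 6.062770 6.062771 |
  awk 'NR > 1 { g = ($1 - prev) * 1000; if (g > max) max = g }
       { prev = $1 }
       END { printf "%.1f", max }')
echo "largest inter-ACK gap: ${gap_ms} ms"
```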
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 23:15 ` Eric Dumazet
@ 2016-12-03 23:24 ` Eric Dumazet
2016-12-04 3:18 ` Neal Cardwell
` (2 more replies)
0 siblings, 3 replies; 37+ messages in thread
From: Eric Dumazet @ 2016-12-03 23:24 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sat, 2016-12-03 at 15:15 -0800, Eric Dumazet wrote:
> On Sun, 2016-12-04 at 00:03 +0100, Steinar H. Gunderson wrote:
> > On Sat, Dec 03, 2016 at 02:55:37PM -0800, Eric Dumazet wrote:
> > > Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> > > would help this precise workload
> > >
> > > https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=32b3e08fff60494cd1d281a39b51583edfd2b18f
> > >
> > > Maybe you can redo the experiment in ~5 years when distro catches up ;)
> >
> > I can always find a backport, assuming the IPMI is still working. But it
> > really wasn't like this earlier :-) Perhaps something changed on the path,
> > unrelated to BBR.
>
> If the tcpdump is taken on receiver, how would you explain these gaps ?
> Like the NIC was frozen for 40 ms !
Wait a minute. If you use fq on the receiver, then maybe your old debian
kernel did not backport :
https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=9878196578286c5ed494778ada01da094377a686
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 23:24 ` Eric Dumazet
@ 2016-12-04 3:18 ` Neal Cardwell
2016-12-04 8:44 ` Steinar H. Gunderson
2016-12-06 17:20 ` Steinar H. Gunderson
2 siblings, 0 replies; 37+ messages in thread
From: Neal Cardwell @ 2016-12-04 3:18 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Steinar H. Gunderson, Jonathan Morton, aqm, bloat
[-- Attachment #1: Type: text/plain, Size: 1590 bytes --]
> http://storage.sesse.net/bbr.pcap -- ssh+tar+gnupg
I agree with Eric that for the ssh+tar+gnupg case the ACK stream seems
like the culprit here. After about 1 second, the ACKs are suddenly
very stretched and very delayed (often more than 100ms). See the
attached screen shots.
I like Eric's theory that the ACKs might be going through fq.
Particularly since the uplink data starts having issues around the
same time as the ACKs for the downlink data.
neal
On Sat, Dec 3, 2016 at 6:24 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Sat, 2016-12-03 at 15:15 -0800, Eric Dumazet wrote:
>> On Sun, 2016-12-04 at 00:03 +0100, Steinar H. Gunderson wrote:
>> > On Sat, Dec 03, 2016 at 02:55:37PM -0800, Eric Dumazet wrote:
>> > > Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
>> > > would help this precise workload
>> > >
>> > > https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=32b3e08fff60494cd1d281a39b51583edfd2b18f
>> > >
>> > > Maybe you can redo the experiment in ~5 years when distro catches up ;)
>> >
>> > I can always find a backport, assuming the IPMI is still working. But it
>> > really wasn't like this earlier :-) Perhaps something changed on the path,
>> > unrelated to BBR.
>>
>> If the tcpdump is taken on receiver, how would you explain these gaps ?
>> Like the NIC was frozen for 40 ms !
>
> Wait a minute. If you use fq on the receiver, then maybe your old debian
> kernel did not backport :
>
> https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=9878196578286c5ed494778ada01da094377a686
>
>
>
[-- Attachment #2: bbr-hitting-ack-issue-port-4298-downlink.png --]
[-- Type: image/png, Size: 43540 bytes --]
[-- Attachment #3: bbr-hitting-ack-issue-port-4298-uplink.png --]
[-- Type: image/png, Size: 44206 bytes --]
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 23:24 ` Eric Dumazet
2016-12-04 3:18 ` Neal Cardwell
@ 2016-12-04 8:44 ` Steinar H. Gunderson
2016-12-04 17:13 ` Eric Dumazet
2016-12-06 17:20 ` Steinar H. Gunderson
2 siblings, 1 reply; 37+ messages in thread
From: Steinar H. Gunderson @ 2016-12-04 8:44 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sat, Dec 03, 2016 at 03:24:28PM -0800, Eric Dumazet wrote:
> Wait a minute. If you use fq on the receiver, then maybe your old debian
> kernel did not backport :
>
> https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=9878196578286c5ed494778ada01da094377a686
I checked, and indeed it does not seem that the patch is in the backport.
I suppose I can try just turning off fq on the receiver?
/* Steinar */
--
Homepage: https://www.sesse.net/
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-04 8:44 ` Steinar H. Gunderson
@ 2016-12-04 17:13 ` Eric Dumazet
2016-12-04 17:38 ` Steinar H. Gunderson
0 siblings, 1 reply; 37+ messages in thread
From: Eric Dumazet @ 2016-12-04 17:13 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sun, 2016-12-04 at 09:44 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 03:24:28PM -0800, Eric Dumazet wrote:
> > Wait a minute. If you use fq on the receiver, then maybe your old debian
> > kernel did not backport :
> >
> > https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=9878196578286c5ed494778ada01da094377a686
>
> I checked, and it does not seem that the patch is in the backport, indeed.
> I suppose I can try just turning off fq on the receiver?
You could turn off pacing , and keep fq.
tc qdisc change dev eth0 root fq nopacing
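Assuming an iproute2 recent enough to print fq's flags and statistics, one
can verify the change took effect (eth0 assumed, as above):

```shell
# After "tc qdisc change dev eth0 root fq nopacing", the fq line should
# show the nopacing flag, and fq's "throttled" counter should stop
# growing once pacing is disabled:
tc -s qdisc show dev eth0
```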
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-04 17:13 ` Eric Dumazet
@ 2016-12-04 17:38 ` Steinar H. Gunderson
0 siblings, 0 replies; 37+ messages in thread
From: Steinar H. Gunderson @ 2016-12-04 17:38 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sun, Dec 04, 2016 at 09:13:19AM -0800, Eric Dumazet wrote:
> You could turn off pacing , and keep fq.
>
> tc qdisc change dev eth0 root fq nopacing
I don't really care about fair queueing except for pacing :-) But I'll try
upgrading the kernel at some point. The results of turning off fq were
inconclusive, especially since this seems to vary a bit with network
conditions during the day.
/* Steinar */
--
Homepage: https://www.sesse.net/
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 23:24 ` Eric Dumazet
2016-12-04 3:18 ` Neal Cardwell
2016-12-04 8:44 ` Steinar H. Gunderson
@ 2016-12-06 17:20 ` Steinar H. Gunderson
2016-12-06 21:31 ` Neal Cardwell
2 siblings, 1 reply; 37+ messages in thread
From: Steinar H. Gunderson @ 2016-12-06 17:20 UTC (permalink / raw)
To: Eric Dumazet; +Cc: Jonathan Morton, Neal Cardwell, aqm, bloat
On Sat, Dec 03, 2016 at 03:24:28PM -0800, Eric Dumazet wrote:
> Wait a minute. If you use fq on the receiver, then maybe your old debian
> kernel did not backport :
>
> https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=9878196578286c5ed494778ada01da094377a686
I upgraded to 4.7.0 (newest backport available). I can get up to ~45 MB/sec,
but it seems to hover more around ~22 MB/sec in this test:
http://storage.sesse.net/bbr-4.7.0.pcap
Still 75+ MB/sec for wget, and still no obvious bottlenecks on the server.
/* Steinar */
--
Homepage: https://www.sesse.net/
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-06 17:20 ` Steinar H. Gunderson
@ 2016-12-06 21:31 ` Neal Cardwell
0 siblings, 0 replies; 37+ messages in thread
From: Neal Cardwell @ 2016-12-06 21:31 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: Eric Dumazet, Jonathan Morton, aqm, bloat
[-- Attachment #1.1: Type: text/plain, Size: 1506 bytes --]
On Tue, Dec 6, 2016 at 12:20 PM, Steinar H. Gunderson <
sgunderson@bigfoot.com> wrote:
> On Sat, Dec 03, 2016 at 03:24:28PM -0800, Eric Dumazet wrote:
> > Wait a minute. If you use fq on the receiver, then maybe your old debian
> > kernel did not backport :
> >
> > https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/
> commit/?id=9878196578286c5ed494778ada01da094377a686
>
> I upgraded to 4.7.0 (newest backport available). I can get up to ~45
> MB/sec,
> but it seems to hover more around ~22 MB/sec in this test:
>
> http://storage.sesse.net/bbr-4.7.0.pcap
Thanks for the report, Steinar. Can you please clarify whether the BBR
behavior you are seeing is a regression vs CUBIC's behavior, or is just
mysterious?
It's hard to tell from a receiver-side trace, but this looks to me like a
send buffer limitation. The RTT looks like about 50ms, and the bandwidth is
a little over 500 Mbps, so the BDP is a little over 3 Mbytes. Looks like
most RTTs have a flight of about 2 MBytes of data, followed by a silence
suggesting perhaps the sender ran out of buffered data to send. (Screen
shot attached.)
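Neal's back-of-the-envelope numbers check out; for anyone following along
(the RTT and bandwidth here are his estimates from the trace, not measured
values):

```shell
# BDP = bandwidth (bytes/s) * RTT (s): ~500 Mbit/s over ~50 ms.
bdp_bytes=$(awk 'BEGIN { printf "%d", 500e6 / 8 * 0.050 }')
echo "BDP is about ${bdp_bytes} bytes"   # a little over 3 MB, vs ~2 MB in flight
```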
What are your net.core.wmem_max and net.ipv4.tcp_wmem settings on the
server sending the data?
What happens if you try a bigger wmem cap, like 16 MBytes:
sysctl -w net.core.wmem_max=16777216 net.ipv4.tcp_wmem='4096 16384 16777216'
If you happen to have access, it would be great to get a sender-side
tcpdump trace for both BBR and CUBIC.
Thanks for all your test reports!
cheers,
neal
[-- Attachment #1.2: Type: text/html, Size: 2450 bytes --]
[-- Attachment #2: bbr-2016-12-06-port-57272.png --]
[-- Type: image/png, Size: 43369 bytes --]
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-03 19:13 ` Steinar H. Gunderson
2016-12-03 20:20 ` Eric Dumazet
@ 2016-12-07 16:28 ` Alan Jenkins
2016-12-07 16:47 ` Steinar H. Gunderson
1 sibling, 1 reply; 37+ messages in thread
From: Alan Jenkins @ 2016-12-07 16:28 UTC (permalink / raw)
To: Steinar H. Gunderson, Neal Cardwell; +Cc: Jonathan Morton, aqm, bloat
[-- Attachment #1: Type: text/plain, Size: 1815 bytes --]
On 03/12/16 19:13, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 08:03:50AM -0500, Neal Cardwell wrote:
>>> I have one thing that I _wonder_ if could be BBR's fault: I run
>>> backup over SSH. (That would be tar + gzip + ssh.) The first full
>>> backup after I rolled out BBR on the server (the one sending the
>>> data) suddenly was very slow (~50 Mbit/sec); there was plenty of
>>> free I/O, and neither tar nor gzip (well, pigz) used a full core.
>>> My only remaining explanation would be that somehow, BBR didn't
>>> deal well with the irregular stream of data coming from tar. (A
>>> wget between the same machines at the same time gave 6-700
>>> Mbit/sec.)
>> Thanks for the report, Steinar. This is the first report we've had
>> like this, but it would be interesting to find out what's going
>> on.
>>
>> Even if you don't have time to apply the patches Eric mentions, it
>> would be hugely useful if the next time you have a slow transfer
>> like that you could post a link to a tcpdump packet capture
>> (headers only is best, say -s 120). Ideally the trace would
>> capture a whole connection, so we can see the wscale on the SYN
>> exchange.
>
> I tried reproducing it now. I can't get as far down as 50 Mbit/sec,
> but it stopped around 100 Mbit/sec, still without any clear
> bottlenecks. cubic was just as bad, though.
>
> I've taken two tcpdumps as requested; I can't reboot this server
> easily right now, unfortunately. They are:
>
> http://storage.sesse.net/bbr.pcap -- ssh+tar+gnupg
> http://storage.sesse.net/bbr2.pcap -- wget between same hosts
>
> /* Steinar */
Since no-one's explicitly mentioned this: be aware that SSH is known for
doing application-level windowing, limiting performance.
E.g. see https://www.psc.edu/index.php/hpn-ssh/638
[-- Attachment #2: Type: text/html, Size: 2749 bytes --]
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-07 16:28 ` Alan Jenkins
@ 2016-12-07 16:47 ` Steinar H. Gunderson
2016-12-07 17:03 ` Alan Jenkins
0 siblings, 1 reply; 37+ messages in thread
From: Steinar H. Gunderson @ 2016-12-07 16:47 UTC (permalink / raw)
To: Alan Jenkins; +Cc: Neal Cardwell, Jonathan Morton, aqm, bloat
On Wed, Dec 07, 2016 at 04:28:15PM +0000, Alan Jenkins wrote:
> Since no-one's explicitly mentioned this: be aware that SSH is known for
> doing application-level windowing, limiting performance.
>
> E.g. see https://www.psc.edu/index.php/hpn-ssh/638
Hm, I thought this was mainly about scp, not ssh?
But yes, hpn-ssh is a sad story; maintenance has been very up and down over
the years, and there's no end in sight for an upstream merge.
/* Steinar */
--
Homepage: https://www.sesse.net/
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-07 16:47 ` Steinar H. Gunderson
@ 2016-12-07 17:03 ` Alan Jenkins
0 siblings, 0 replies; 37+ messages in thread
From: Alan Jenkins @ 2016-12-07 17:03 UTC (permalink / raw)
To: Steinar H. Gunderson; +Cc: Neal Cardwell, Jonathan Morton, aqm, bloat
On 07/12/2016, Steinar H. Gunderson <sgunderson@bigfoot.com> wrote:
> On Wed, Dec 07, 2016 at 04:28:15PM +0000, Alan Jenkins wrote:
>> Since no-one's explicitly mentioned this: be aware that SSH is known for
>> doing application-level windowing, limiting performance.
>>
>> E.g. see https://www.psc.edu/index.php/hpn-ssh/638
>
> Hm, I thought this was mainly about scp, not ssh?
>
> But yes, hpn-ssh is a sad story; maintenance has been very up and down over
> the years, and there's no end in sight for an upstream merge.
>
> /* Steinar */
> --
> Homepage: https://www.sesse.net/
Sorry, I meant to check the dates as well. I see now this is old; I
don't know myself whether it is still relevant.
I'm sure there was a window at the SSH level. It looks like SFTP had
a window on top of that:
http://www.chiark.greenend.org.uk/~sgtatham/putty/wishlist/flow-control.html
I'm not certain whether the same applies to SCP, and if so which
layer(s) had the bigger problem.
Alan
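A cheap way to separate SSH's channel windowing from TCP behavior is to
rerun the transfer with a tool that has no application-level window, e.g.
iperf3 (the hostname is a placeholder):

```shell
# On the receiving host:
iperf3 -s

# On the sending host (single TCP stream, 30 seconds):
iperf3 -c receiver.example.net -t 30

# If iperf3 reaches full rate while ssh+tar+gnupg does not, the limit is
# in the SSH channel window (or the application), not in BBR/TCP.
```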
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-02 15:52 [Bloat] TCP BBR paper is now generally available Dave Taht
2016-12-02 19:15 ` Aaron Wood
@ 2016-12-08 8:24 ` Mikael Abrahamsson
2016-12-08 13:22 ` Dave Täht
1 sibling, 1 reply; 37+ messages in thread
From: Mikael Abrahamsson @ 2016-12-08 8:24 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
On Fri, 2 Dec 2016, Dave Taht wrote:
> http://queue.acm.org/detail.cfm?id=3022184
"BBR converges toward a fair share of the bottleneck bandwidth whether
competing with other BBR flows or with loss-based congestion control."
That's not what I took away from your tests of having BBR and Cubic flows
together, where BBR just killed Cubic dead.
What has changed since? Have you re-done your tests with whatever has
changed? If so, I must have missed that. Or did I misunderstand?
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-08 8:24 ` Mikael Abrahamsson
@ 2016-12-08 13:22 ` Dave Täht
2016-12-08 14:01 ` Mikael Abrahamsson
0 siblings, 1 reply; 37+ messages in thread
From: Dave Täht @ 2016-12-08 13:22 UTC (permalink / raw)
To: bloat
Drop tail works better than any single-queue AQM in this scenario.
On 12/8/16 12:24 AM, Mikael Abrahamsson wrote:
> On Fri, 2 Dec 2016, Dave Taht wrote:
>
>> http://queue.acm.org/detail.cfm?id=3022184
>
> "BBR converges toward a fair share of the bottleneck bandwidth whether
> competing with other BBR flows or with loss-based congestion control."
>
> That's not what I took away from your tests of having BBR and Cubic
> flows together, where BBR just killed Cubic dead.
>
> What has changed since? Have you re-done your tests with whatever has
> changed, I must have missed that? Or did I misunderstand?
>
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-08 13:22 ` Dave Täht
@ 2016-12-08 14:01 ` Mikael Abrahamsson
2016-12-08 21:29 ` Neal Cardwell
0 siblings, 1 reply; 37+ messages in thread
From: Mikael Abrahamsson @ 2016-12-08 14:01 UTC (permalink / raw)
To: Dave Täht; +Cc: bloat
[-- Attachment #1: Type: TEXT/PLAIN, Size: 390 bytes --]
On Thu, 8 Dec 2016, Dave Täht wrote:
> drop tail works better than any single queue aqm in this scenario.
*confused*
I see nothing in the BBR paper about how it interoperates with other
TCP algorithms. Your text above didn't help me at all.
How is BBR going to be deployed? Is nobody interested in how it behaves in a
mixed environment?
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-08 14:01 ` Mikael Abrahamsson
@ 2016-12-08 21:29 ` Neal Cardwell
2016-12-08 22:31 ` Yuchung Cheng
0 siblings, 1 reply; 37+ messages in thread
From: Neal Cardwell @ 2016-12-08 21:29 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: Dave Täht, bloat
[-- Attachment #1: Type: text/plain, Size: 3637 bytes --]
Hi Mikael,
Thanks for your questions. Yes, we do care about how BBR behaves in mixed
environments, and particularly in mixed environments with Reno and CUBIC.
And we are actively working in this and related areas.
For the ACM Queue article we faced very hard and tight word count
constraints, so unfortunately we were not able to go into as much detail as
we wanted for the "Competition with Loss-Based Congestion Control" section.
In our recent talk at the ICCRG session at IETF 97 we were able to go into
more detail on the question of sharing paths with loss-based CC like Reno
and CUBIC (in particular please see slides 22-25):
https://www.ietf.org/proceedings/97/slides/slides-97-iccrg-bbr-congestion-control-02.pdf
There is also a video; the BBR section starts around 57:35:
https://www.youtube.com/watch?v=qjWTULVbiVc
In summary, with the initial BBR release:
o BBR and CUBIC end up with roughly equal shares when there is around 1-2x
BDP of FIFO buffer.
o When a FIFO buffer is deeper than that, as everyone on this list well
knows, CUBIC/Reno will dump excessive packets in the queue; in such
bufferbloated cases BBR will get a slightly lower share of throughput than
CUBIC (see slide 23-24). I say "slightly" because BBR's throughput drops
off only very gradually, as you can see. And that's because of the dynamic
in the passage from the ACM Queue paper you cited: "Even as loss-based
congestion control fills the available buffer, ProbeBW still robustly moves
the BtlBw estimate toward the flow's fair share, and ProbeRTT finds an
RTProp estimate just high enough for tit-for-tat convergence to a fair
share." (I guess that last "to" should probably have been "toward".)
o When a buffer is shallower than 1-2x BDP, or has an AQM targeting less
than 1-2 BDP of queue, then BBR will tend to end up with a higher share of
bandwidth than CUBIC or Reno (I think the tests you were referencing fall
into that category). Sometimes that is because the buffer is so shallow
that the multiplicative backoff of CUBIC/Reno causes the bottleneck to be
underutilized; in such cases then BBR is merely using underutilized
bandwidth, and its higher share is a good thing. In more moderately sized
buffers in the 0-2x BDP range (or AQM-managed buffers), our active work
under way right now (see slide 22) should improve things, based on our
experiments in the lab and on YouTube. Basically the two approaches we are
currently experimenting with are (1) explicitly trying to more fully drain
the queue more often, to try to get much closer to inflight==BDP each gain
cycle, and (2) estimate the buffer available to our flow and modulate
the probing magnitude/frequency.
In summary, our #1 priority for BBR right now is reducing queue pressure,
in order to reduce delay and packet loss, and improve fairness when sharing
paths with loss-based congestion control like CUBIC/Reno.
cheers,
neal
On Thu, Dec 8, 2016 at 9:01 AM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 8 Dec 2016, Dave Täht wrote:
>
> drop tail works better than any single queue aqm in this scenario.
>>
>
> *confused*
>
> I see nothing in the BBR paper about how it interoperates with other TCP
> algorithms. Your text above didn't help me at all.
>
> How is BBR going to be deployed? Is nobody interested how it behaves in a
> mixed environment?
>
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
[-- Attachment #2: Type: text/html, Size: 4832 bytes --]
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-08 21:29 ` Neal Cardwell
@ 2016-12-08 22:31 ` Yuchung Cheng
2016-12-09 14:52 ` Klatsky, Carl
0 siblings, 1 reply; 37+ messages in thread
From: Yuchung Cheng @ 2016-12-08 22:31 UTC (permalink / raw)
To: Neal Cardwell, Mikael Abrahamsson; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 4398 bytes --]
Also we are aware docsis pie is going to be deployed and we'll specifically
test that scenario. With fq this issue is a lot smaller but we understand
it is not the preferred setting in some aqm deployments, for other good reasons.
But to set the expectation right, we are not going to make bbr perfectly
flow level fair with cubic or reno. I am happy to argue why that makes no
sense. We do want to avoid starvation of either.
On Thu, Dec 8, 2016, 1:29 PM Neal Cardwell <ncardwell@google.com> wrote:
> Hi Mikael,
>
> Thanks for your questions. Yes, we do care about how BBR behaves in mixed
> environments, and particularly in mixed environments with Reno and CUBIC.
> And we are actively working in this and related areas.
>
> For the ACM Queue article we faced very hard and tight word count
> constraints, so unfortunately we were not able to go into as much detail as
> we wanted for the "Competition with Loss-Based Congestion Control" section.
>
> In our recent talk at the ICCRG session at IETF 97 we were able to go into
> more detail on the question of sharing paths with loss-based CC like Reno
> and CUBIC (in particular please see slides 22-25):
>
>
> https://www.ietf.org/proceedings/97/slides/slides-97-iccrg-bbr-congestion-control-02.pdf
>
> There is also a video; the BBR section starts around 57:35:
> https://www.youtube.com/watch?v=qjWTULVbiVc
>
> In summary, with the initial BBR release:
>
> o BBR and CUBIC end up with roughly equal shares when there is around 1-2x
> BDP of FIFO buffer.
>
> o When a FIFO buffer is deeper than that, as everyone on this list well
> knows, CUBIC/Reno will dump excessive packets in the queue; in such
> bufferbloated cases BBR will get a slightly lower share of throughput than
> CUBIC (see slide 23-24). I say "slightly" because BBR's throughput drops
> off only very gradually, as you can see. And that's because of the dynamic
> in the passage from the ACM Queue paper you cited: "Even as loss-based
> congestion control fills the available buffer, ProbeBW still robustly moves
> the BtlBw estimate toward the flow's fair share, and ProbeRTT finds an
> RTProp estimate just high enough for tit-for-tat convergence to a fair
> share." (I guess that last "to" should probably have been "toward".)
>
> o When a buffer is shallower than 1-2x BDP, or has an AQM targeting less
> than 1-2 BDP of queue, then BBR will tend to end up with a higher share of
> bandwidth than CUBIC or Reno (I think the tests you were referencing fall
> into that category). Sometimes that is because the buffer is so shallow
> that the multiplicative backoff of CUBIC/Reno causes the bottleneck to be
> underutilized; in such cases then BBR is merely using underutilized
> bandwidth, and its higher share is a good thing. In more moderately sized
> buffers in the 0-2x BDP range (or AQM-managed buffers), our active work
> under way right now (see slide 22) should improve things, based on our
> experiments in the lab and on YouTube. Basically the two approaches we are
> currently experimenting with are (1) explicitly trying to more fully drain
> the queue more often, to try to get much closer to inflight==BDP each gain
> cycle, and (2) estimate the buffer available to our flow and modulate
> the probing magnitude/frequency.
>
> In summary, our #1 priority for BBR right now is reducing queue pressure,
> in order to reduce delay and packet loss, and improve fairness when sharing
> paths with loss-based congestion control like CUBIC/Reno.
>
> cheers,
> neal
>
>
>
> On Thu, Dec 8, 2016 at 9:01 AM, Mikael Abrahamsson <swmike@swm.pp.se>
> wrote:
>
> On Thu, 8 Dec 2016, Dave Täht wrote:
>
> drop tail works better than any single queue aqm in this scenario.
>
>
> *confused*
>
> I see nothing in the BBR paper about how it interoperates with other TCP
> algorithms. Your text above didn't help me at all.
>
> How is BBR going to be deployed? Is nobody interested how it behaves in a
> mixed environment?
>
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #2: Type: text/html, Size: 7395 bytes --]
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [Bloat] TCP BBR paper is now generally available
2016-12-08 22:31 ` Yuchung Cheng
@ 2016-12-09 14:52 ` Klatsky, Carl
0 siblings, 0 replies; 37+ messages in thread
From: Klatsky, Carl @ 2016-12-09 14:52 UTC (permalink / raw)
To: Yuchung Cheng, Neal Cardwell, Mikael Abrahamsson; +Cc: bloat
[-- Attachment #1: Type: text/plain, Size: 5238 bytes --]
Yuchung,
Regarding DOCSIS PIE in upcoming DOCSIS 3.1 equipment: single-queue PIE will first be deployed on the D3.1 cable modem, governing the upstream direction. So a test of BBR and other CCs in a mixed environment would be run with the BBR and other-CC sending sources on a server/compute node behind the cable modem, sending data upstream to some receiver. I do not see DOCSIS PIE on the CMTS in the downstream direction in the near term.
Regards,
Carl Klatsky
From: Bloat [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Yuchung Cheng
Sent: Thursday, December 08, 2016 5:31 PM
To: Neal Cardwell <ncardwell@google.com<mailto:ncardwell@google.com>>; Mikael Abrahamsson <swmike@swm.pp.se<mailto:swmike@swm.pp.se>>
Cc: bloat <bloat@lists.bufferbloat.net<mailto:bloat@lists.bufferbloat.net>>
Subject: Re: [Bloat] TCP BBR paper is now generally available
Also we are aware docsis pie is going to be deployed and we'll specifically test that scenario. With fq this issue is a lot smaller but we understand it is not the preferred setting in some aqm deployments, for other good reasons.
But to set the expectation right, we are not going to make bbr perfectly flow-level fair with cubic or reno. I am happy to argue why that makes no sense. We do want to avoid starvation of either.
On Thu, Dec 8, 2016, 1:29 PM Neal Cardwell <ncardwell@google.com<mailto:ncardwell@google.com>> wrote:
Hi Mikael,
Thanks for your questions. Yes, we do care about how BBR behaves in mixed environments, and particularly in mixed environments with Reno and CUBIC. And we are actively working in this and related areas.
For the ACM Queue article we faced very hard and tight word count constraints, so unfortunately we were not able to go into as much detail as we wanted for the "Competition with Loss-Based Congestion Control" section.
In our recent talk at the ICCRG session at IETF 97 we were able to go into more detail on the question of sharing paths with loss-based CC like Reno and CUBIC (in particular please see slides 22-25):
https://www.ietf.org/proceedings/97/slides/slides-97-iccrg-bbr-congestion-control-02.pdf
There is also a video; the BBR section starts around 57:35:
https://www.youtube.com/watch?v=qjWTULVbiVc
In summary, with the initial BBR release:
o BBR and CUBIC end up with roughly equal shares when there is around 1-2x BDP of FIFO buffer.
o When a FIFO buffer is deeper than that, as everyone on this list well knows, CUBIC/Reno will dump excessive packets in the queue; in such bufferbloated cases BBR will get a slightly lower share of throughput than CUBIC (see slide 23-24). I say "slightly" because BBR's throughput drops off only very gradually, as you can see. And that's because of the dynamic in the passage from the ACM Queue paper you cited: "Even as loss-based congestion control fills the available buffer, ProbeBW still robustly moves the BtlBw estimate toward the flow's fair share, and ProbeRTT finds an RTProp estimate just high enough for tit-for-tat convergence to a fair share." (I guess that last "to" should probably have been "toward".)
o When a buffer is shallower than 1-2x BDP, or has an AQM targeting less than 1-2x BDP of queue, then BBR will tend to end up with a higher share of bandwidth than CUBIC or Reno (I think the tests you were referencing fall into that category). Sometimes that is because the buffer is so shallow that the multiplicative backoff of CUBIC/Reno causes the bottleneck to be underutilized; in such cases BBR is merely using underutilized bandwidth, and its higher share is a good thing. In more moderately sized buffers in the 0-2x BDP range (or AQM-managed buffers), our active work under way right now (see slide 22) should improve things, based on our experiments in the lab and on YouTube. Basically the two approaches we are currently experimenting with are (1) explicitly trying to drain the queue more fully and more often, to get much closer to inflight==BDP each gain cycle, and (2) estimating the buffer available to our flow and modulating the probing magnitude/frequency.
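[A back-of-the-envelope illustration of the buffer regimes Neal describes above. The regime names and example numbers here are my own, not from the talk; the thresholds follow the 1-2x BDP boundaries in the two bullets.]

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: the bytes in flight needed to fill the pipe."""
    return bandwidth_bps / 8 * rtt_s

def buffer_regime(buffer_bytes, bandwidth_bps, rtt_s):
    """Classify a FIFO bottleneck buffer per the regimes described above."""
    bdp = bdp_bytes(bandwidth_bps, rtt_s)
    if buffer_bytes < bdp:
        return "shallow"        # BBR tends to get a higher share than CUBIC/Reno
    elif buffer_bytes <= 2 * bdp:
        return "balanced"       # roughly equal shares with CUBIC
    else:
        return "bufferbloated"  # CUBIC/Reno fill the queue; BBR slightly lower share

# Example: 100 Mbit/s link with 40 ms RTT -> BDP = 500,000 bytes,
# so a 2 MB FIFO is well past the 2x BDP boundary.
print(bdp_bytes(100e6, 0.040))                  # 500000.0
print(buffer_regime(2_000_000, 100e6, 0.040))   # bufferbloated
```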
In summary, our #1 priority for BBR right now is reducing queue pressure, in order to reduce delay and packet loss, and improve fairness when sharing paths with loss-based congestion control like CUBIC/Reno.
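[For readers unfamiliar with the ProbeBW gain cycle referenced throughout this exchange: in the initial BBR release it is an eight-phase cycle of pacing gains applied to the BtlBw estimate. This is a simplified sketch; the real implementation randomizes the cycle's starting phase and advances phases on RTT boundaries.]

```python
# BBR v1 ProbeBW pacing-gain cycle: probe up at 1.25x, drain at 0.75x,
# then cruise at the estimated bottleneck bandwidth for six phases.
PROBE_BW_GAINS = [1.25, 0.75, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

def pacing_rate(btlbw_bps, phase):
    """Pacing rate for a given cycle phase (phase advances roughly once per RTT)."""
    return PROBE_BW_GAINS[phase % len(PROBE_BW_GAINS)] * btlbw_bps

# Over one full cycle the gains average to 1.0, so BBR paces at BtlBw
# on average while periodically probing for more bandwidth -- this is
# the "continuous moderate" probing that interacts with shallow buffers.
btlbw = 10e6  # 10 Mbit/s BtlBw estimate (example value)
rates = [pacing_rate(btlbw, p) for p in range(8)]
print(sum(rates) / len(rates))  # 10000000.0 -- averages to BtlBw
```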
cheers,
neal
On Thu, Dec 8, 2016 at 9:01 AM, Mikael Abrahamsson <swmike@swm.pp.se<mailto:swmike@swm.pp.se>> wrote:
On Thu, 8 Dec 2016, Dave Täht wrote:
drop tail works better than any single queue aqm in this scenario.
*confused*
I see nothing in the BBR paper about how it interoperates with other TCP algorithms. Your text above didn't help me at all.
How is BBR going to be deployed? Is nobody interested how it behaves in a mixed environment?
--
Mikael Abrahamsson email: swmike@swm.pp.se<mailto:swmike@swm.pp.se>
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net<mailto:Bloat@lists.bufferbloat.net>
https://lists.bufferbloat.net/listinfo/bloat