revolutions per minute - a new metric for measuring responsiveness
* [Rpm] *under*bloated networks
@ 2021-11-23  3:35 Dave Taht
  2021-11-28 15:42 ` [Rpm] [Bloat] " Neal Cardwell
  0 siblings, 1 reply; 3+ messages in thread
From: Dave Taht @ 2021-11-23  3:35 UTC (permalink / raw)
  To: bloat, Rpm

In the last two weeks I have found two dramatically underbuffered Gbit
fiber networks.

This one appears to have an uplink buffer of about 400 full-size packets (~5 ms)[1]

https://imgur.com/a/Bm9hdNf
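
As a rough sanity check on that 5 ms figure, a back-of-the-envelope
sketch in Python (the 1514-byte full-size Ethernet frame and the
1 Gbit/s uplink rate are assumptions, not measured values):

    # Sketch: worst-case drain time of a 400-packet FIFO,
    # assuming full-size Ethernet frames on a 1 Gbit/s uplink.
    PACKETS = 400
    FRAME_BYTES = 1514            # 1500-byte MTU + 14-byte Ethernet header
    LINK_BPS = 1_000_000_000      # assumed 1 Gbit/s uplink

    drain_ms = PACKETS * FRAME_BYTES * 8 / LINK_BPS * 1000
    print(f"queue drain time: {drain_ms:.1f} ms")   # ~4.8 ms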

It was pretty remarkable to see how well multiple TCP flows eventually
achieved close to the full rate with such a small fixed-size queue.

A single BBR flow can't crack 150 Mbit/s: https://imgur.com/a/DpydL5K

[1] Data courtesy of testing "eethaw"'s eero 6 with its new hardware
fq_codel offload, which is seemingly good to a gigabit, at least in one
direction.

https://www.reddit.com/r/eero/comments/qxbkcl/66_is_out/


-- 
I tried to build a better future, a few times:
https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org

Dave Täht CEO, TekLibre, LLC

* Re: [Rpm] [Bloat] *under*bloated networks
  2021-11-23  3:35 [Rpm] *under*bloated networks Dave Taht
@ 2021-11-28 15:42 ` Neal Cardwell
  2021-11-28 18:24   ` Dave Taht
  0 siblings, 1 reply; 3+ messages in thread
From: Neal Cardwell @ 2021-11-28 15:42 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat, Rpm


On Mon, Nov 22, 2021 at 10:35 PM Dave Taht <dave.taht@gmail.com> wrote:

> In the last two weeks I have found two dramatically underbuffered Gbit
> fiber networks.
>
> This one appears to have an uplink buffer of about 400 full-size
> packets (~5 ms)[1]
>
> https://imgur.com/a/Bm9hdNf
>
> It was pretty remarkable to see how well multiple TCP flows eventually
> achieved close to the full rate with such a small fixed-size queue.
>
> A single BBR flow can't crack 150 Mbit/s: https://imgur.com/a/DpydL5K
>

Thanks, Dave. The single-flow BBR upload case is interesting.

I took a look at the packet traces in the later thread:
  https://www.reddit.com/r/eero/comments/qxbkcl/66_is_out/hltlep0/

For the single-flow BBR case, it seems that:

(1) the BBR(v1) flow is running into a 300 Mbps bottleneck rate (visible in
the slope of the green ACK line in the zoomed-in trace, attached).

(2) the BBR(v1) flow is achieving an average rate a bit above 150 Mbps
because it repeatedly runs into receive window limits (the yellow line in
the zoomed-out trace, attached). The frequent receive window limits mean
that the flow spends a lot of time unable to send anything, thus leading to
lower average throughput.
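
The arithmetic behind that: a window-limited flow averages at most
rwnd/RTT, so both the window needed to fill a 300 Mbps bottleneck and
the ceiling a given window imposes fall out of one formula. A quick
sketch (the RTT and window size here are assumed example values, not
numbers read from the trace):

    # Sketch: throughput ceiling of a receive-window-limited TCP flow,
    #   rate <= rwnd / RTT
    # RTT and rwnd below are assumed example values.
    RTT_S = 0.060                 # assumed 60 ms round-trip time
    BOTTLENECK_BPS = 300e6        # 300 Mbps bottleneck rate from the trace

    # Window needed to keep a 300 Mbps pipe full at this RTT:
    rwnd_needed = BOTTLENECK_BPS * RTT_S / 8
    print(f"window needed: {rwnd_needed / 1e6:.2f} MB")        # ~2.25 MB

    # Conversely, the ceiling a given advertised window imposes:
    RWND_BYTES = 1e6              # assumed ~1 MB advertised window
    ceiling_mbps = RWND_BYTES * 8 / RTT_S / 1e6
    print(f"window-limited ceiling: {ceiling_mbps:.0f} Mbps")  # ~133 Mbps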

cheers,
neal

[-- Attachment #2: bloat-2021-11-22-underbloated-bbrv1-168Mbps-zoomed-in.png --]
[-- Type: image/png, Size: 113905 bytes --]

[-- Attachment #3: bloat-2021-11-22-underbloated-bbrv1-168Mbps-zoomed-out.png --]
[-- Type: image/png, Size: 135848 bytes --]

* Re: [Rpm] [Bloat] *under*bloated networks
  2021-11-28 15:42 ` [Rpm] [Bloat] " Neal Cardwell
@ 2021-11-28 18:24   ` Dave Taht
  0 siblings, 0 replies; 3+ messages in thread
From: Dave Taht @ 2021-11-28 18:24 UTC (permalink / raw)
  To: Neal Cardwell; +Cc: bloat, Rpm

On Sun, Nov 28, 2021 at 7:42 AM Neal Cardwell <ncardwell@google.com> wrote:
>
>
>
> On Mon, Nov 22, 2021 at 10:35 PM Dave Taht <dave.taht@gmail.com> wrote:
>>
>> In the last two weeks I have found two dramatically underbuffered Gbit
>> fiber networks.
>>
>> This one appears to have an uplink buffer of about 400 full-size packets (~5 ms)[1]
>>
>> https://imgur.com/a/Bm9hdNf
>>
>> It was pretty remarkable to see how well multiple TCP flows eventually
>> achieved close to the full rate with such a small fixed-size queue.

And baseline RTTs didn't increase by more than 5 ms before CUBIC saw a drop.

https://imgur.com/a/S5v2EF3

>>
>> A single BBR flow can't crack 150 Mbit/s: https://imgur.com/a/DpydL5K
>
>
> Thanks, Dave. The single-flow BBR upload case is interesting.
>
> I took a look at the packet traces in the later thread:
>   https://www.reddit.com/r/eero/comments/qxbkcl/66_is_out/hltlep0/
>
> For the single-flow BBR case, it seems that:
>
> (1) the BBR(v1) flow is running into a 300 Mbps bottleneck rate (visible in the slope of the green ACK line in the zoomed-in trace, attached).

This particular bottleneck is capable of a gigabit, according to the
8-flow tests. A single CUBIC flow peaked at 300 Mbit/s in earlier tests.

> (2) the BBR(v1) flow is achieving an average rate a bit above 150 Mbps because it repeatedly runs into receive window limits (the yellow line in the zoomed-out trace, attached). The frequent receive window limits mean that the flow spends a lot of time unable to send anything, thus leading to lower average throughput.

In all honesty, I haven't looked at BBR with long RTTs and rates above
100 Mbit/s often enough. (I have actually never had more than a 35 Mbit/s
uplink to the internet in my whole life.)  What I see here is a big
SACK block, and that pause, and yes, the window getting smaller... And,
universally, this fellow's link kicking flows out of slow start at
around 100 Mbit/s... but what does it mean?

This server is running 5.11.0-40-generic #44-Ubuntu SMP, in case there
is something I can tune.
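
One thing that might be worth checking is whether the receive-buffer
ceiling on the receiving host covers the bandwidth-delay product. A
sketch (the 300 Mbps rate and 60 ms RTT are assumed example numbers,
and Linux autotuning reserves part of the buffer for overhead, hence
the 2x margin):

    # Sketch: compare this host's TCP receive-buffer ceiling to the
    # bandwidth-delay product.  Rate and RTT are assumed example values.
    with open("/proc/sys/net/ipv4/tcp_rmem") as f:
        rmem_min, rmem_default, rmem_max = map(int, f.read().split())

    RATE_BPS = 300e6              # assumed bottleneck rate
    RTT_S = 0.060                 # assumed round-trip time
    bdp = RATE_BPS * RTT_S / 8    # bytes in flight needed to fill the pipe

    print(f"tcp_rmem max: {rmem_max / 1e6:.1f} MB, BDP: {bdp / 1e6:.2f} MB")
    if rmem_max < 2 * bdp:        # autotuning uses only part of the buffer
        print("tcp_rmem max looks too small to advertise a full window")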

Most speedtest sites use 8+ flows on the upload portion of the test...
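
That largely hides per-flow window limits. A rough sketch of why (the
per-flow ceiling and link rate are assumed example numbers):

    # Sketch: N parallel window-limited flows can still fill the link
    # even when one flow cannot.  All numbers are assumed examples.
    LINK_MBPS = 1000              # assumed 1 Gbit/s link
    PER_FLOW_CAP_MBPS = 150       # assumed window-limited per-flow ceiling

    for n in (1, 2, 4, 8):
        print(f"{n} flow(s): ~{min(n * PER_FLOW_CAP_MBPS, LINK_MBPS)} Mbps")
    # 1 flow: ~150 Mbps; 8 flows: ~1000 Mbps, so the per-flow limit
    # vanishes from the aggregate number a speedtest reports.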
>
> cheers,
> neal
>


-- 
I tried to build a better future, a few times:
https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org

Dave Täht CEO, TekLibre, LLC
