* [Cake] the grinch meets cloudflare's christmas present
@ 2023-01-04 17:26 Dave Taht
2023-01-04 19:20 ` [Cake] [Rpm] " jf
2023-01-06 16:38 ` [Cake] [LibreQoS] " MORTON JR., AL
0 siblings, 2 replies; 26+ messages in thread
From: Dave Taht @ 2023-01-04 17:26 UTC (permalink / raw)
To: bloat, libreqos, Cake List, Dave Taht via Starlink, Rpm, IETF IPPM WG
[-- Attachment #1: Type: text/plain, Size: 1383 bytes --]
Please try the new, the shiny, the really wonderful test here:
https://speed.cloudflare.com/
I would really appreciate some independent verification of
measurements using this tool. In my brief experiments it appears - as
all the commercial tools to date - to dramatically understate the
bufferbloat, on my LTE, (and my starlink terminal is out being
hacked^H^H^H^H^H^Hworked on, so I can't measure that)
My test of their test reports 223 ms of 5G latency under load, where
flent reports over 2 seconds. See comparison attached.
My guess is that this otherwise lovely new tool, like too many,
doesn't run for long enough. Admittedly, most web objects (their
target market) are small, and so long as they remain small and not
heavily pipelined this test is a very good start... but I'm pretty
sure cloudflare is used for bigger uploads and downloads than that.
There's no way to change the test to run longer either.
I'd love to get some results from other networks (compared, as usual, to
flent), especially ones with cake on them. I'd love to know whether they
correctly measure the lower minimum RTTs that can be obtained with
fq_codel or cake.
Love Always,
The Grinch
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #2: image.png --]
[-- Type: image/png, Size: 256990 bytes --]
[-- Attachment #3: tcp_nup-2023-01-04T090937.211620.LTE.flent.gz --]
[-- Type: application/gzip, Size: 25192 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 17:26 [Cake] the grinch meets cloudflare's christmas present Dave Taht
@ 2023-01-04 19:20 ` jf
2023-01-04 20:02 ` rjmcmahon
2023-01-05 4:25 ` [Cake] [Starlink] [Rpm] the grinch meets cloudflare's christmas present Dick Roy
2023-01-06 16:38 ` [Cake] [LibreQoS] " MORTON JR., AL
1 sibling, 2 replies; 26+ messages in thread
From: jf @ 2023-01-04 19:20 UTC (permalink / raw)
To: Dave Taht
Cc: bloat, libreqos, Cake List, Dave Taht via Starlink, Rpm, IETF IPPM WG
[-- Attachment #1: Type: text/plain, Size: 489 bytes --]
HNY Dave and all the rest,
Great to see yet another capacity test add latency metrics to the results. This one looks like a good start.
Results from my Windstream DOCSIS 3.1 line (3.1 on download only, up is 3.0) Gigabit down / 35Mbps up provisioning. Using an IQrouter Pro (an i5 x86) with Cake set for 710/31 as this ISP can’t deliver reliable low-latency unless you shave a good bit off the targets. My local loop is pretty congested.
Here’s the latest Cloudflare test:
[-- Attachment #2: CFSpeedTest_Gig35_20230104.png --]
[-- Type: image/png, Size: 379539 bytes --]
[-- Attachment #3: Type: text/plain, Size: 41 bytes --]
And an Ookla test run just afterward:
[-- Attachment #4: Speedtest_net_Gig35_20230104.png --]
[-- Type: image/png, Size: 40589 bytes --]
[-- Attachment #5: Type: text/plain, Size: 1897 bytes --]
They are definitely both in the ballpark and correspond to other tests run from the router itself or my (wired) MacBook Pro.
Cheers,
Jonathan
> On Jan 4, 2023, at 12:26 PM, Dave Taht via Rpm <rpm@lists.bufferbloat.net> wrote:
>
> Please try the new, the shiny, the really wonderful test here:
> https://speed.cloudflare.com/
>
> I would really appreciate some independent verification of
> measurements using this tool. In my brief experiments it appears - as
> all the commercial tools to date - to dramatically understate the
> bufferbloat, on my LTE, (and my starlink terminal is out being
> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>
> My test of their test reports 223ms 5G latency under load , where
> flent reports over 2seconds. See comparison attached.
>
> My guess is that this otherwise lovely new tool, like too many,
> doesn't run for long enough. Admittedly, most web objects (their
> target market) are small, and so long as they remain small and not
> heavily pipelined this test is a very good start... but I'm pretty
> sure cloudflare is used for bigger uploads and downloads than that.
> There's no way to change the test to run longer either.
>
> I'd love to get some results from other networks (compared as usual to
> flent), especially ones with cake on it. I'd love to know if they
> measured more minimum rtts that can be obtained with fq_codel or cake,
> correctly.
>
> Love Always,
> The Grinch
>
> --
> This song goes out to all the folk that thought Stadia would work:
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
> <image.png><tcp_nup-2023-01-04T090937.211620.LTE.flent.gz>_______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 19:20 ` [Cake] [Rpm] " jf
@ 2023-01-04 20:02 ` rjmcmahon
2023-01-05 11:11 ` [Cake] [Bloat] " Sebastian Moeller
2023-01-05 4:25 ` [Cake] [Starlink] [Rpm] the grinch meets cloudflare's christmas present Dick Roy
1 sibling, 1 reply; 26+ messages in thread
From: rjmcmahon @ 2023-01-04 20:02 UTC (permalink / raw)
To: jf
Cc: Dave Taht, Dave Taht via Starlink, IETF IPPM WG, libreqos,
Cake List, Rpm, bloat
Curious as to why people keep calling capacity tests speed tests? A semi at
55 mph isn't faster than a Porsche at 141 mph just because its load volume
is larger.
Bob
> HNY Dave and all the rest,
>
> Great to see yet another capacity test add latency metrics to the
> results. This one looks like a good start.
>
> Results from my Windstream DOCSIS 3.1 line (3.1 on download only, up
> is 3.0) Gigabit down / 35Mbps up provisioning. Using an IQrouter Pro
> (an i5 x86) with Cake set for 710/31 as this ISP can’t deliver
> reliable low-latency unless you shave a good bit off the targets. My
> local loop is pretty congested.
>
> Here’s the latest Cloudflare test:
>
>
>
>
> And an Ookla test run just afterward:
>
>
>
>
> They are definitely both in the ballpark and correspond to other tests
> run from the router itself or my (wired) MacBook Pro.
>
> Cheers,
>
> Jonathan
>
>
>> On Jan 4, 2023, at 12:26 PM, Dave Taht via Rpm
>> <rpm@lists.bufferbloat.net> wrote:
>>
>> Please try the new, the shiny, the really wonderful test here:
>> https://speed.cloudflare.com/
>>
>> I would really appreciate some independent verification of
>> measurements using this tool. In my brief experiments it appears - as
>> all the commercial tools to date - to dramatically understate the
>> bufferbloat, on my LTE, (and my starlink terminal is out being
>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>>
>> My test of their test reports 223ms 5G latency under load , where
>> flent reports over 2seconds. See comparison attached.
>>
>> My guess is that this otherwise lovely new tool, like too many,
>> doesn't run for long enough. Admittedly, most web objects (their
>> target market) are small, and so long as they remain small and not
>> heavily pipelined this test is a very good start... but I'm pretty
>> sure cloudflare is used for bigger uploads and downloads than that.
>> There's no way to change the test to run longer either.
>>
>> I'd love to get some results from other networks (compared as usual to
>> flent), especially ones with cake on it. I'd love to know if they
>> measured more minimum rtts that can be obtained with fq_codel or cake,
>> correctly.
>>
>> Love Always,
>> The Grinch
>>
>> --
>> This song goes out to all the folk that thought Stadia would work:
>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>> Dave Täht CEO, TekLibre, LLC
>> <image.png><tcp_nup-2023-01-04T090937.211620.LTE.flent.gz>_______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
>
>
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 19:20 ` [Cake] [Rpm] " jf
2023-01-04 20:02 ` rjmcmahon
@ 2023-01-05 4:25 ` Dick Roy
1 sibling, 0 replies; 26+ messages in thread
From: Dick Roy @ 2023-01-05 4:25 UTC (permalink / raw)
To: jf, 'Dave Taht'
Cc: 'IETF IPPM WG', 'libreqos', 'Cake List',
'Rpm', 'bloat'
[-- Attachment #1: Type: text/plain, Size: 2730 bytes --]
HNY to all!
Seems to me that we often get distracted by nomenclature needlessly.
Perhaps it's time to agree on the lexicon that should be used going forward
so as to avoid such distractions.
Perhaps a place to start is "the technical facts":
1) "capacity" is a property of a link (or links) that specifies the
theoretically achievable maximum error-free transmission rate of
data/information through a noisy channel (or channels, the multidimensiaonl
version of the capacity theorem). Yes, it's much more complicated than that
in general, however the basic principle is easy to understand. "You can only
get so much water through a hose of size X with an applied pressure of
magnitude Y.")
2) "maximum achievable throughput/data-rate" of a channel is the maximum
rate (always <= channel capacity) at which information can be exchanged in
the channel as implemented (under all conditions).
3) achieved/measured "data rate" is the measured/estimated rate of
information transmission (always <= "maximum achievable rate" for that
channel) in a channel under a given set of conditions.
4) "latency" is the amount of time it takes information to get from a
source to its destination (there may be multiple destinations each with
different latencies :-)). Latency may (or may not) include the unavoidable
consequence of the laws of physics that state information can not travel
faster than the "speed" of light (actually the "speed" in whatever medium
and by whatever mode the information is actually being transported)! Tin
cans and strings have a transmission speed that depends critically on how
hard each person at the end of the "link" are pulling on their cans! :-) The
point is that when included, information transmission times from source to
destination set a lower bound on the "latency" of that link/channel.
5) . (feel free to add more :-)
My two cents!
RR
-----Original Message-----
From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
jf--- via Starlink
Sent: Wednesday, January 4, 2023 11:20 AM
To: Dave Taht
Cc: Dave Taht via Starlink; IETF IPPM WG; libreqos; Cake List; Rpm; bloat
Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas
present
HNY Dave and all the rest,
Great to see yet another capacity test add latency metrics to the results.
This one looks like a good start.
Results from my Windstream DOCSIS 3.1 line (3.1 on download only, up is 3.0)
Gigabit down / 35Mbps up provisioning. Using an IQrouter Pro (an i5 x86)
with Cake set for 710/31 as this ISP can't deliver reliable low-latency
unless you shave a good bit off the targets. My local loop is pretty
congested.
Here's the latest Cloudflare test:
[-- Attachment #2: Type: text/html, Size: 8886 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 20:02 ` rjmcmahon
@ 2023-01-05 11:11 ` Sebastian Moeller
2023-01-06 0:30 ` [Cake] [Starlink] [Bloat] [Rpm] the grinch meets cloudflare'schristmas present Dick Roy
0 siblings, 1 reply; 26+ messages in thread
From: Sebastian Moeller @ 2023-01-05 11:11 UTC (permalink / raw)
To: rjmcmahon
Cc: jf, Cake List, IETF IPPM WG, libreqos, Dave Taht via Starlink,
Rpm, bloat
Hi Bob,
> On Jan 4, 2023, at 21:02, rjmcmahon via Bloat <bloat@lists.bufferbloat.net> wrote:
>
> Curious to why people keep calling capacity tests speed tests? A semi at 55 mph isn't faster than a porsche at 141 mph because its load volume is larger.
[SM] I am not sure whether answering the "why" is likely to get us closer to remedying the situation. IMHO we are unlikely to change that, just as we are unlikely to change the equally debatable use of "bandwidth" as a synonym for "maximal capacity"... These two ships have sailed, no matter how much shouting at clouds is going to happen ;)
My theory about the "why" is that this is entirely marketing driven: both device manufacturers/ISPs and end-users desire to keep things simple, so ideally a single number and a catchy name... "Speed" as in top speed was already a well-known quantity for motor vehicles that consumers as a group had accepted to correlate with price. Now purists will say that "speed" is already well-defined as distance/time and "amount of data" is not a viable distance measure (how many bits are there in a meter?), but since when have marketing and the desire for simple single-number "quality indicators" ever cared much for the complexities of the real world?
Also, remembering the old analog modem and ISDN days: at that time additional capacity truly was my main desire, so marketing by max capacity was relevant to me independent of what it was called, and I would not be amazed if I were not alone in that view. I guess that single measure and the wrong name simply stuck...
Personally I try to use rate instead of speed or bandwidth, but I note that I occasionally fail without even noticing it.
Technically I agree that one-way latency is more closely related to "speed", as between any two end-points there is always a path the information travels that has a "true" length, so speed could be defined as network-path-length/OWD, but that would only be the average speed over the path... I am not sure how informative or marketable this would be for end-users though ;)
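A rough Python sketch of that arithmetic, with a made-up 1000 km path and 25 ms OWD purely for illustration:
path_km = 1000.0                     # assumed one-way path length (illustrative)
owd_ms = 25.0                        # assumed one-way delay (illustrative)
speed_km_per_s = path_km / (owd_ms / 1000.0)   # "average speed" over the path
light_km_per_s = 299792.458                    # speed of light in vacuum
print(speed_km_per_s)                          # 40000.0 km/s
print(speed_km_per_s / light_km_per_s)         # ~0.133, i.e. about 13% of c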
Regards
Sebastian
>
> Bob
>> HNY Dave and all the rest,
>> Great to see yet another capacity test add latency metrics to the
>> results. This one looks like a good start.
>> Results from my Windstream DOCSIS 3.1 line (3.1 on download only, up
>> is 3.0) Gigabit down / 35Mbps up provisioning. Using an IQrouter Pro
>> (an i5 x86) with Cake set for 710/31 as this ISP can’t deliver
>> reliable low-latency unless you shave a good bit off the targets. My
>> local loop is pretty congested.
>> Here’s the latest Cloudflare test:
>> And an Ookla test run just afterward:
>> They are definitely both in the ballpark and correspond to other tests
>> run from the router itself or my (wired) MacBook Pro.
>> Cheers,
>> Jonathan
>>> On Jan 4, 2023, at 12:26 PM, Dave Taht via Rpm <rpm@lists.bufferbloat.net> wrote:
>>> Please try the new, the shiny, the really wonderful test here:
>>> https://speed.cloudflare.com/
>>> I would really appreciate some independent verification of
>>> measurements using this tool. In my brief experiments it appears - as
>>> all the commercial tools to date - to dramatically understate the
>>> bufferbloat, on my LTE, (and my starlink terminal is out being
>>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>>> My test of their test reports 223ms 5G latency under load , where
>>> flent reports over 2seconds. See comparison attached.
>>> My guess is that this otherwise lovely new tool, like too many,
>>> doesn't run for long enough. Admittedly, most web objects (their
>>> target market) are small, and so long as they remain small and not
>>> heavily pipelined this test is a very good start... but I'm pretty
>>> sure cloudflare is used for bigger uploads and downloads than that.
>>> There's no way to change the test to run longer either.
>>> I'd love to get some results from other networks (compared as usual to
>>> flent), especially ones with cake on it. I'd love to know if they
>>> measured more minimum rtts that can be obtained with fq_codel or cake,
>>> correctly.
>>> Love Always,
>>> The Grinch
>>> --
>>> This song goes out to all the folk that thought Stadia would work:
>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>> Dave Täht CEO, TekLibre, LLC
>>> <image.png><tcp_nup-2023-01-04T090937.211620.LTE.flent.gz>_______________________________________________
>>> Rpm mailing list
>>> Rpm@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/rpm
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Starlink] [Bloat] [Rpm] the grinch meets cloudflare'schristmas present
2023-01-05 11:11 ` [Cake] [Bloat] " Sebastian Moeller
@ 2023-01-06 0:30 ` Dick Roy
2023-01-06 2:33 ` rjmcmahon
2023-01-06 9:55 ` Sebastian Moeller
0 siblings, 2 replies; 26+ messages in thread
From: Dick Roy @ 2023-01-06 0:30 UTC (permalink / raw)
To: 'Sebastian Moeller', 'rjmcmahon'
Cc: 'IETF IPPM WG', jf, 'libreqos',
'Cake List', 'Rpm', 'bloat'
[-- Attachment #1: Type: text/plain, Size: 7949 bytes --]
-----Original Message-----
From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
Sebastian Moeller via Starlink
Sent: Thursday, January 5, 2023 3:12 AM
To: rjmcmahon
Cc: Dave Taht via Starlink; IETF IPPM WG; jf@jonathanfoulkes.com; libreqos;
Cake List; Rpm; bloat
Subject: Re: [Starlink] [Bloat] [Rpm] the grinch meets cloudflare'schristmas
present
Hi Bob,
> On Jan 4, 2023, at 21:02, rjmcmahon via Bloat
<bloat@lists.bufferbloat.net> wrote:
>
> Curious to why people keep calling capacity tests speed tests? A semi at
55 mph isn't faster than a porsche at 141 mph because its load volume is
larger.
[SM] I am not sure whether answering the "why" is likely to getting us
closer to remedy the situation. IMHO we are unlikely to change that just as
we are unlikely to change the equally debatable use of "bandwidth" as
synonym for "maximal capacity"... These two ships have sailed no matter how
much shouting at clouds is going to happen ;)
[RR] I hope that this is not true, however I am not doubting your assertion!
:-) The capacity of a channel of bandwidth W (in its simplest form) is
well-known to be:
C = W*log2(1 + P/N) in units of bits/sec
There is no such thing generally as "maximal capacity", only "capacity as a
function of the parameters of the channel P, N, and W", which turns out to be
the "maximum error-free (very important!) rate of information transfer"
given the power (P) of the transmission and the power (N) of the noise in
that channel of bandwidth W.
My theory about the way is, this is entirely marketing driven, both device
manufacturers/ISPs and end-users desire to keep things simple so ideally a
single number and a catchy name... "Speed" as in top-speed was already a
well known quantity for motor vehicles that consumers as a group had
accepted to correlate with price. Now purist will say that "speed" is
already well-defined as distance/time and "amount of data" is not a viable
distance measure (how many bits are there in a meter?), but since when has
marketing and the desire for simply single-number "quality indicators" ever
cared much for the complexities of the real world?
Also when remembering the old analog modem and ISDN days, at that time
additional capacity truly was my main desirable, so marketing by max
capacity was relevant to me independent of how this was called, I would not
be amazed if I was not alone with that view. I guess that single measure and
the wrong name simply stuck...
[RR] As I recall the old analog modem days, modems were "labeled" by their
achievable data rates, e.g. "this is a 14.4 kbps modem", and the notion of
achieving channel capacity was quite well-known, in that people actually
realized that at 56 kbps, modems were nearly at the capacity of those
mile-long twisted-pair copper wires to the CO with 3kHz bandwidth low-pass
filters on the end, and they could stop trying to build faster ones :-)
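To put numbers to that formula, here is a small Python sketch using illustrative (assumed, not measured) phone-line values of roughly 3 kHz of usable bandwidth and about 35 dB of SNR:
import math

def shannon_capacity(bandwidth_hz, snr_db):
    # C = W * log2(1 + P/N), with the P/N ratio given here in dB
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10.0))

# Assumed phone-line numbers, purely for illustration.
print(shannon_capacity(3000, 35))   # ~34900 bit/s, in the ballpark of V.34's 33.6 kbps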
Personally I try to use rate instead of speed or bandwidth, but I note that
I occasionally fail without even noticing it.
Technically I agree that one way latency is more closely related to "speed"
as between any two end-points there is always a path the information travels
that has a "true" length, so speed could be defined as
network-path-length/OWD, but that would only be the average speed over the
path... I am not sure how informative or marketable this wuld be for
end-users though ;)
[RR] Again, transit time is only one component of latency, and one that
could be accounted for by simply stating that the "minimal achievable latency"
for any given channel is the transit time of the information. Information
simply cannot flow faster than the speed of light in this universe as we
understand it today, so EVERY communication channel has a non-zero transit
time from source to destination. :-) Comparing latency to "speed of
transmission" is just not useful IMO, for just this reason. IMO, a more
useful concept of latency is the excess transit time over the theoretical
minimum that results from all the real-world "interruptions" in the
transmission path(s), including things like regeneration of optical signals
in long cables, switching of network layer protocols in gateways (header
manipulation above layer 4), and yes, of course, buffering in switches and
routers :-) These are things that can be "minimized" by appropriate system
design (the topic of these threads actually!). The only way to decrease
transit time is to "go wireless everywhere, eliminate our atmosphere, and
then get physically closer to each other"! :-) Like it or not, we live in a
Lorentz-ian space-time continuum also known as "our world" :-)
Cheers,
RR
Regards
Sebastian
>
> Bob
>> HNY Dave and all the rest,
>> Great to see yet another capacity test add latency metrics to the
>> results. This one looks like a good start.
>> Results from my Windstream DOCSIS 3.1 line (3.1 on download only, up
>> is 3.0) Gigabit down / 35Mbps up provisioning. Using an IQrouter Pro
>> (an i5 x86) with Cake set for 710/31 as this ISP can't deliver
>> reliable low-latency unless you shave a good bit off the targets. My
>> local loop is pretty congested.
>> Here's the latest Cloudflare test:
>> And an Ookla test run just afterward:
>> They are definitely both in the ballpark and correspond to other tests
>> run from the router itself or my (wired) MacBook Pro.
>> Cheers,
>> Jonathan
>>> On Jan 4, 2023, at 12:26 PM, Dave Taht via Rpm
<rpm@lists.bufferbloat.net> wrote:
>>> Please try the new, the shiny, the really wonderful test here:
>>> https://speed.cloudflare.com/
>>> I would really appreciate some independent verification of
>>> measurements using this tool. In my brief experiments it appears - as
>>> all the commercial tools to date - to dramatically understate the
>>> bufferbloat, on my LTE, (and my starlink terminal is out being
>>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>>> My test of their test reports 223ms 5G latency under load , where
>>> flent reports over 2seconds. See comparison attached.
>>> My guess is that this otherwise lovely new tool, like too many,
>>> doesn't run for long enough. Admittedly, most web objects (their
>>> target market) are small, and so long as they remain small and not
>>> heavily pipelined this test is a very good start... but I'm pretty
>>> sure cloudflare is used for bigger uploads and downloads than that.
>>> There's no way to change the test to run longer either.
>>> I'd love to get some results from other networks (compared as usual to
>>> flent), especially ones with cake on it. I'd love to know if they
>>> measured more minimum rtts that can be obtained with fq_codel or cake,
>>> correctly.
>>> Love Always,
>>> The Grinch
>>> --
>>> This song goes out to all the folk that thought Stadia would work:
>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>> Dave Täht CEO, TekLibre, LLC
>>> <image.png><tcp_nup-2023-01-04T090937.211620.LTE.flent.gz>_______________________________________________
>>> Rpm mailing list
>>> Rpm@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/rpm
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #2: Type: text/html, Size: 23017 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Starlink] [Bloat] [Rpm] the grinch meets cloudflare'schristmas present
2023-01-06 0:30 ` [Cake] [Starlink] [Bloat] [Rpm] the grinch meets cloudflare'schristmas present Dick Roy
@ 2023-01-06 2:33 ` rjmcmahon
2023-01-06 9:55 ` Sebastian Moeller
1 sibling, 0 replies; 26+ messages in thread
From: rjmcmahon @ 2023-01-06 2:33 UTC (permalink / raw)
To: dickroy
Cc: 'Sebastian Moeller', 'IETF IPPM WG',
jf, 'libreqos', 'Cake List', 'Rpm',
'bloat'
>
> [RR] ... IMO, a more useful concept of latency is the
> excess transit time over the theoretical minimum that results from all
> the real-world "interruptions" in the transmission path(s) including
> things like regeneration of optical signals in long cables, switching
> of network layer protocols in gateways (header manipulation above
> layer 4), and yes, of course, buffering in switches and routers :-)
> These are things that can be "minimized" by appropriate system design
> (the topic of these threads actually!).
I think this is worth repeating. Thanks for pointing it out. (I'm
wondering if better inline network telemetry can also help forwarding
planes use tech like segment routing to bypass and mitigate any
"temporal interruptions.")
> The only way to decrease transit time is to "go wireless everywhere,
> eliminate our atmosphere,
> and then get physically closer to each other"! :-) Like it or not, we
> live in a Lorentz-ian space-time continuum also known as "our world"
This reminds me of the spread networks approach (who then got beat out
by microwave for HFT.)
https://en.wikipedia.org/wiki/Spread_Networks
"According to a WIRED article, the estimated roundtrip time for an
ordinary cable is 14.5 milliseconds, giving users of Spread Networks a
slight advantage. However, because glass has a higher refractive index
than air (about 1.5 compared to about 1), the roundtrip time for fiber
optic cable transmission is 50% more than that for transmission through
the air. Some companies, such as McKay Brothers, Metrorede and
Tradeworx, are using air-based transmission to offer lower estimated
roundtrip times (8.2 milliseconds and 8.5 milliseconds respectively)
that are very close to the theoretical minimum possible (about 7.9-8
milliseconds)."
Bob
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Starlink] [Bloat] [Rpm] the grinch meets cloudflare'schristmas present
2023-01-06 0:30 ` [Cake] [Starlink] [Bloat] [Rpm] the grinch meets cloudflare'schristmas present Dick Roy
2023-01-06 2:33 ` rjmcmahon
@ 2023-01-06 9:55 ` Sebastian Moeller
1 sibling, 0 replies; 26+ messages in thread
From: Sebastian Moeller @ 2023-01-06 9:55 UTC (permalink / raw)
To: Dick Roy; +Cc: rjmcmahon, IETF IPPM WG, jf, libreqos, Cake List, Rpm, bloat
Hi RR,
> On Jan 6, 2023, at 01:30, Dick Roy <dickroy@alum.mit.edu> wrote:
>
>
>
> -----Original Message-----
> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of Sebastian Moeller via Starlink
> Sent: Thursday, January 5, 2023 3:12 AM
> To: rjmcmahon
> Cc: Dave Taht via Starlink; IETF IPPM WG; jf@jonathanfoulkes.com; libreqos; Cake List; Rpm; bloat
> Subject: Re: [Starlink] [Bloat] [Rpm] the grinch meets cloudflare'schristmas present
>
> Hi Bob,
>
>
> > On Jan 4, 2023, at 21:02, rjmcmahon via Bloat <bloat@lists.bufferbloat.net> wrote:
> >
> > Curious to why people keep calling capacity tests speed tests? A semi at 55 mph isn't faster than a porsche at 141 mph because its load volume is larger.
>
> [SM] I am not sure whether answering the "why" is likely to getting us closer to remedy the situation. IMHO we are unlikely to change that just as we are unlikely to change the equally debatable use of "bandwidth" as synonym for "maximal capacity"... These two ships have sailed no matter how much shouting at clouds is going to happen ;)
> [RR] I hope that this not true, however I am not doubting your assertion! :-)
[SM2] Yes, not my preference either way, but it is hard to overcome common usage... language sort of grows organically, with the occasional illogical side route.
> The capacity of a channel of bandwidth W (in its simplest form) is well-known to be:
>
> C = W*log2(1 + P/N) in units of bits/sec
>
> There is no such thing generally as “maximal capacity”, only “capacity as a function of the parameters of the channel P, N, and W” which turns out to be the “maximum error-free (very important!) rate of information transfer” given the power (P) of the transmission and the power (N) of the noise in that channel of bandwidth W.
[SM2] Mmmh, I thought that in telecommunications nobody aims for error-free, but only for "acceptable" levels of errors?
> My theory about the way is, this is entirely marketing driven, both device manufacturers/ISPs and end-users desire to keep things simple so ideally a single number and a catchy name... "Speed" as in top-speed was already a well known quantity for motor vehicles that consumers as a group had accepted to correlate with price. Now purist will say that "speed" is already well-defined as distance/time and "amount of data" is not a viable distance measure (how many bits are there in a meter?), but since when has marketing and the desire for simply single-number "quality indicators" ever cared much for the complexities of the real world?
> Also when remembering the old analog modem and ISDN days, at that time additional capacity truly was my main desirable, so marketing by max capacity was relevant to me independent of how this was called, I would not be amazed if I was not alone with that view. I guess that single measure and the wrong name simply stuck...
> [RR] As I recall the old analog modem days, modems were “labeled” by their achievable data rates, e.g. “this is a 14.4 kbps modem” and the notion of achieving channel capacity was quite well-known in that people actually realized that at 56 kbps, modems were nearly at the capacity of those mile-long twisted-pair copper wires to the CO with 3kHz bandwidth low-pass filters on the end and they could stop trying to build faster ones J
[SM] But IIRC 56K modems did not achieve this rate in both directions? So either something like 40/48 or 56/33, so at least in the V.90/V.92 class of 56K modems things already had become murky.
> Personally I try to use rate instead of speed or bandwidth, but I note that I occasionally fail without even noticing it.
>
> Technically I agree that one way latency is more closely related to "speed" as between any two end-points there is always a path the information travels that has a "true" length, so speed could be defined as network-path-length/OWD, but that would only be the average speed over the path... I am not sure how informative or marketable this wuld be for end-users though ;)
> [RR] Again, transit time is only one component of latency, and one that could be accounted for by simply stating the “minimal achievable latency” for any given channel is the transit time of the information. Information simply can not flow faster than the speed of light in this universe as we understand it today, so EVERY communication channel has a non-zero transit time from source to destination. J Comparing latency to “speed of transmission” is just not useful IMO for just this reason. IMO, a more useful concept of latency is the excess transit time over the theoretical minimum that results from all the real-world “interruptions” in the transmission path(s) including things like regeneration of optical signals in long cables, switching of network layer protocols in gateways (header manipulation above layer 4), and yes, of course, buffering in switches and routers J These are things that can be “minimized” by appropriate system design (the topic of these threads actually!). The only way to decrease transit time is to “go wireless everywhere, eliminate our atmosphere, and then get physically closer to each other”! J Like it or not, we live in a Lorentz-ian space-time continuum also know as “our world” J
[SM2] Pragmatically I think splitting delay into static and variable components gets the most bang for the buck: the static delay includes transit time, but also unavoidable queueing delay along the way (signal refresh, or being moved from one interface to another in a router), while the dynamic delay is the one that varies with a number of external factors like one's own rate, others' rates over shared links, variable rate over links like LTE... So far "speedtests" have typically only measured "idle" latency, which approximates the static delay; for most applications static delay can be worked around, while changes in the variable delay cause problems (if these changes are fast enough; very slow changes can be adapted to just as well as truly static delay).
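A minimal Python sketch of that split, with invented RTT samples purely for illustration:
def split_delay(rtt_ms_samples):
    # Static part ~ the minimum observed RTT; variable part ~ whatever rides on top.
    static_ms = min(rtt_ms_samples)
    variable_ms = [s - static_ms for s in rtt_ms_samples]
    return static_ms, variable_ms

# Invented samples (ms): a few idle pings plus a few taken under load.
samples = [23.1, 23.4, 24.0, 180.5, 212.9, 25.2]
static_ms, variable_ms = split_delay(samples)
print(static_ms, max(variable_ms))   # ~23.1 ms baseline, ~189.8 ms added under load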
Regards
Sebastian
>
> Cheers,
>
> RR
>
>
> Regards
> Sebastian
>
>
>
> >
> > Bob
> >> HNY Dave and all the rest,
> >> Great to see yet another capacity test add latency metrics to the
> >> results. This one looks like a good start.
> >> Results from my Windstream DOCSIS 3.1 line (3.1 on download only, up
> >> is 3.0) Gigabit down / 35Mbps up provisioning. Using an IQrouter Pro
> >> (an i5 x86) with Cake set for 710/31 as this ISP can’t deliver
> >> reliable low-latency unless you shave a good bit off the targets. My
> >> local loop is pretty congested.
> >> Here’s the latest Cloudflare test:
> >> And an Ookla test run just afterward:
> >> They are definitely both in the ballpark and correspond to other tests
> >> run from the router itself or my (wired) MacBook Pro.
> >> Cheers,
> >> Jonathan
> >>> On Jan 4, 2023, at 12:26 PM, Dave Taht via Rpm <rpm@lists.bufferbloat.net> wrote:
> >>> Please try the new, the shiny, the really wonderful test here:
> >>> https://speed.cloudflare.com/
> >>> I would really appreciate some independent verification of
> >>> measurements using this tool. In my brief experiments it appears - as
> >>> all the commercial tools to date - to dramatically understate the
> >>> bufferbloat, on my LTE, (and my starlink terminal is out being
> >>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
> >>> My test of their test reports 223ms 5G latency under load , where
> >>> flent reports over 2seconds. See comparison attached.
> >>> My guess is that this otherwise lovely new tool, like too many,
> >>> doesn't run for long enough. Admittedly, most web objects (their
> >>> target market) are small, and so long as they remain small and not
> >>> heavily pipelined this test is a very good start... but I'm pretty
> >>> sure cloudflare is used for bigger uploads and downloads than that.
> >>> There's no way to change the test to run longer either.
> >>> I'd love to get some results from other networks (compared as usual to
> >>> flent), especially ones with cake on it. I'd love to know if they
> >>> measured more minimum rtts that can be obtained with fq_codel or cake,
> >>> correctly.
> >>> Love Always,
> >>> The Grinch
> >>> --
> >>> This song goes out to all the folk that thought Stadia would work:
> >>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> >>> Dave Täht CEO, TekLibre, LLC
> >>> <image.png><tcp_nup-2023-01-04T090937.211620.LTE.flent.gz>_______________________________________________
> >>> Rpm mailing list
> >>> Rpm@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/rpm
> >> _______________________________________________
> >> Rpm mailing list
> >> Rpm@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/rpm
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-04 17:26 [Cake] the grinch meets cloudflare's christmas present Dave Taht
2023-01-04 19:20 ` [Cake] [Rpm] " jf
@ 2023-01-06 16:38 ` MORTON JR., AL
2023-01-06 20:38 ` [Cake] [Rpm] " rjmcmahon
1 sibling, 1 reply; 26+ messages in thread
From: MORTON JR., AL @ 2023-01-06 16:38 UTC (permalink / raw)
To: Dave Taht, bloat, libreqos, Cake List, Dave Taht via Starlink,
Rpm, IETF IPPM WG
[-- Attachment #1.1: Type: text/plain, Size: 2347 bytes --]
> -----Original Message-----
> From: LibreQoS <libreqos-bounces@lists.bufferbloat.net> On Behalf Of Dave Taht
> via LibreQoS
> Sent: Wednesday, January 4, 2023 12:26 PM
> Subject: [LibreQoS] the grinch meets cloudflare's christmas present
>
> Please try the new, the shiny, the really wonderful test here:
> https://speed.cloudflare.com/
>
> I would really appreciate some independent verification of
> measurements using this tool. In my brief experiments it appears - as
> all the commercial tools to date - to dramatically understate the
> bufferbloat, on my LTE, (and my starlink terminal is out being
> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
[acm]
Hi Dave, I made some time to test "cloudflare's christmas present" yesterday.
I'm on DOCSIS 3.1 service with 1Gbps Down. The Upstream has a "turbo" mode with 40-50Mbps for the first ~3 sec, then steady-state about 23Mbps.
When I saw the ~620Mbps Downstream measurement, I was ready to complain that even the IP-Layer Capacity was grossly underestimated. In addition, the Latency measurements seem very low (as you asserted), although the cloud server was “nearby”.
However, I found that Ookla and the ISP-provided measurement were also reporting ~600Mbps! So the cloudflare Downstream capacity (or throughput?) measurement was consistent with others. Our UDPST server was unreachable, otherwise I would have added that measurement, too.
The Upstream measurement graph seems to illustrate the “turbo” mode, with the dip after attaining 44.5Mbps.
UDPST saturates the uplink and we measure the full 250ms of the Upstream buffer. Cloudflare’s latency measurements don’t even come close.
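For scale, a rough Python sketch of what that implies (the ~23 Mbit/s steady-state rate is taken from above; the conversion itself is just delay times rate):
standing_delay_s = 0.250        # measured upstream standing queue, per UDPST above
drain_rate_bps = 23e6           # steady-state upstream rate quoted above
buffered_bytes = standing_delay_s * drain_rate_bps / 8
print(buffered_bytes)           # ~719000 bytes sitting in the upstream buffer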
Al
[Screen Shot 2023-01-05 at 5.54.26 PM.png][Screen Shot 2023-01-05 at 5.54.53 PM.png][Screen Shot 2023-01-05 at 5.55.39 PM.png]
[-- Attachment #1.2: Type: text/html, Size: 42872 bytes --]
[-- Attachment #2: image001.png --]
[-- Type: image/png, Size: 176230 bytes --]
[-- Attachment #3: image002.png --]
[-- Type: image/png, Size: 461849 bytes --]
[-- Attachment #4: image003.png --]
[-- Type: image/png, Size: 225012 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Rpm] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-06 16:38 ` [Cake] [LibreQoS] " MORTON JR., AL
@ 2023-01-06 20:38 ` rjmcmahon
2023-01-06 20:47 ` rjmcmahon
2023-01-06 23:29 ` [Cake] [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare'schristmas present Dick Roy
0 siblings, 2 replies; 26+ messages in thread
From: rjmcmahon @ 2023-01-06 20:38 UTC (permalink / raw)
To: MORTON JR., AL
Cc: Dave Taht, bloat, libreqos, Cake List, Dave Taht via Starlink,
Rpm, IETF IPPM WG
One thought is not to use UDP for testing here. Also, these speed
tests offer little to no information for network engineers about what's
going on. Iperf 2 may better assist network engineers, but then I'm
biased ;)
Below I'm running iperf 2 (https://sourceforge.net/projects/iperf2/) with
--trip-times, though the sampling and central-limit-theorem averaging
hides the real distributions (use --histograms to get those).
Below are 4 parallel TCP streams from my home to one of my servers in
the cloud: first where TCP is limited per the CCA, second with source-side
write rate limiting. Things to note:
o) connect times for both at 10-15 ms
o) multiple TCP retries on a few writes - one case is 4 retries on 5 writes.
Source side pacing eliminates retries
o) Fairness with CCA isn't great but quite good with source side write
pacing
o) Queue depth with CCA is about 150 KBytes vs about 100 KBytes with
source side pacing
o) min write to read is about 80 ms for both
o) max is 220 ms vs 97 ms
o) stdev for CCA write/read is 30 ms vs 3 ms
o) TCP RTT is 20 ms w/CCA and 90 ms with ssp - seems odd here as
TCP_QUICKACK and TCP_NODELAY are both enabled.
[ CT] final connect times (min/avg/max/stdev) =
10.326/13.522/14.986/2150.329 ms (tot/err) = 4/0
[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
--trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001 with pid 107678 (4
flows)
Write buffer size: 131072 Byte
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local *.*.*.85%enp4s0 port 42480 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=3) (qack)
(icwnd/mss/irtt=14/1448/10534) (ct=10.63 ms) on 2023-01-06 12:17:56
(PST)
[ 4] local *.*.*.85%enp4s0 port 42488 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=5) (qack)
(icwnd/mss/irtt=14/1448/14023) (ct=14.08 ms) on 2023-01-06 12:17:56
(PST)
[ 3] local *.*.*.85%enp4s0 port 42502 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=6) (qack)
(icwnd/mss/irtt=14/1448/14642) (ct=14.70 ms) on 2023-01-06 12:17:56
(PST)
[ 2] local *.*.*.85%enp4s0 port 42484 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=4) (qack)
(icwnd/mss/irtt=14/1448/14728) (ct=14.79 ms) on 2023-01-06 12:17:56
(PST)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
Cwnd/RTT(var) NetPwr
...
[ 4] 4.00-5.00 sec 1.38 MBytes 11.5 Mbits/sec 11/0 3
29K/21088(1142) us 68.37
[ 2] 4.00-5.00 sec 1.62 MBytes 13.6 Mbits/sec 13/0 2
31K/19284(612) us 88.36
[ 1] 4.00-5.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
16K/18996(658) us 48.30
[ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 5
18K/18133(208) us 57.83
[SUM] 4.00-5.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 15
[ 4] 5.00-6.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 4
29K/14717(489) us 89.06
[ 1] 5.00-6.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 4
16K/15874(408) us 66.06
[ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
16K/15826(382) us 74.54
[ 2] 5.00-6.00 sec 1.50 MBytes 12.6 Mbits/sec 12/0 6
9K/14878(557) us 106
[SUM] 5.00-6.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 18
[ 4] 6.00-7.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
25K/15472(496) us 119
[ 2] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 2
26K/16417(427) us 63.87
[ 1] 6.00-7.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 5
16K/16268(679) us 80.57
[ 3] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 6
15K/16629(799) us 63.06
[SUM] 6.00-7.00 sec 5.00 MBytes 41.9 Mbits/sec 40/0 17
[ 4] 7.00-8.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
22K/13986(519) us 131
[ 1] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
16K/12679(377) us 93.04
[ 3] 7.00-8.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
14K/12971(367) us 70.74
[ 2] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 6
15K/14740(779) us 80.03
[SUM] 7.00-8.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 19
[root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
------------------------------------------------------------
Server listening on TCP port 5001 with pid 233615
Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
------------------------------------------------------------
[ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42480
(trip-times) (sock=4) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11636) on 2023-01-06 12:17:56 (PST)
[ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42502
(trip-times) (sock=5) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11898) on 2023-01-06 12:17:56 (PST)
[ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42484
(trip-times) (sock=6) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11938) on 2023-01-06 12:17:56 (PST)
[ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42488
(trip-times) (sock=7) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11919) on 2023-01-06 12:17:56 (PST)
[ ID] Interval Transfer Bandwidth Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
...
[ 2] 4.00-5.00 sec 1.06 MBytes 8.86 Mbits/sec
129.819/90.391/186.075/31.346 ms (9/123080) 154 KByte 8.532803
467=461:6:0:0:0:0:0:0
[ 3] 4.00-5.00 sec 1.52 MBytes 12.8 Mbits/sec
103.552/82.653/169.274/28.382 ms (12/132854) 149 KByte 15.40
646=643:1:2:0:0:0:0:0
[ 4] 4.00-5.00 sec 1.39 MBytes 11.6 Mbits/sec
107.503/66.843/143.038/24.269 ms (11/132294) 149 KByte 13.54
619=617:1:1:0:0:0:0:0
[ 1] 4.00-5.00 sec 988 KBytes 8.10 Mbits/sec
141.389/119.961/178.785/18.812 ms (7/144593) 170 KByte 7.158641
409=404:5:0:0:0:0:0:0
[SUM] 4.00-5.00 sec 4.93 MBytes 41.4 Mbits/sec
2141=2125:13:3:0:0:0:0:0
[ 4] 5.00-6.00 sec 1.29 MBytes 10.8 Mbits/sec
118.943/86.253/176.128/31.248 ms (10/135098) 164 KByte 11.36
511=506:2:3:0:0:0:0:0
[ 2] 5.00-6.00 sec 1.09 MBytes 9.17 Mbits/sec
139.821/102.418/218.875/40.422 ms (9/127424) 148 KByte 8.202049
487=484:2:1:0:0:0:0:0
[ 3] 5.00-6.00 sec 1.51 MBytes 12.6 Mbits/sec
102.146/77.085/140.893/18.441 ms (13/121520) 151 KByte 15.47
640=636:1:3:0:0:0:0:0
[ 1] 5.00-6.00 sec 981 KBytes 8.04 Mbits/sec
161.901/105.582/219.931/36.260 ms (8/125614) 134 KByte 6.206944
415=413:2:0:0:0:0:0:0
[SUM] 5.00-6.00 sec 4.85 MBytes 40.7 Mbits/sec
2053=2039:7:7:0:0:0:0:0
[ 4] 6.00-7.00 sec 1.74 MBytes 14.6 Mbits/sec
88.846/74.297/101.859/7.118 ms (14/130526) 156 KByte 20.57
711=707:3:1:0:0:0:0:0
[ 1] 6.00-7.00 sec 1.22 MBytes 10.2 Mbits/sec
120.639/100.257/157.567/21.770 ms (10/127568) 157 KByte 10.57
494=488:5:1:0:0:0:0:0
[ 2] 6.00-7.00 sec 1015 KBytes 8.32 Mbits/sec
144.632/124.368/171.349/16.597 ms (8/129958) 143 KByte 7.188321
408=403:5:0:0:0:0:0:0
[ 3] 6.00-7.00 sec 1.02 MBytes 8.60 Mbits/sec
143.516/102.322/173.001/24.089 ms (8/134302) 146 KByte 7.486359
484=480:4:0:0:0:0:0:0
[SUM] 6.00-7.00 sec 4.98 MBytes 41.7 Mbits/sec
2097=2078:17:2:0:0:0:0:0
[ 4] 7.00-8.00 sec 1.77 MBytes 14.9 Mbits/sec
85.406/65.797/103.418/12.609 ms (14/132595) 153 KByte 21.74
692=687:2:3:0:0:0:0:0
[ 2] 7.00-8.00 sec 957 KBytes 7.84 Mbits/sec
153.936/131.452/191.464/19.361 ms (7/140042) 160 KByte 6.368199
429=425:4:0:0:0:0:0:0
[ 1] 7.00-8.00 sec 1.13 MBytes 9.44 Mbits/sec
131.146/109.737/166.774/22.035 ms (9/131124) 146 KByte 8.998528
520=516:4:0:0:0:0:0:0
[ 3] 7.00-8.00 sec 1.13 MBytes 9.51 Mbits/sec
126.512/88.404/220.175/42.237 ms (9/132089) 172 KByte 9.396784
527=524:1:2:0:0:0:0:0
[SUM] 7.00-8.00 sec 4.96 MBytes 41.6 Mbits/sec
2168=2152:11:5:0:0:0:0:0
With source-side rate limiting to 9 Mbit/s per stream.
[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
--trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N -b 9m
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001 with pid 108884 (4
flows)
Write buffer size: 131072 Byte
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local *.*.*.85%enp4s0 port 46448 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=3) (qack)
(icwnd/mss/irtt=14/1448/10666) (ct=10.70 ms) on 2023-01-06 12:27:45
(PST)
[ 3] local *.*.*.85%enp4s0 port 46460 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=6) (qack)
(icwnd/mss/irtt=14/1448/16499) (ct=16.54 ms) on 2023-01-06 12:27:45
(PST)
[ 2] local *.*.*.85%enp4s0 port 46454 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=4) (qack)
(icwnd/mss/irtt=14/1448/16580) (ct=16.86 ms) on 2023-01-06 12:27:45
(PST)
[ 4] local *.*.*.85%enp4s0 port 46458 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=5) (qack)
(icwnd/mss/irtt=14/1448/16802) (ct=16.83 ms) on 2023-01-06 12:27:45
(PST)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
Cwnd/RTT(var) NetPwr
...
[ 2] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/88055(12329) us 11.91
[ 4] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
132K/74867(11755) us 14.01
[ 1] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/89101(13134) us 11.77
[ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
131K/91451(11938) us 11.47
[SUM] 4.00-5.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
[ 2] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/85135(14580) us 13.86
[ 4] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
132K/85124(15654) us 13.86
[ 1] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/91336(11335) us 12.92
[ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
131K/89185(13499) us 13.23
[SUM] 5.00-6.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
[ 2] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/85687(13489) us 13.77
[ 4] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
132K/82803(13001) us 14.25
[ 1] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/86869(15186) us 13.58
[ 3] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
131K/91447(12515) us 12.90
[SUM] 6.00-7.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
[ 2] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/81814(13168) us 12.82
[ 4] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
132K/89008(13283) us 11.78
[ 1] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/89494(12151) us 11.72
[ 3] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
131K/91083(12797) us 11.51
[SUM] 7.00-8.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
[root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
------------------------------------------------------------
Server listening on TCP port 5001 with pid 233981
Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
------------------------------------------------------------
[ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46448
(trip-times) (sock=4) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11987) on 2023-01-06 12:27:45 (PST)
[ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46454
(trip-times) (sock=5) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11132) on 2023-01-06 12:27:45 (PST)
[ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46460
(trip-times) (sock=6) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11097) on 2023-01-06 12:27:45 (PST)
[ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46458
(trip-times) (sock=7) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/17823) on 2023-01-06 12:27:45 (PST)
[ ID] Interval Transfer Bandwidth Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
[ 4] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
93.383/90.103/95.661/2.232 ms (8/143028) 128 KByte 12.25
451=451:0:0:0:0:0:0:0
[ 3] 0.00-1.00 sec 1.08 MBytes 9.06 Mbits/sec
96.834/95.229/102.645/2.442 ms (8/141580) 131 KByte 11.70
472=472:0:0:0:0:0:0:0
[ 1] 0.00-1.00 sec 1.10 MBytes 9.19 Mbits/sec
95.183/92.623/97.579/1.431 ms (8/143571) 131 KByte 12.07
495=495:0:0:0:0:0:0:0
[ 2] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
89.317/84.865/94.906/3.674 ms (8/143028) 122 KByte 12.81
489=489:0:0:0:0:0:0:0
[ ID] Interval Transfer Bandwidth Reads=Dist
[SUM] 0.00-1.00 sec 4.36 MBytes 36.6 Mbits/sec
1907=1907:0:0:0:0:0:0:0
[ 4] 1.00-2.00 sec 1.07 MBytes 8.95 Mbits/sec
92.649/89.987/95.036/1.828 ms (9/124314) 96.5 KByte 12.08
492=492:0:0:0:0:0:0:0
[ 3] 1.00-2.00 sec 1.06 MBytes 8.93 Mbits/sec
96.305/95.647/96.794/0.432 ms (9/123992) 100 KByte 11.59
480=480:0:0:0:0:0:0:0
[ 1] 1.00-2.00 sec 1.06 MBytes 8.89 Mbits/sec
92.578/90.866/94.145/1.371 ms (9/123510) 95.8 KByte 12.01
513=513:0:0:0:0:0:0:0
[ 2] 1.00-2.00 sec 1.07 MBytes 8.96 Mbits/sec
90.767/87.984/94.352/1.944 ms (9/124475) 94.7 KByte 12.34
489=489:0:0:0:0:0:0:0
[SUM] 1.00-2.00 sec 4.26 MBytes 35.7 Mbits/sec
1974=1974:0:0:0:0:0:0:0
[ 4] 2.00-3.00 sec 1.09 MBytes 9.13 Mbits/sec
93.977/91.795/96.561/1.693 ms (8/142656) 112 KByte 12.14
497=497:0:0:0:0:0:0:0
[ 3] 2.00-3.00 sec 1.08 MBytes 9.04 Mbits/sec
96.544/95.815/97.798/0.693 ms (8/141208) 114 KByte 11.70
503=503:0:0:0:0:0:0:0
[ 1] 2.00-3.00 sec 1.07 MBytes 9.01 Mbits/sec
93.970/91.193/96.325/1.796 ms (8/140846) 111 KByte 11.99
509=509:0:0:0:0:0:0:0
[ 2] 2.00-3.00 sec 1.08 MBytes 9.10 Mbits/sec
92.843/90.216/96.355/2.040 ms (8/142113) 111 KByte 12.25
509=509:0:0:0:0:0:0:0
[SUM] 2.00-3.00 sec 4.32 MBytes 36.3 Mbits/sec
2018=2018:0:0:0:0:0:0:0
[ 4] 3.00-4.00 sec 1.06 MBytes 8.86 Mbits/sec
93.222/89.063/96.104/2.346 ms (9/123027) 96.1 KByte 11.88
487=487:0:0:0:0:0:0:0
[ 3] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
96.277/95.051/97.230/0.767 ms (9/124636) 101 KByte 11.65
489=489:0:0:0:0:0:0:0
[ 1] 3.00-4.00 sec 1.08 MBytes 9.02 Mbits/sec
93.899/88.732/96.972/2.737 ms (9/125280) 98.6 KByte 12.01
493=493:0:0:0:0:0:0:0
[ 2] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
92.490/89.862/95.265/1.796 ms (9/124636) 96.6 KByte 12.13
493=493:0:0:0:0:0:0:0
[SUM] 3.00-4.00 sec 4.27 MBytes 35.8 Mbits/sec
1962=1962:0:0:0:0:0:0:0
[ 4] 4.00-5.00 sec 1.07 MBytes 9.00 Mbits/sec
92.431/81.888/96.221/4.524 ms (9/124958) 96.8 KByte 12.17
498=498:0:0:0:0:0:0:0
[ 1] 4.00-5.00 sec 1.07 MBytes 8.97 Mbits/sec
95.018/93.445/96.200/0.957 ms (9/124636) 99.3 KByte 11.81
490=490:0:0:0:0:0:0:0
[ 2] 4.00-5.00 sec 1.06 MBytes 8.90 Mbits/sec
93.874/86.485/95.672/2.810 ms (9/123671) 97.3 KByte 11.86
481=481:0:0:0:0:0:0:0
[ 3] 4.00-5.00 sec 1.08 MBytes 9.09 Mbits/sec
95.737/93.881/97.197/0.972 ms (9/126245) 101 KByte 11.87
484=484:0:0:0:0:0:0:0
[SUM] 4.00-5.00 sec 4.29 MBytes 36.0 Mbits/sec
1953=1953:0:0:0:0:0:0:0
[ 4] 5.00-6.00 sec 1.09 MBytes 9.13 Mbits/sec
92.908/86.844/95.994/3.012 ms (8/142656) 111 KByte 12.28
467=467:0:0:0:0:0:0:0
[ 3] 5.00-6.00 sec 1.07 MBytes 8.94 Mbits/sec
96.593/95.343/97.660/0.876 ms (8/139760) 113 KByte 11.58
478=478:0:0:0:0:0:0:0
[ 1] 5.00-6.00 sec 1.08 MBytes 9.03 Mbits/sec
95.021/91.421/97.167/1.893 ms (8/141027) 112 KByte 11.87
491=491:0:0:0:0:0:0:0
[ 2] 5.00-6.00 sec 1.08 MBytes 9.06 Mbits/sec
92.162/82.720/97.692/5.060 ms (8/141570) 109 KByte 12.29
488=488:0:0:0:0:0:0:0
[SUM] 5.00-6.00 sec 4.31 MBytes 36.2 Mbits/sec
1924=1924:0:0:0:0:0:0:0
[ 4] 6.00-7.00 sec 1.04 MBytes 8.70 Mbits/sec
92.793/85.343/96.967/3.552 ms (9/120775) 93.9 KByte 11.71
485=485:0:0:0:0:0:0:0
[ 2] 6.00-7.00 sec 1.05 MBytes 8.79 Mbits/sec
91.679/84.479/96.760/3.975 ms (9/122062) 93.8 KByte 11.98
472=472:0:0:0:0:0:0:0
[ 3] 6.00-7.00 sec 1.06 MBytes 8.88 Mbits/sec
96.982/95.933/98.371/0.680 ms (9/123349) 100 KByte 11.45
477=477:0:0:0:0:0:0:0
[ 1] 6.00-7.00 sec 1.05 MBytes 8.80 Mbits/sec
94.342/91.660/96.025/1.660 ms (9/122223) 96.7 KByte 11.66
494=494:0:0:0:0:0:0:0
[SUM] 6.00-7.00 sec 4.19 MBytes 35.2 Mbits/sec
1928=1928:0:0:0:0:0:0:0
[ 4] 7.00-8.00 sec 1.10 MBytes 9.25 Mbits/sec
92.515/88.182/96.351/2.538 ms (8/144466) 112 KByte 12.49
510=510:0:0:0:0:0:0:0
[ 3] 7.00-8.00 sec 1.09 MBytes 9.13 Mbits/sec
96.580/95.737/98.977/1.098 ms (8/142656) 115 KByte 11.82
480=480:0:0:0:0:0:0:0
[ 1] 7.00-8.00 sec 1.10 MBytes 9.21 Mbits/sec
95.269/91.719/97.514/2.126 ms (8/143923) 115 KByte 12.09
515=515:0:0:0:0:0:0:0
[ 2] 7.00-8.00 sec 1.11 MBytes 9.29 Mbits/sec
90.073/84.700/96.176/4.324 ms (8/145190) 110 KByte 12.90
508=508:0:0:0:0:0:0:0
[SUM] 7.00-8.00 sec 4.40 MBytes 36.9 Mbits/sec
2013=2013:0:0:0:0:0:0:0
Bob
>> -----Original Message-----
>
>> From: LibreQoS <libreqos-bounces@lists.bufferbloat.net> On Behalf Of
> Dave Taht
>
>> via LibreQoS
>
>> Sent: Wednesday, January 4, 2023 12:26 PM
>
>> Subject: [LibreQoS] the grinch meets cloudflare's christmas present
>
>>
>
>> Please try the new, the shiny, the really wonderful test here:
>
>>
> https://urldefense.com/v3/__https://speed.cloudflare.com/__;!!BhdT!iZcFJ8WVU9S
> [1]
>
>>
> 9zz5t456oxkfObrC5Xb9j5AG8UO3DqD5x4GAJkawZr0iGwEUtF0_09U8mCDnAkrJ9QEMHGbCMKVw$
> [1]
>
>>
>
>> I would really appreciate some independent verification of
>
>> measurements using this tool. In my brief experiments it appears -
> as
>
>> all the commercial tools to date - to dramatically understate the
>
>> bufferbloat, on my LTE, (and my starlink terminal is out being
>
>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>
> [acm]
>
> Hi Dave, I made some time to test "cloudflare's christmas present"
> yesterday.
>
> I'm on DOCSIS 3.1 service with 1Gbps Down. The Upstream has a "turbo"
> mode with 40-50Mbps for the first ~3 sec, then steady-state about
> 23Mbps.
>
> When I saw the ~620Mbps Downstream measurement, I was ready to
> complain that even the IP-Layer Capacity was grossly underestimated.
> In addition, the Latency measurements seem very low (as you asserted),
> although the cloud server was “nearby”.
>
> However, I found that Ookla and the ISP-provided measurement were also
> reporting ~600Mbps! So the cloudflare Downstream capacity (or
> throughput?) measurement was consistent with others. Our UDPST server
> was unreachable, otherwise I would have added that measurement, too.
>
> The Upstream measurement graph seems to illustrate the “turbo”
> mode, with the dip after attaining 44.5Mbps.
>
> UDPST saturates the uplink and we measure the full 250ms of the
> Upstream buffer. Cloudflare’s latency measurements don’t even come
> close.
>
> Al
>
>
>
> Links:
> ------
> [1]
> https://urldefense.com/v3/__https:/speed.cloudflare.com/__;!!BhdT!iZcFJ8WVU9S9zz5t456oxkfObrC5Xb9j5AG8UO3DqD5x4GAJkawZr0iGwEUtF0_09U8mCDnAkrJ9QEMHGbCMKVw$
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Rpm] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-06 20:38 ` [Cake] [Rpm] " rjmcmahon
@ 2023-01-06 20:47 ` rjmcmahon
2023-01-06 23:29 ` [Cake] [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare'schristmas present Dick Roy
1 sibling, 0 replies; 26+ messages in thread
From: rjmcmahon @ 2023-01-06 20:47 UTC (permalink / raw)
To: MORTON JR., AL
Cc: Dave Taht, bloat, libreqos, Cake List, Dave Taht via Starlink,
Rpm, IETF IPPM WG
For responsiveness, the bounceback times look reasonable even with upstream
competition. There are a bunch more TCP retries, though.
[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
--trip-times -i 1 --bounceback -t 3
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001 with pid 111022 (1
flows)
Write buffer size: 100 Byte
Bursting: 100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs &
tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local *.*.*.86%enp7s0 port 36976 connected with *.*.*.123 port
5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times)
(sock=3) (icwnd/mss/irtt=14/1448/9862) (ct=9.90 ms) on 2023-01-06
12:42:18 (PST)
[ ID] Interval Transfer Bandwidth BB
cnt=avg/min/max/stdev Rtry Cwnd/RTT RPS
[ 1] 0.00-1.00 sec 1.95 KBytes 16.0 Kbits/sec
10=12.195/9.298/16.457/2.679 ms 0 14K/11327 us 82 rps
[ 1] 1.00-2.00 sec 1.95 KBytes 16.0 Kbits/sec
10=12.613/9.271/15.489/2.788 ms 0 14K/12165 us 79 rps
[ 1] 2.00-3.00 sec 1.95 KBytes 16.0 Kbits/sec
10=13.390/9.376/15.986/2.520 ms 0 14K/13164 us 75 rps
[ 1] 0.00-3.03 sec 5.86 KBytes 15.8 Kbits/sec
30=12.733/9.271/16.457/2.620 ms 0 14K/15138 us 79 rps
[ 1] 0.00-3.03 sec OWD Delays (ms) Cnt=30 To=7.937/4.634/11.327/2.457
From=4.778/4.401/5.350/0.258 Asymmetry=3.166/0.097/6.311/2.318 79 rps
[ 1] 0.00-3.03 sec BB8(f)-PDF:
bin(w=100us):cnt(30)=93:2,94:3,95:2,97:1,100:1,102:1,105:1,114:2,142:1,143:1,144:2,145:3,146:1,147:1,148:1,151:1,152:1,154:1,155:1,156:1,160:1,165:1
(5.00/95.00/99.7%=93/160/165,Outliers=0,obl/obu=0/0)
[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
--trip-times -i 1 --bounceback -t 3 --bounceback-congest=up,4
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001 with pid 111069 (1
flows)
Write buffer size: 100 Byte
Bursting: 100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs &
tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 2] local *.*.*.85%enp4s0 port 38342 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=3) (qack)
(icwnd/mss/irtt=14/1448/10613) (ct=10.66 ms) on 2023-01-06 12:42:36
(PST)
[ 1] local *.*.*.85%enp4s0 port 38360 connected with *.*.*.123 port
5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times)
(sock=4) (icwnd/mss/irtt=14/1448/14901) (ct=14.96 ms) on 2023-01-06
12:42:36 (PST)
[ 3] local *.*.*.85%enp4s0 port 38386 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=7) (qack)
(icwnd/mss/irtt=14/1448/15295) (ct=15.31 ms) on 2023-01-06 12:42:36
(PST)
[ 4] local *.*.*.85%enp4s0 port 38348 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=5) (qack)
(icwnd/mss/irtt=14/1448/14901) (ct=14.95 ms) on 2023-01-06 12:42:36
(PST)
[ 5] local *.*.*.85%enp4s0 port 38372 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=6) (qack)
(icwnd/mss/irtt=14/1448/15371) (ct=15.42 ms) on 2023-01-06 12:42:36
(PST)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
Cwnd/RTT(var) NetPwr
[ 3] 0.00-1.00 sec 1.29 MBytes 10.8 Mbits/sec 13502/0 115
28K/22594(904) us 59.76
[ 4] 0.00-1.00 sec 1.63 MBytes 13.6 Mbits/sec 17048/0 140
42K/22728(568) us 75.01
[ ID] Interval Transfer Bandwidth BB
cnt=avg/min/max/stdev Rtry Cwnd/RTT RPS
[ 1] 0.00-1.00 sec 1.95 KBytes 16.0 Kbits/sec
10=76.140/17.224/123.195/43.168 ms 0 14K/68136 us 13 rps
[ 5] 0.00-1.00 sec 1.04 MBytes 8.72 Mbits/sec 10893/0 82
25K/23400(644) us 46.55
[SUM] 0.00-1.00 sec 3.95 MBytes 33.2 Mbits/sec 41443/0 337
[ 2] 0.00-1.00 sec 1.10 MBytes 9.25 Mbits/sec 11566/0 77
22K/23557(432) us 49.10
[ 3] 1.00-2.00 sec 1.24 MBytes 10.4 Mbits/sec 13037/0 20
28K/14427(503) us 90.37
[ 4] 1.00-2.00 sec 1.43 MBytes 12.0 Mbits/sec 14954/0 31
12K/13348(407) us 112
[ 1] 1.00-2.00 sec 1.95 KBytes 16.0 Kbits/sec
10=14.581/10.801/20.356/3.599 ms 0 14K/27791 us 69 rps
[ 5] 1.00-2.00 sec 1.26 MBytes 10.6 Mbits/sec 13191/0 16
12K/14749(675) us 89.44
[SUM] 1.00-2.00 sec 3.93 MBytes 32.9 Mbits/sec 41182/0 67
[ 2] 1.00-2.00 sec 1000 KBytes 8.19 Mbits/sec 10237/0 13
19K/14467(1068) us 70.76
[ 3] 2.00-3.00 sec 1.33 MBytes 11.2 Mbits/sec 13994/0 4
24K/20749(495) us 67.44
[ 4] 2.00-3.00 sec 1.20 MBytes 10.1 Mbits/sec 12615/0 3
31K/20877(718) us 60.43
[ 1] 2.00-3.00 sec 1.95 KBytes 16.0 Kbits/sec
10=11.298/9.407/14.245/1.330 ms 0 14K/15474 us 89 rps
[ 5] 2.00-3.00 sec 1.08 MBytes 9.03 Mbits/sec 11284/0 3
28K/21031(430) us 53.65
[SUM] 2.00-3.00 sec 3.61 MBytes 30.3 Mbits/sec 37893/0 10
[ 2] 2.00-3.00 sec 1.29 MBytes 10.8 Mbits/sec 13492/0 3
29K/20409(688) us 66.11
[ 3] 0.00-3.03 sec 3.87 MBytes 10.7 Mbits/sec 40534/0 139
25K/20645(557) us 64.85
[ 5] 0.00-3.03 sec 3.37 MBytes 9.35 Mbits/sec 35369/0 101
29K/20489(668) us 57.02
[ 4] 0.00-3.03 sec 4.26 MBytes 11.8 Mbits/sec 44618/0 174
32K/21240(961) us 69.40
[ 2] 0.00-3.03 sec 3.37 MBytes 9.31 Mbits/sec 35296/0 94
19K/21504(948) us 54.13
[ 1] 0.00-3.14 sec 7.81 KBytes 20.4 Kbits/sec
40=28.332/5.611/123.195/34.940 ms 0 14K/14000 us 35 rps
[ 1] 0.00-3.14 sec OWD Delays (ms) Cnt=40
To=23.730/1.110/118.744/34.957 From=4.567/4.356/5.171/0.141
Asymmetry=19.332/0.189/114.294/34.869 35 rps
[ 1] 0.00-3.14 sec BB8(f)-PDF:
bin(w=100us):cnt(40)=57:1,94:2,95:2,96:2,98:1,101:1,106:1,109:2,111:1,112:2,113:1,115:1,118:1,119:2,143:2,145:1,146:1,152:1,158:1,173:1,176:1,194:1,195:1,204:1,205:1,274:1,554:1,790:1,925:1,1125:1,1126:1,1225:1,1232:1
(5.00/95.00/99.7%=94/1225/1232,Outliers=0,obl/obu=0/0)
[SUM] 0.00-3.11 sec 11.5 MBytes 31.0 Mbits/sec 120521/0 414
[ CT] final connect times (min/avg/max/stdev) =
10.661/14.261/15.423/2023.369 ms (tot/err) = 5/0
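As a quick sanity check on the rps column above: the per-interval RPS is
roughly the reciprocal of the mean bounceback time. A tiny, hypothetical
Python snippet (not part of iperf 2), using means copied from the runs above:

# hypothetical check: rps ~= 1000 / mean bounceback time (ms)
means_ms = [12.195, 76.140, 11.298]  # taken from the bounceback runs above
for m in means_ms:
    print(f"mean {m:7.3f} ms -> ~{1000.0 / m:5.1f} rps")
# 12.195 ms -> ~82 rps and 76.140 ms -> ~13 rps, matching the reports above.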
> Some thoughts are not to use UDP for testing here. Also, these speed
> tests have little to no information for network engineers about what's
> going on. Iperf 2 may better assist network engineers but then I'm
> biased ;)
>
> Running iperf 2 https://sourceforge.net/projects/iperf2/ with
> --trip-times. Though the sampling and central limit theorem averaging
> is hiding the real distributions (use --histograms to get those)
>
> Below are 4 parallel TCP streams from my home to one of my servers in
> the cloud. First where TCP is limited per CCA. Second is source side
> write rate limiting. Things to note:
>
> o) connect times for both at 10-15 ms
> o) multiple TCP retries on a few rites - one case is 4 with 5 writes.
> Source side pacing eliminates retries
> o) Fairness with CCA isn't great but quite good with source side write
> pacing
> o) Queue depth with CCA is about 150 Kbytes about 100K byte with
> source side pacing
> o) min write to read is about 80 ms for both
> o) max is 220 ms vs 97 ms
> o) stdev for CCA write/read is 30 ms vs 3 ms
> o) TCP RTT is 20ms w/CCA and 90 ms with ssp - seems odd here as
> TCP_QUICACK and TCP_NODELAY are both enabled.
>
> [ CT] final connect times (min/avg/max/stdev) =
> 10.326/13.522/14.986/2150.329 ms (tot/err) = 4/0
> [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
> --trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N
> ------------------------------------------------------------
> Client connecting to (**hidden**), TCP port 5001 with pid 107678 (4
> flows)
> Write buffer size: 131072 Byte
> TOS set to 0x0 and nodelay (Nagle off)
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
> Event based writes (pending queue watermark at 16384 bytes)
> ------------------------------------------------------------
> [ 1] local *.*.*.85%enp4s0 port 42480 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=3) (qack)
> (icwnd/mss/irtt=14/1448/10534) (ct=10.63 ms) on 2023-01-06 12:17:56
> (PST)
> [ 4] local *.*.*.85%enp4s0 port 42488 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=5) (qack)
> (icwnd/mss/irtt=14/1448/14023) (ct=14.08 ms) on 2023-01-06 12:17:56
> (PST)
> [ 3] local *.*.*.85%enp4s0 port 42502 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=6) (qack)
> (icwnd/mss/irtt=14/1448/14642) (ct=14.70 ms) on 2023-01-06 12:17:56
> (PST)
> [ 2] local *.*.*.85%enp4s0 port 42484 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=4) (qack)
> (icwnd/mss/irtt=14/1448/14728) (ct=14.79 ms) on 2023-01-06 12:17:56
> (PST)
> [ ID] Interval Transfer Bandwidth Write/Err Rtry
> Cwnd/RTT(var) NetPwr
> ...
> [ 4] 4.00-5.00 sec 1.38 MBytes 11.5 Mbits/sec 11/0 3
> 29K/21088(1142) us 68.37
> [ 2] 4.00-5.00 sec 1.62 MBytes 13.6 Mbits/sec 13/0 2
> 31K/19284(612) us 88.36
> [ 1] 4.00-5.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
> 16K/18996(658) us 48.30
> [ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 5
> 18K/18133(208) us 57.83
> [SUM] 4.00-5.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 15
> [ 4] 5.00-6.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 4
> 29K/14717(489) us 89.06
> [ 1] 5.00-6.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 4
> 16K/15874(408) us 66.06
> [ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
> 16K/15826(382) us 74.54
> [ 2] 5.00-6.00 sec 1.50 MBytes 12.6 Mbits/sec 12/0 6
> 9K/14878(557) us 106
> [SUM] 5.00-6.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 18
> [ 4] 6.00-7.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
> 25K/15472(496) us 119
> [ 2] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 2
> 26K/16417(427) us 63.87
> [ 1] 6.00-7.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 5
> 16K/16268(679) us 80.57
> [ 3] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 6
> 15K/16629(799) us 63.06
> [SUM] 6.00-7.00 sec 5.00 MBytes 41.9 Mbits/sec 40/0 17
> [ 4] 7.00-8.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
> 22K/13986(519) us 131
> [ 1] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
> 16K/12679(377) us 93.04
> [ 3] 7.00-8.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
> 14K/12971(367) us 70.74
> [ 2] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 6
> 15K/14740(779) us 80.03
> [SUM] 7.00-8.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 19
>
> [root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
> ------------------------------------------------------------
> Server listening on TCP port 5001 with pid 233615
> Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
> ------------------------------------------------------------
> [ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42480 (trip-times) (sock=4) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11636) on 2023-01-06 12:17:56 (PST)
> [ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42502 (trip-times) (sock=5) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11898) on 2023-01-06 12:17:56 (PST)
> [ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42484 (trip-times) (sock=6) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11938) on 2023-01-06 12:17:56 (PST)
> [ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42488 (trip-times) (sock=7) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11919) on 2023-01-06 12:17:56 (PST)
> [ ID] Interval Transfer Bandwidth Burst Latency
> avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
> ...
> [ 2] 4.00-5.00 sec 1.06 MBytes 8.86 Mbits/sec
> 129.819/90.391/186.075/31.346 ms (9/123080) 154 KByte 8.532803
> 467=461:6:0:0:0:0:0:0
> [ 3] 4.00-5.00 sec 1.52 MBytes 12.8 Mbits/sec
> 103.552/82.653/169.274/28.382 ms (12/132854) 149 KByte 15.40
> 646=643:1:2:0:0:0:0:0
> [ 4] 4.00-5.00 sec 1.39 MBytes 11.6 Mbits/sec
> 107.503/66.843/143.038/24.269 ms (11/132294) 149 KByte 13.54
> 619=617:1:1:0:0:0:0:0
> [ 1] 4.00-5.00 sec 988 KBytes 8.10 Mbits/sec
> 141.389/119.961/178.785/18.812 ms (7/144593) 170 KByte 7.158641
> 409=404:5:0:0:0:0:0:0
> [SUM] 4.00-5.00 sec 4.93 MBytes 41.4 Mbits/sec
> 2141=2125:13:3:0:0:0:0:0
> [ 4] 5.00-6.00 sec 1.29 MBytes 10.8 Mbits/sec
> 118.943/86.253/176.128/31.248 ms (10/135098) 164 KByte 11.36
> 511=506:2:3:0:0:0:0:0
> [ 2] 5.00-6.00 sec 1.09 MBytes 9.17 Mbits/sec
> 139.821/102.418/218.875/40.422 ms (9/127424) 148 KByte 8.202049
> 487=484:2:1:0:0:0:0:0
> [ 3] 5.00-6.00 sec 1.51 MBytes 12.6 Mbits/sec
> 102.146/77.085/140.893/18.441 ms (13/121520) 151 KByte 15.47
> 640=636:1:3:0:0:0:0:0
> [ 1] 5.00-6.00 sec 981 KBytes 8.04 Mbits/sec
> 161.901/105.582/219.931/36.260 ms (8/125614) 134 KByte 6.206944
> 415=413:2:0:0:0:0:0:0
> [SUM] 5.00-6.00 sec 4.85 MBytes 40.7 Mbits/sec
> 2053=2039:7:7:0:0:0:0:0
> [ 4] 6.00-7.00 sec 1.74 MBytes 14.6 Mbits/sec
> 88.846/74.297/101.859/7.118 ms (14/130526) 156 KByte 20.57
> 711=707:3:1:0:0:0:0:0
> [ 1] 6.00-7.00 sec 1.22 MBytes 10.2 Mbits/sec
> 120.639/100.257/157.567/21.770 ms (10/127568) 157 KByte 10.57
> 494=488:5:1:0:0:0:0:0
> [ 2] 6.00-7.00 sec 1015 KBytes 8.32 Mbits/sec
> 144.632/124.368/171.349/16.597 ms (8/129958) 143 KByte 7.188321
> 408=403:5:0:0:0:0:0:0
> [ 3] 6.00-7.00 sec 1.02 MBytes 8.60 Mbits/sec
> 143.516/102.322/173.001/24.089 ms (8/134302) 146 KByte 7.486359
> 484=480:4:0:0:0:0:0:0
> [SUM] 6.00-7.00 sec 4.98 MBytes 41.7 Mbits/sec
> 2097=2078:17:2:0:0:0:0:0
> [ 4] 7.00-8.00 sec 1.77 MBytes 14.9 Mbits/sec
> 85.406/65.797/103.418/12.609 ms (14/132595) 153 KByte 21.74
> 692=687:2:3:0:0:0:0:0
> [ 2] 7.00-8.00 sec 957 KBytes 7.84 Mbits/sec
> 153.936/131.452/191.464/19.361 ms (7/140042) 160 KByte 6.368199
> 429=425:4:0:0:0:0:0:0
> [ 1] 7.00-8.00 sec 1.13 MBytes 9.44 Mbits/sec
> 131.146/109.737/166.774/22.035 ms (9/131124) 146 KByte 8.998528
> 520=516:4:0:0:0:0:0:0
> [ 3] 7.00-8.00 sec 1.13 MBytes 9.51 Mbits/sec
> 126.512/88.404/220.175/42.237 ms (9/132089) 172 KByte 9.396784
> 527=524:1:2:0:0:0:0:0
> [SUM] 7.00-8.00 sec 4.96 MBytes 41.6 Mbits/sec
> 2168=2152:11:5:0:0:0:0:0
>
> With source side rate limiting to 9 mb/s per stream.
>
> [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
> --trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N -b 9m
> ------------------------------------------------------------
> Client connecting to (**hidden**), TCP port 5001 with pid 108884 (4
> flows)
> Write buffer size: 131072 Byte
> TOS set to 0x0 and nodelay (Nagle off)
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
> Event based writes (pending queue watermark at 16384 bytes)
> ------------------------------------------------------------
> [ 1] local *.*.*.85%enp4s0 port 46448 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=3) (qack)
> (icwnd/mss/irtt=14/1448/10666) (ct=10.70 ms) on 2023-01-06 12:27:45
> (PST)
> [ 3] local *.*.*.85%enp4s0 port 46460 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=6) (qack)
> (icwnd/mss/irtt=14/1448/16499) (ct=16.54 ms) on 2023-01-06 12:27:45
> (PST)
> [ 2] local *.*.*.85%enp4s0 port 46454 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=4) (qack)
> (icwnd/mss/irtt=14/1448/16580) (ct=16.86 ms) on 2023-01-06 12:27:45
> (PST)
> [ 4] local *.*.*.85%enp4s0 port 46458 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=5) (qack)
> (icwnd/mss/irtt=14/1448/16802) (ct=16.83 ms) on 2023-01-06 12:27:45
> (PST)
> [ ID] Interval Transfer Bandwidth Write/Err Rtry
> Cwnd/RTT(var) NetPwr
> ...
> [ 2] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 134K/88055(12329) us 11.91
> [ 4] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 132K/74867(11755) us 14.01
> [ 1] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 134K/89101(13134) us 11.77
> [ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 131K/91451(11938) us 11.47
> [SUM] 4.00-5.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
> [ 2] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 134K/85135(14580) us 13.86
> [ 4] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 132K/85124(15654) us 13.86
> [ 1] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 134K/91336(11335) us 12.92
> [ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 131K/89185(13499) us 13.23
> [SUM] 5.00-6.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
> [ 2] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 134K/85687(13489) us 13.77
> [ 4] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 132K/82803(13001) us 14.25
> [ 1] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 134K/86869(15186) us 13.58
> [ 3] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 131K/91447(12515) us 12.90
> [SUM] 6.00-7.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
> [ 2] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 134K/81814(13168) us 12.82
> [ 4] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 132K/89008(13283) us 11.78
> [ 1] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 134K/89494(12151) us 11.72
> [ 3] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 131K/91083(12797) us 11.51
> [SUM] 7.00-8.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
>
> [root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
> ------------------------------------------------------------
> Server listening on TCP port 5001 with pid 233981
> Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
> ------------------------------------------------------------
> [ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46448 (trip-times) (sock=4) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11987) on 2023-01-06 12:27:45 (PST)
> [ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46454 (trip-times) (sock=5) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11132) on 2023-01-06 12:27:45 (PST)
> [ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46460 (trip-times) (sock=6) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11097) on 2023-01-06 12:27:45 (PST)
> [ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46458 (trip-times) (sock=7) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/17823) on 2023-01-06 12:27:45 (PST)
> [ ID] Interval Transfer Bandwidth Burst Latency
> avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
> [ 4] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
> 93.383/90.103/95.661/2.232 ms (8/143028) 128 KByte 12.25
> 451=451:0:0:0:0:0:0:0
> [ 3] 0.00-1.00 sec 1.08 MBytes 9.06 Mbits/sec
> 96.834/95.229/102.645/2.442 ms (8/141580) 131 KByte 11.70
> 472=472:0:0:0:0:0:0:0
> [ 1] 0.00-1.00 sec 1.10 MBytes 9.19 Mbits/sec
> 95.183/92.623/97.579/1.431 ms (8/143571) 131 KByte 12.07
> 495=495:0:0:0:0:0:0:0
> [ 2] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
> 89.317/84.865/94.906/3.674 ms (8/143028) 122 KByte 12.81
> 489=489:0:0:0:0:0:0:0
> [ ID] Interval Transfer Bandwidth Reads=Dist
> [SUM] 0.00-1.00 sec 4.36 MBytes 36.6 Mbits/sec
> 1907=1907:0:0:0:0:0:0:0
> [ 4] 1.00-2.00 sec 1.07 MBytes 8.95 Mbits/sec
> 92.649/89.987/95.036/1.828 ms (9/124314) 96.5 KByte 12.08
> 492=492:0:0:0:0:0:0:0
> [ 3] 1.00-2.00 sec 1.06 MBytes 8.93 Mbits/sec
> 96.305/95.647/96.794/0.432 ms (9/123992) 100 KByte 11.59
> 480=480:0:0:0:0:0:0:0
> [ 1] 1.00-2.00 sec 1.06 MBytes 8.89 Mbits/sec
> 92.578/90.866/94.145/1.371 ms (9/123510) 95.8 KByte 12.01
> 513=513:0:0:0:0:0:0:0
> [ 2] 1.00-2.00 sec 1.07 MBytes 8.96 Mbits/sec
> 90.767/87.984/94.352/1.944 ms (9/124475) 94.7 KByte 12.34
> 489=489:0:0:0:0:0:0:0
> [SUM] 1.00-2.00 sec 4.26 MBytes 35.7 Mbits/sec
> 1974=1974:0:0:0:0:0:0:0
> [ 4] 2.00-3.00 sec 1.09 MBytes 9.13 Mbits/sec
> 93.977/91.795/96.561/1.693 ms (8/142656) 112 KByte 12.14
> 497=497:0:0:0:0:0:0:0
> [ 3] 2.00-3.00 sec 1.08 MBytes 9.04 Mbits/sec
> 96.544/95.815/97.798/0.693 ms (8/141208) 114 KByte 11.70
> 503=503:0:0:0:0:0:0:0
> [ 1] 2.00-3.00 sec 1.07 MBytes 9.01 Mbits/sec
> 93.970/91.193/96.325/1.796 ms (8/140846) 111 KByte 11.99
> 509=509:0:0:0:0:0:0:0
> [ 2] 2.00-3.00 sec 1.08 MBytes 9.10 Mbits/sec
> 92.843/90.216/96.355/2.040 ms (8/142113) 111 KByte 12.25
> 509=509:0:0:0:0:0:0:0
> [SUM] 2.00-3.00 sec 4.32 MBytes 36.3 Mbits/sec
> 2018=2018:0:0:0:0:0:0:0
> [ 4] 3.00-4.00 sec 1.06 MBytes 8.86 Mbits/sec
> 93.222/89.063/96.104/2.346 ms (9/123027) 96.1 KByte 11.88
> 487=487:0:0:0:0:0:0:0
> [ 3] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
> 96.277/95.051/97.230/0.767 ms (9/124636) 101 KByte 11.65
> 489=489:0:0:0:0:0:0:0
> [ 1] 3.00-4.00 sec 1.08 MBytes 9.02 Mbits/sec
> 93.899/88.732/96.972/2.737 ms (9/125280) 98.6 KByte 12.01
> 493=493:0:0:0:0:0:0:0
> [ 2] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
> 92.490/89.862/95.265/1.796 ms (9/124636) 96.6 KByte 12.13
> 493=493:0:0:0:0:0:0:0
> [SUM] 3.00-4.00 sec 4.27 MBytes 35.8 Mbits/sec
> 1962=1962:0:0:0:0:0:0:0
> [ 4] 4.00-5.00 sec 1.07 MBytes 9.00 Mbits/sec
> 92.431/81.888/96.221/4.524 ms (9/124958) 96.8 KByte 12.17
> 498=498:0:0:0:0:0:0:0
> [ 1] 4.00-5.00 sec 1.07 MBytes 8.97 Mbits/sec
> 95.018/93.445/96.200/0.957 ms (9/124636) 99.3 KByte 11.81
> 490=490:0:0:0:0:0:0:0
> [ 2] 4.00-5.00 sec 1.06 MBytes 8.90 Mbits/sec
> 93.874/86.485/95.672/2.810 ms (9/123671) 97.3 KByte 11.86
> 481=481:0:0:0:0:0:0:0
> [ 3] 4.00-5.00 sec 1.08 MBytes 9.09 Mbits/sec
> 95.737/93.881/97.197/0.972 ms (9/126245) 101 KByte 11.87
> 484=484:0:0:0:0:0:0:0
> [SUM] 4.00-5.00 sec 4.29 MBytes 36.0 Mbits/sec
> 1953=1953:0:0:0:0:0:0:0
> [ 4] 5.00-6.00 sec 1.09 MBytes 9.13 Mbits/sec
> 92.908/86.844/95.994/3.012 ms (8/142656) 111 KByte 12.28
> 467=467:0:0:0:0:0:0:0
> [ 3] 5.00-6.00 sec 1.07 MBytes 8.94 Mbits/sec
> 96.593/95.343/97.660/0.876 ms (8/139760) 113 KByte 11.58
> 478=478:0:0:0:0:0:0:0
> [ 1] 5.00-6.00 sec 1.08 MBytes 9.03 Mbits/sec
> 95.021/91.421/97.167/1.893 ms (8/141027) 112 KByte 11.87
> 491=491:0:0:0:0:0:0:0
> [ 2] 5.00-6.00 sec 1.08 MBytes 9.06 Mbits/sec
> 92.162/82.720/97.692/5.060 ms (8/141570) 109 KByte 12.29
> 488=488:0:0:0:0:0:0:0
> [SUM] 5.00-6.00 sec 4.31 MBytes 36.2 Mbits/sec
> 1924=1924:0:0:0:0:0:0:0
> [ 4] 6.00-7.00 sec 1.04 MBytes 8.70 Mbits/sec
> 92.793/85.343/96.967/3.552 ms (9/120775) 93.9 KByte 11.71
> 485=485:0:0:0:0:0:0:0
> [ 2] 6.00-7.00 sec 1.05 MBytes 8.79 Mbits/sec
> 91.679/84.479/96.760/3.975 ms (9/122062) 93.8 KByte 11.98
> 472=472:0:0:0:0:0:0:0
> [ 3] 6.00-7.00 sec 1.06 MBytes 8.88 Mbits/sec
> 96.982/95.933/98.371/0.680 ms (9/123349) 100 KByte 11.45
> 477=477:0:0:0:0:0:0:0
> [ 1] 6.00-7.00 sec 1.05 MBytes 8.80 Mbits/sec
> 94.342/91.660/96.025/1.660 ms (9/122223) 96.7 KByte 11.66
> 494=494:0:0:0:0:0:0:0
> [SUM] 6.00-7.00 sec 4.19 MBytes 35.2 Mbits/sec
> 1928=1928:0:0:0:0:0:0:0
> [ 4] 7.00-8.00 sec 1.10 MBytes 9.25 Mbits/sec
> 92.515/88.182/96.351/2.538 ms (8/144466) 112 KByte 12.49
> 510=510:0:0:0:0:0:0:0
> [ 3] 7.00-8.00 sec 1.09 MBytes 9.13 Mbits/sec
> 96.580/95.737/98.977/1.098 ms (8/142656) 115 KByte 11.82
> 480=480:0:0:0:0:0:0:0
> [ 1] 7.00-8.00 sec 1.10 MBytes 9.21 Mbits/sec
> 95.269/91.719/97.514/2.126 ms (8/143923) 115 KByte 12.09
> 515=515:0:0:0:0:0:0:0
> [ 2] 7.00-8.00 sec 1.11 MBytes 9.29 Mbits/sec
> 90.073/84.700/96.176/4.324 ms (8/145190) 110 KByte 12.90
> 508=508:0:0:0:0:0:0:0
> [SUM] 7.00-8.00 sec 4.40 MBytes 36.9 Mbits/sec
> 2013=2013:0:0:0:0:0:0:0
>
> Bob
>
>>> -----Original Message-----
>>
>>> From: LibreQoS <libreqos-bounces@lists.bufferbloat.net> On Behalf Of
>> Dave Taht
>>
>>> via LibreQoS
>>
>>> Sent: Wednesday, January 4, 2023 12:26 PM
>>
>>> Subject: [LibreQoS] the grinch meets cloudflare's christmas present
>>
>>>
>>
>>> Please try the new, the shiny, the really wonderful test here:
>>
>>>
>> https://urldefense.com/v3/__https://speed.cloudflare.com/__;!!BhdT!iZcFJ8WVU9S
>> [1]
>>
>>>
>> 9zz5t456oxkfObrC5Xb9j5AG8UO3DqD5x4GAJkawZr0iGwEUtF0_09U8mCDnAkrJ9QEMHGbCMKVw$
>> [1]
>>
>>>
>>
>>> I would really appreciate some independent verification of
>>
>>> measurements using this tool. In my brief experiments it appears -
>> as
>>
>>> all the commercial tools to date - to dramatically understate the
>>
>>> bufferbloat, on my LTE, (and my starlink terminal is out being
>>
>>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>>
>> [acm]
>>
>> Hi Dave, I made some time to test "cloudflare's christmas present"
>> yesterday.
>>
>> I'm on DOCSIS 3.1 service with 1Gbps Down. The Upstream has a "turbo"
>> mode with 40-50Mbps for the first ~3 sec, then steady-state about
>> 23Mbps.
>>
>> When I saw the ~620Mbps Downstream measurement, I was ready to
>> complain that even the IP-Layer Capacity was grossly underestimated.
>> In addition, the Latency measurements seem very low (as you asserted),
>> although the cloud server was “nearby”.
>>
>> However, I found that Ookla and the ISP-provided measurement were also
>> reporting ~600Mbps! So the cloudflare Downstream capacity (or
>> throughput?) measurement was consistent with others. Our UDPST server
>> was unreachable, otherwise I would have added that measurement, too.
>>
>> The Upstream measurement graph seems to illustrate the “turbo”
>> mode, with the dip after attaining 44.5Mbps.
>>
>> UDPST saturates the uplink and we measure the full 250ms of the
>> Upstream buffer. Cloudflare’s latency measurements don’t even come
>> close.
>>
>> Al
>>
>>
>>
>> Links:
>> ------
>> [1]
>> https://urldefense.com/v3/__https:/speed.cloudflare.com/__;!!BhdT!iZcFJ8WVU9S9zz5t456oxkfObrC5Xb9j5AG8UO3DqD5x4GAJkawZr0iGwEUtF0_09U8mCDnAkrJ9QEMHGbCMKVw$
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare'schristmas present
2023-01-06 20:38 ` [Cake] [Rpm] " rjmcmahon
2023-01-06 20:47 ` rjmcmahon
@ 2023-01-06 23:29 ` Dick Roy
2023-01-06 23:45 ` rjmcmahon
1 sibling, 1 reply; 26+ messages in thread
From: Dick Roy @ 2023-01-06 23:29 UTC (permalink / raw)
To: 'rjmcmahon', 'MORTON JR., AL'
Cc: 'IETF IPPM WG', 'libreqos', 'Cake List',
'Rpm', 'bloat'
[-- Attachment #1: Type: text/plain, Size: 21262 bytes --]
See below …
-----Original Message-----
From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
rjmcmahon via Starlink
Sent: Friday, January 6, 2023 12:39 PM
To: MORTON JR., AL
Cc: Dave Taht via Starlink; IETF IPPM WG; libreqos; Cake List; Rpm; bloat
Subject: Re: [Starlink] [Rpm] [LibreQoS] the grinch meets
cloudflare'schristmas present
One thought is not to use UDP for testing here. Also, these speed
tests give network engineers little to no information about what's
going on. Iperf 2 may assist network engineers better, but then I'm
biased ;)
Running iperf 2 https://sourceforge.net/projects/iperf2/ with
--trip-times. Though the sampling and central limit theorem averaging is
hiding the real distributions (use --histograms to get those)
[RR] FWIW (IMNBWM :-)). If the output/final histograms indicate the PDF is
NOT Gaussian, then any application of the CLT is
inappropriate/contra-indicated! The CLT is a proof, under certain regularity
conditions/assumptions on the underlying/constituent PDFs, that the resulting
PDF (after all the necessary convolutions are performed to get to the PDF of
the output) will asymptotically approach a Gaussian, with only a mean and a
std. dev. left to specify.
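To make that point concrete, here is a minimal, hypothetical sketch with
made-up numbers (not iperf 2 code): before trusting a mean/stdev summary of
latency, test whether the samples are even plausibly Gaussian.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# made-up bimodal latencies: mostly ~10 ms, plus a bloated-queue mode near 120 ms
samples_ms = np.concatenate([rng.normal(10, 1, 900), rng.normal(120, 10, 100)])

res = stats.normaltest(samples_ms)  # D'Agostino-Pearson normality test
print(f"mean={samples_ms.mean():.1f} ms  stdev={samples_ms.std():.1f} ms  "
      f"normality p={res.pvalue:.3g}")
# a tiny p-value says mean/stdev is a poor summary of this distribution;
# report the histogram (or percentiles) instead.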
Below are 4 parallel TCP streams from my home to one of my servers in
the cloud. First where TCP is limited per CCA. Second is source side
write rate limiting. Things to note:
o) connect times for both at 10-15 ms
o) multiple TCP retries on a few writes - one case is 4 with 5 writes.
Source side pacing eliminates retries
o) Fairness with CCA isn't great but quite good with source side write
pacing
o) Queue depth with CCA is about 150 KBytes vs. about 100 KBytes with source
side pacing
o) min write to read is about 80 ms for both
o) max is 220 ms vs 97 ms
o) stdev for CCA write/read is 30 ms vs 3 ms
o) TCP RTT is 20ms w/CCA and 90 ms with ssp - seems odd here as
TCP_QUICACK and TCP_NODELAY are both enabled.
[ CT] final connect times (min/avg/max/stdev) =
10.326/13.522/14.986/2150.329 ms (tot/err) = 4/0
[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
--trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001 with pid 107678 (4
flows)
Write buffer size: 131072 Byte
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local *.*.*.85%enp4s0 port 42480 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=3) (qack)
(icwnd/mss/irtt=14/1448/10534) (ct=10.63 ms) on 2023-01-06 12:17:56
(PST)
[ 4] local *.*.*.85%enp4s0 port 42488 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=5) (qack)
(icwnd/mss/irtt=14/1448/14023) (ct=14.08 ms) on 2023-01-06 12:17:56
(PST)
[ 3] local *.*.*.85%enp4s0 port 42502 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=6) (qack)
(icwnd/mss/irtt=14/1448/14642) (ct=14.70 ms) on 2023-01-06 12:17:56
(PST)
[ 2] local *.*.*.85%enp4s0 port 42484 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=4) (qack)
(icwnd/mss/irtt=14/1448/14728) (ct=14.79 ms) on 2023-01-06 12:17:56
(PST)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
Cwnd/RTT(var) NetPwr
...
[ 4] 4.00-5.00 sec 1.38 MBytes 11.5 Mbits/sec 11/0 3
29K/21088(1142) us 68.37
[ 2] 4.00-5.00 sec 1.62 MBytes 13.6 Mbits/sec 13/0 2
31K/19284(612) us 88.36
[ 1] 4.00-5.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
16K/18996(658) us 48.30
[ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 5
18K/18133(208) us 57.83
[SUM] 4.00-5.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 15
[ 4] 5.00-6.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 4
29K/14717(489) us 89.06
[ 1] 5.00-6.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 4
16K/15874(408) us 66.06
[ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
16K/15826(382) us 74.54
[ 2] 5.00-6.00 sec 1.50 MBytes 12.6 Mbits/sec 12/0 6
9K/14878(557) us 106
[SUM] 5.00-6.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 18
[ 4] 6.00-7.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
25K/15472(496) us 119
[ 2] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 2
26K/16417(427) us 63.87
[ 1] 6.00-7.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 5
16K/16268(679) us 80.57
[ 3] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 6
15K/16629(799) us 63.06
[SUM] 6.00-7.00 sec 5.00 MBytes 41.9 Mbits/sec 40/0 17
[ 4] 7.00-8.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
22K/13986(519) us 131
[ 1] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
16K/12679(377) us 93.04
[ 3] 7.00-8.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
14K/12971(367) us 70.74
[ 2] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 6
15K/14740(779) us 80.03
[SUM] 7.00-8.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 19
[root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
------------------------------------------------------------
Server listening on TCP port 5001 with pid 233615
Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
------------------------------------------------------------
[ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42480
(trip-times) (sock=4) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11636) on 2023-01-06 12:17:56 (PST)
[ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42502
(trip-times) (sock=5) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11898) on 2023-01-06 12:17:56 (PST)
[ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42484
(trip-times) (sock=6) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11938) on 2023-01-06 12:17:56 (PST)
[ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42488
(trip-times) (sock=7) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11919) on 2023-01-06 12:17:56 (PST)
[ ID] Interval Transfer Bandwidth Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
...
[ 2] 4.00-5.00 sec 1.06 MBytes 8.86 Mbits/sec
129.819/90.391/186.075/31.346 ms (9/123080) 154 KByte 8.532803
467=461:6:0:0:0:0:0:0
[ 3] 4.00-5.00 sec 1.52 MBytes 12.8 Mbits/sec
103.552/82.653/169.274/28.382 ms (12/132854) 149 KByte 15.40
646=643:1:2:0:0:0:0:0
[ 4] 4.00-5.00 sec 1.39 MBytes 11.6 Mbits/sec
107.503/66.843/143.038/24.269 ms (11/132294) 149 KByte 13.54
619=617:1:1:0:0:0:0:0
[ 1] 4.00-5.00 sec 988 KBytes 8.10 Mbits/sec
141.389/119.961/178.785/18.812 ms (7/144593) 170 KByte 7.158641
409=404:5:0:0:0:0:0:0
[SUM] 4.00-5.00 sec 4.93 MBytes 41.4 Mbits/sec
2141=2125:13:3:0:0:0:0:0
[ 4] 5.00-6.00 sec 1.29 MBytes 10.8 Mbits/sec
118.943/86.253/176.128/31.248 ms (10/135098) 164 KByte 11.36
511=506:2:3:0:0:0:0:0
[ 2] 5.00-6.00 sec 1.09 MBytes 9.17 Mbits/sec
139.821/102.418/218.875/40.422 ms (9/127424) 148 KByte 8.202049
487=484:2:1:0:0:0:0:0
[ 3] 5.00-6.00 sec 1.51 MBytes 12.6 Mbits/sec
102.146/77.085/140.893/18.441 ms (13/121520) 151 KByte 15.47
640=636:1:3:0:0:0:0:0
[ 1] 5.00-6.00 sec 981 KBytes 8.04 Mbits/sec
161.901/105.582/219.931/36.260 ms (8/125614) 134 KByte 6.206944
415=413:2:0:0:0:0:0:0
[SUM] 5.00-6.00 sec 4.85 MBytes 40.7 Mbits/sec
2053=2039:7:7:0:0:0:0:0
[ 4] 6.00-7.00 sec 1.74 MBytes 14.6 Mbits/sec
88.846/74.297/101.859/7.118 ms (14/130526) 156 KByte 20.57
711=707:3:1:0:0:0:0:0
[ 1] 6.00-7.00 sec 1.22 MBytes 10.2 Mbits/sec
120.639/100.257/157.567/21.770 ms (10/127568) 157 KByte 10.57
494=488:5:1:0:0:0:0:0
[ 2] 6.00-7.00 sec 1015 KBytes 8.32 Mbits/sec
144.632/124.368/171.349/16.597 ms (8/129958) 143 KByte 7.188321
408=403:5:0:0:0:0:0:0
[ 3] 6.00-7.00 sec 1.02 MBytes 8.60 Mbits/sec
143.516/102.322/173.001/24.089 ms (8/134302) 146 KByte 7.486359
484=480:4:0:0:0:0:0:0
[SUM] 6.00-7.00 sec 4.98 MBytes 41.7 Mbits/sec
2097=2078:17:2:0:0:0:0:0
[ 4] 7.00-8.00 sec 1.77 MBytes 14.9 Mbits/sec
85.406/65.797/103.418/12.609 ms (14/132595) 153 KByte 21.74
692=687:2:3:0:0:0:0:0
[ 2] 7.00-8.00 sec 957 KBytes 7.84 Mbits/sec
153.936/131.452/191.464/19.361 ms (7/140042) 160 KByte 6.368199
429=425:4:0:0:0:0:0:0
[ 1] 7.00-8.00 sec 1.13 MBytes 9.44 Mbits/sec
131.146/109.737/166.774/22.035 ms (9/131124) 146 KByte 8.998528
520=516:4:0:0:0:0:0:0
[ 3] 7.00-8.00 sec 1.13 MBytes 9.51 Mbits/sec
126.512/88.404/220.175/42.237 ms (9/132089) 172 KByte 9.396784
527=524:1:2:0:0:0:0:0
[SUM] 7.00-8.00 sec 4.96 MBytes 41.6 Mbits/sec
2168=2152:11:5:0:0:0:0:0
With source side rate limiting to 9 mb/s per stream.
[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
--trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N -b 9m
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001 with pid 108884 (4
flows)
Write buffer size: 131072 Byte
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local *.*.*.85%enp4s0 port 46448 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=3) (qack)
(icwnd/mss/irtt=14/1448/10666) (ct=10.70 ms) on 2023-01-06 12:27:45
(PST)
[ 3] local *.*.*.85%enp4s0 port 46460 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=6) (qack)
(icwnd/mss/irtt=14/1448/16499) (ct=16.54 ms) on 2023-01-06 12:27:45
(PST)
[ 2] local *.*.*.85%enp4s0 port 46454 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=4) (qack)
(icwnd/mss/irtt=14/1448/16580) (ct=16.86 ms) on 2023-01-06 12:27:45
(PST)
[ 4] local *.*.*.85%enp4s0 port 46458 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=5) (qack)
(icwnd/mss/irtt=14/1448/16802) (ct=16.83 ms) on 2023-01-06 12:27:45
(PST)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
Cwnd/RTT(var) NetPwr
...
[ 2] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/88055(12329) us 11.91
[ 4] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
132K/74867(11755) us 14.01
[ 1] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/89101(13134) us 11.77
[ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
131K/91451(11938) us 11.47
[SUM] 4.00-5.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
[ 2] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/85135(14580) us 13.86
[ 4] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
132K/85124(15654) us 13.86
[ 1] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/91336(11335) us 12.92
[ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
131K/89185(13499) us 13.23
[SUM] 5.00-6.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
[ 2] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/85687(13489) us 13.77
[ 4] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
132K/82803(13001) us 14.25
[ 1] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/86869(15186) us 13.58
[ 3] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
131K/91447(12515) us 12.90
[SUM] 6.00-7.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
[ 2] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/81814(13168) us 12.82
[ 4] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
132K/89008(13283) us 11.78
[ 1] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/89494(12151) us 11.72
[ 3] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
131K/91083(12797) us 11.51
[SUM] 7.00-8.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
[root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
------------------------------------------------------------
Server listening on TCP port 5001 with pid 233981
Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
------------------------------------------------------------
[ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46448
(trip-times) (sock=4) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11987) on 2023-01-06 12:27:45 (PST)
[ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46454
(trip-times) (sock=5) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11132) on 2023-01-06 12:27:45 (PST)
[ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46460
(trip-times) (sock=6) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11097) on 2023-01-06 12:27:45 (PST)
[ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46458
(trip-times) (sock=7) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/17823) on 2023-01-06 12:27:45 (PST)
[ ID] Interval Transfer Bandwidth Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
[ 4] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
93.383/90.103/95.661/2.232 ms (8/143028) 128 KByte 12.25
451=451:0:0:0:0:0:0:0
[ 3] 0.00-1.00 sec 1.08 MBytes 9.06 Mbits/sec
96.834/95.229/102.645/2.442 ms (8/141580) 131 KByte 11.70
472=472:0:0:0:0:0:0:0
[ 1] 0.00-1.00 sec 1.10 MBytes 9.19 Mbits/sec
95.183/92.623/97.579/1.431 ms (8/143571) 131 KByte 12.07
495=495:0:0:0:0:0:0:0
[ 2] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
89.317/84.865/94.906/3.674 ms (8/143028) 122 KByte 12.81
489=489:0:0:0:0:0:0:0
[ ID] Interval Transfer Bandwidth Reads=Dist
[SUM] 0.00-1.00 sec 4.36 MBytes 36.6 Mbits/sec
1907=1907:0:0:0:0:0:0:0
[ 4] 1.00-2.00 sec 1.07 MBytes 8.95 Mbits/sec
92.649/89.987/95.036/1.828 ms (9/124314) 96.5 KByte 12.08
492=492:0:0:0:0:0:0:0
[ 3] 1.00-2.00 sec 1.06 MBytes 8.93 Mbits/sec
96.305/95.647/96.794/0.432 ms (9/123992) 100 KByte 11.59
480=480:0:0:0:0:0:0:0
[ 1] 1.00-2.00 sec 1.06 MBytes 8.89 Mbits/sec
92.578/90.866/94.145/1.371 ms (9/123510) 95.8 KByte 12.01
513=513:0:0:0:0:0:0:0
[ 2] 1.00-2.00 sec 1.07 MBytes 8.96 Mbits/sec
90.767/87.984/94.352/1.944 ms (9/124475) 94.7 KByte 12.34
489=489:0:0:0:0:0:0:0
[SUM] 1.00-2.00 sec 4.26 MBytes 35.7 Mbits/sec
1974=1974:0:0:0:0:0:0:0
[ 4] 2.00-3.00 sec 1.09 MBytes 9.13 Mbits/sec
93.977/91.795/96.561/1.693 ms (8/142656) 112 KByte 12.14
497=497:0:0:0:0:0:0:0
[ 3] 2.00-3.00 sec 1.08 MBytes 9.04 Mbits/sec
96.544/95.815/97.798/0.693 ms (8/141208) 114 KByte 11.70
503=503:0:0:0:0:0:0:0
[ 1] 2.00-3.00 sec 1.07 MBytes 9.01 Mbits/sec
93.970/91.193/96.325/1.796 ms (8/140846) 111 KByte 11.99
509=509:0:0:0:0:0:0:0
[ 2] 2.00-3.00 sec 1.08 MBytes 9.10 Mbits/sec
92.843/90.216/96.355/2.040 ms (8/142113) 111 KByte 12.25
509=509:0:0:0:0:0:0:0
[SUM] 2.00-3.00 sec 4.32 MBytes 36.3 Mbits/sec
2018=2018:0:0:0:0:0:0:0
[ 4] 3.00-4.00 sec 1.06 MBytes 8.86 Mbits/sec
93.222/89.063/96.104/2.346 ms (9/123027) 96.1 KByte 11.88
487=487:0:0:0:0:0:0:0
[ 3] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
96.277/95.051/97.230/0.767 ms (9/124636) 101 KByte 11.65
489=489:0:0:0:0:0:0:0
[ 1] 3.00-4.00 sec 1.08 MBytes 9.02 Mbits/sec
93.899/88.732/96.972/2.737 ms (9/125280) 98.6 KByte 12.01
493=493:0:0:0:0:0:0:0
[ 2] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
92.490/89.862/95.265/1.796 ms (9/124636) 96.6 KByte 12.13
493=493:0:0:0:0:0:0:0
[SUM] 3.00-4.00 sec 4.27 MBytes 35.8 Mbits/sec
1962=1962:0:0:0:0:0:0:0
[ 4] 4.00-5.00 sec 1.07 MBytes 9.00 Mbits/sec
92.431/81.888/96.221/4.524 ms (9/124958) 96.8 KByte 12.17
498=498:0:0:0:0:0:0:0
[ 1] 4.00-5.00 sec 1.07 MBytes 8.97 Mbits/sec
95.018/93.445/96.200/0.957 ms (9/124636) 99.3 KByte 11.81
490=490:0:0:0:0:0:0:0
[ 2] 4.00-5.00 sec 1.06 MBytes 8.90 Mbits/sec
93.874/86.485/95.672/2.810 ms (9/123671) 97.3 KByte 11.86
481=481:0:0:0:0:0:0:0
[ 3] 4.00-5.00 sec 1.08 MBytes 9.09 Mbits/sec
95.737/93.881/97.197/0.972 ms (9/126245) 101 KByte 11.87
484=484:0:0:0:0:0:0:0
[SUM] 4.00-5.00 sec 4.29 MBytes 36.0 Mbits/sec
1953=1953:0:0:0:0:0:0:0
[ 4] 5.00-6.00 sec 1.09 MBytes 9.13 Mbits/sec
92.908/86.844/95.994/3.012 ms (8/142656) 111 KByte 12.28
467=467:0:0:0:0:0:0:0
[ 3] 5.00-6.00 sec 1.07 MBytes 8.94 Mbits/sec
96.593/95.343/97.660/0.876 ms (8/139760) 113 KByte 11.58
478=478:0:0:0:0:0:0:0
[ 1] 5.00-6.00 sec 1.08 MBytes 9.03 Mbits/sec
95.021/91.421/97.167/1.893 ms (8/141027) 112 KByte 11.87
491=491:0:0:0:0:0:0:0
[ 2] 5.00-6.00 sec 1.08 MBytes 9.06 Mbits/sec
92.162/82.720/97.692/5.060 ms (8/141570) 109 KByte 12.29
488=488:0:0:0:0:0:0:0
[SUM] 5.00-6.00 sec 4.31 MBytes 36.2 Mbits/sec
1924=1924:0:0:0:0:0:0:0
[ 4] 6.00-7.00 sec 1.04 MBytes 8.70 Mbits/sec
92.793/85.343/96.967/3.552 ms (9/120775) 93.9 KByte 11.71
485=485:0:0:0:0:0:0:0
[ 2] 6.00-7.00 sec 1.05 MBytes 8.79 Mbits/sec
91.679/84.479/96.760/3.975 ms (9/122062) 93.8 KByte 11.98
472=472:0:0:0:0:0:0:0
[ 3] 6.00-7.00 sec 1.06 MBytes 8.88 Mbits/sec
96.982/95.933/98.371/0.680 ms (9/123349) 100 KByte 11.45
477=477:0:0:0:0:0:0:0
[ 1] 6.00-7.00 sec 1.05 MBytes 8.80 Mbits/sec
94.342/91.660/96.025/1.660 ms (9/122223) 96.7 KByte 11.66
494=494:0:0:0:0:0:0:0
[SUM] 6.00-7.00 sec 4.19 MBytes 35.2 Mbits/sec
1928=1928:0:0:0:0:0:0:0
[ 4] 7.00-8.00 sec 1.10 MBytes 9.25 Mbits/sec
92.515/88.182/96.351/2.538 ms (8/144466) 112 KByte 12.49
510=510:0:0:0:0:0:0:0
[ 3] 7.00-8.00 sec 1.09 MBytes 9.13 Mbits/sec
96.580/95.737/98.977/1.098 ms (8/142656) 115 KByte 11.82
480=480:0:0:0:0:0:0:0
[ 1] 7.00-8.00 sec 1.10 MBytes 9.21 Mbits/sec
95.269/91.719/97.514/2.126 ms (8/143923) 115 KByte 12.09
515=515:0:0:0:0:0:0:0
[ 2] 7.00-8.00 sec 1.11 MBytes 9.29 Mbits/sec
90.073/84.700/96.176/4.324 ms (8/145190) 110 KByte 12.90
508=508:0:0:0:0:0:0:0
[SUM] 7.00-8.00 sec 4.40 MBytes 36.9 Mbits/sec
2013=2013:0:0:0:0:0:0:0
Bob
>> -----Original Message-----
>
>> From: LibreQoS <libreqos-bounces@lists.bufferbloat.net> On Behalf Of
> Dave Taht
>
>> via LibreQoS
>
>> Sent: Wednesday, January 4, 2023 12:26 PM
>
>> Subject: [LibreQoS] the grinch meets cloudflare's christmas present
>
>>
>
>> Please try the new, the shiny, the really wonderful test here:
>
>>
>
https://urldefense.com/v3/__https://speed.cloudflare.com/__;!!BhdT!iZcFJ8WVU
9S
> [1]
>
>>
>
9zz5t456oxkfObrC5Xb9j5AG8UO3DqD5x4GAJkawZr0iGwEUtF0_09U8mCDnAkrJ9QEMHGbCMKVw
$
> [1]
>
>>
>
>> I would really appreciate some independent verification of
>
>> measurements using this tool. In my brief experiments it appears -
> as
>
>> all the commercial tools to date - to dramatically understate the
>
>> bufferbloat, on my LTE, (and my starlink terminal is out being
>
>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>
> [acm]
>
> Hi Dave, I made some time to test "cloudflare's christmas present"
> yesterday.
>
> I'm on DOCSIS 3.1 service with 1Gbps Down. The Upstream has a "turbo"
> mode with 40-50Mbps for the first ~3 sec, then steady-state about
> 23Mbps.
>
> When I saw the ~620Mbps Downstream measurement, I was ready to
> complain that even the IP-Layer Capacity was grossly underestimated.
> In addition, the Latency measurements seem very low (as you asserted),
> although the cloud server was "nearby".
>
> However, I found that Ookla and the ISP-provided measurement were also
> reporting ~600Mbps! So the cloudflare Downstream capacity (or
> throughput?) measurement was consistent with others. Our UDPST server
> was unreachable, otherwise I would have added that measurement, too.
>
> The Upstream measurement graph seems to illustrate the "turbo"
> mode, with the dip after attaining 44.5Mbps.
>
> UDPST saturates the uplink and we measure the full 250ms of the
> Upstream buffer. Cloudflare's latency measurements don't even come
> close.
>
> Al
>
>
>
> Links:
> ------
> [1]
>
https://urldefense.com/v3/__https:/speed.cloudflare.com/__;!!BhdT!iZcFJ8WVU9
S9zz5t456oxkfObrC5Xb9j5AG8UO3DqD5x4GAJkawZr0iGwEUtF0_09U8mCDnAkrJ9QEMHGbCMKV
w$
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #2: Type: text/html, Size: 82978 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare'schristmas present
2023-01-06 23:29 ` [Cake] [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare'schristmas present Dick Roy
@ 2023-01-06 23:45 ` rjmcmahon
2023-01-07 0:31 ` Dick Roy
0 siblings, 1 reply; 26+ messages in thread
From: rjmcmahon @ 2023-01-06 23:45 UTC (permalink / raw)
To: dickroy
Cc: 'MORTON JR., AL', 'IETF IPPM WG',
'libreqos', 'Cake List', 'Rpm',
'bloat'
Yeah, I'd prefer not to output CLT sample groups at all, but the
histograms aren't really human readable and users constantly ask for
them. I thought about providing a distance from the Gaussian as output
too, but so far few would understand it and nobody I found would act upon
it. The tool produces the full histograms, so no information is really
missing, except for maybe better time series analysis.
The open source flows Python code also released with iperf 2 does use
Kolmogorov-Smirnov distances & distance matrices to cluster when the
number of histograms is just too large. We've analyzed 1M runs to fault
isolate the "unexpected interruptions" or "bugs", and without statistical
support it is just not doable. This does require instrumentation of the
full path with mapping to a common clock domain (e.g. GPS), and not just
e2e stats. I find an e2e complaint by an end user about "poor speed" as
useful as telling a pharmacist I have a fever. Not much diagnostically
is going on. Take an aspirin.
https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/flows.py
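For anyone who hasn't opened flows.py, the sketch below (illustrative only,
with made-up samples, not the actual flows.py code) shows the kind of
Kolmogorov-Smirnov distance that feeds such a distance matrix: two runs with
similar means but very different latency distributions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
run_a = rng.normal(95, 2, 5000)                      # steady ~95 ms latencies
run_b = np.concatenate([rng.normal(90, 2, 4500),
                        rng.normal(190, 15, 500)])   # similar mean, long tail

res = stats.ks_2samp(run_a, run_b)  # KS distance between the two samples
print(f"means {run_a.mean():.1f} vs {run_b.mean():.1f} ms, "
      f"KS distance = {res.statistic:.3f}")
# pairwise KS distances over many runs form the matrix used for clustering.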
Bob
> See below …
>
> -----Original Message-----
> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On
> Behalf Of rjmcmahon via Starlink
> Sent: Friday, January 6, 2023 12:39 PM
> To: MORTON JR., AL
> Cc: Dave Taht via Starlink; IETF IPPM WG; libreqos; Cake List; Rpm;
> bloat
> Subject: Re: [Starlink] [Rpm] [LibreQoS] the grinch meets
> cloudflare'schristmas present
>
> Some thoughts are not to use UDP for testing here. Also, these speed
>
> tests have little to no information for network engineers about what's
>
>
> going on. Iperf 2 may better assist network engineers but then I'm
>
> biased ;)
>
> Running iperf 2 https://sourceforge.net/projects/iperf2/ with
>
> --trip-times. Though the sampling and central limit theorem averaging
> is
>
> hiding the real distributions (use --histograms to get those)
>
> [RR] FWIW (IMNBWM :-)). If the output/final histograms indicate the
> PDF is NOT Gaussian, then any application of the CLT is
> inappropriate/contra-indicated! The CLT is a proof, under certain
> regularity conditions/assumptions on the underlying/constituent PDFs, that
> the resulting PDF (after all the necessary convolutions are performed
> to get to the PDF of the output) will asymptotically approach a
> Gaussian, with only a mean and a std. dev. left to specify.
>
> Below are 4 parallel TCP streams from my home to one of my servers in
>
> the cloud. First where TCP is limited per CCA. Second is source side
>
> write rate limiting. Things to note:
>
> o) connect times for both at 10-15 ms
>
> o) multiple TCP retries on a few rites - one case is 4 with 5 writes.
>
> Source side pacing eliminates retries
>
> o) Fairness with CCA isn't great but quite good with source side write
>
>
> pacing
>
> o) Queue depth with CCA is about 150 Kbytes about 100K byte with
> source
>
> side pacing
>
> o) min write to read is about 80 ms for both
>
> o) max is 220 ms vs 97 ms
>
> o) stdev for CCA write/read is 30 ms vs 3 ms
>
> o) TCP RTT is 20ms w/CCA and 90 ms with ssp - seems odd here as
>
> TCP_QUICACK and TCP_NODELAY are both enabled.
>
> [ CT] final connect times (min/avg/max/stdev) =
>
> 10.326/13.522/14.986/2150.329 ms (tot/err) = 4/0
>
> [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
>
> --trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N
>
> ------------------------------------------------------------
>
> Client connecting to (**hidden**), TCP port 5001 with pid 107678 (4
>
> flows)
>
> Write buffer size: 131072 Byte
>
> TOS set to 0x0 and nodelay (Nagle off)
>
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
>
> Event based writes (pending queue watermark at 16384 bytes)
>
> ------------------------------------------------------------
>
> [ 1] local *.*.*.85%enp4s0 port 42480 connected with *.*.*.123 port
>
> 5001 (prefetch=16384) (trip-times) (sock=3) (qack)
>
> (icwnd/mss/irtt=14/1448/10534) (ct=10.63 ms) on 2023-01-06 12:17:56
>
> (PST)
>
> [ 4] local *.*.*.85%enp4s0 port 42488 connected with *.*.*.123 port
>
> 5001 (prefetch=16384) (trip-times) (sock=5) (qack)
>
> (icwnd/mss/irtt=14/1448/14023) (ct=14.08 ms) on 2023-01-06 12:17:56
>
> (PST)
>
> [ 3] local *.*.*.85%enp4s0 port 42502 connected with *.*.*.123 port
>
> 5001 (prefetch=16384) (trip-times) (sock=6) (qack)
>
> (icwnd/mss/irtt=14/1448/14642) (ct=14.70 ms) on 2023-01-06 12:17:56
>
> (PST)
>
> [ 2] local *.*.*.85%enp4s0 port 42484 connected with *.*.*.123 port
>
> 5001 (prefetch=16384) (trip-times) (sock=4) (qack)
>
> (icwnd/mss/irtt=14/1448/14728) (ct=14.79 ms) on 2023-01-06 12:17:56
>
> (PST)
>
> [ ID] Interval Transfer Bandwidth Write/Err Rtry
>
> Cwnd/RTT(var) NetPwr
>
> ...
>
> [ 4] 4.00-5.00 sec 1.38 MBytes 11.5 Mbits/sec 11/0 3
>
>
> 29K/21088(1142) us 68.37
>
> [ 2] 4.00-5.00 sec 1.62 MBytes 13.6 Mbits/sec 13/0 2
>
>
> 31K/19284(612) us 88.36
>
> [ 1] 4.00-5.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
>
> 16K/18996(658) us 48.30
>
> [ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 5
>
> 18K/18133(208) us 57.83
>
> [SUM] 4.00-5.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 15
>
> [ 4] 5.00-6.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 4
>
>
> 29K/14717(489) us 89.06
>
> [ 1] 5.00-6.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 4
>
> 16K/15874(408) us 66.06
>
> [ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
>
> 16K/15826(382) us 74.54
>
> [ 2] 5.00-6.00 sec 1.50 MBytes 12.6 Mbits/sec 12/0 6
>
>
> 9K/14878(557) us 106
>
> [SUM] 5.00-6.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 18
>
> [ 4] 6.00-7.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
>
>
> 25K/15472(496) us 119
>
> [ 2] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 2
>
> 26K/16417(427) us 63.87
>
> [ 1] 6.00-7.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 5
>
>
> 16K/16268(679) us 80.57
>
> [ 3] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 6
>
> 15K/16629(799) us 63.06
>
> [SUM] 6.00-7.00 sec 5.00 MBytes 41.9 Mbits/sec 40/0 17
>
> [ 4] 7.00-8.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
>
>
> 22K/13986(519) us 131
>
> [ 1] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
>
> 16K/12679(377) us 93.04
>
> [ 3] 7.00-8.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
>
> 14K/12971(367) us 70.74
>
> [ 2] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 6
>
> 15K/14740(779) us 80.03
>
> [SUM] 7.00-8.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 19
>
> [root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
>
> ------------------------------------------------------------
>
> Server listening on TCP port 5001 with pid 233615
>
> Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
>
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
>
> ------------------------------------------------------------
>
> [ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42480
>
> (trip-times) (sock=4) (peer 2.1.9-master) (qack)
>
> (icwnd/mss/irtt=14/1448/11636) on 2023-01-06 12:17:56 (PST)
>
> [ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42502
>
> (trip-times) (sock=5) (peer 2.1.9-master) (qack)
>
> (icwnd/mss/irtt=14/1448/11898) on 2023-01-06 12:17:56 (PST)
>
> [ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42484
>
> (trip-times) (sock=6) (peer 2.1.9-master) (qack)
>
> (icwnd/mss/irtt=14/1448/11938) on 2023-01-06 12:17:56 (PST)
>
> [ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42488
>
> (trip-times) (sock=7) (peer 2.1.9-master) (qack)
>
> (icwnd/mss/irtt=14/1448/11919) on 2023-01-06 12:17:56 (PST)
>
> [ ID] Interval Transfer Bandwidth Burst Latency
>
> avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
>
> ...
>
> [ 2] 4.00-5.00 sec 1.06 MBytes 8.86 Mbits/sec
>
> 129.819/90.391/186.075/31.346 ms (9/123080) 154 KByte 8.532803
>
> 467=461:6:0:0:0:0:0:0
>
> [ 3] 4.00-5.00 sec 1.52 MBytes 12.8 Mbits/sec
>
> 103.552/82.653/169.274/28.382 ms (12/132854) 149 KByte 15.40
>
> 646=643:1:2:0:0:0:0:0
>
> [ 4] 4.00-5.00 sec 1.39 MBytes 11.6 Mbits/sec
>
> 107.503/66.843/143.038/24.269 ms (11/132294) 149 KByte 13.54
>
> 619=617:1:1:0:0:0:0:0
>
> [ 1] 4.00-5.00 sec 988 KBytes 8.10 Mbits/sec
>
> 141.389/119.961/178.785/18.812 ms (7/144593) 170 KByte 7.158641
>
> 409=404:5:0:0:0:0:0:0
>
> [SUM] 4.00-5.00 sec 4.93 MBytes 41.4 Mbits/sec
>
> 2141=2125:13:3:0:0:0:0:0
>
> [ 4] 5.00-6.00 sec 1.29 MBytes 10.8 Mbits/sec
>
> 118.943/86.253/176.128/31.248 ms (10/135098) 164 KByte 11.36
>
> 511=506:2:3:0:0:0:0:0
>
> [ 2] 5.00-6.00 sec 1.09 MBytes 9.17 Mbits/sec
>
> 139.821/102.418/218.875/40.422 ms (9/127424) 148 KByte 8.202049
>
> 487=484:2:1:0:0:0:0:0
>
> [ 3] 5.00-6.00 sec 1.51 MBytes 12.6 Mbits/sec
>
> 102.146/77.085/140.893/18.441 ms (13/121520) 151 KByte 15.47
>
> 640=636:1:3:0:0:0:0:0
>
> [ 1] 5.00-6.00 sec 981 KBytes 8.04 Mbits/sec
>
> 161.901/105.582/219.931/36.260 ms (8/125614) 134 KByte 6.206944
>
> 415=413:2:0:0:0:0:0:0
>
> [SUM] 5.00-6.00 sec 4.85 MBytes 40.7 Mbits/sec
>
> 2053=2039:7:7:0:0:0:0:0
>
> [ 4] 6.00-7.00 sec 1.74 MBytes 14.6 Mbits/sec
>
> 88.846/74.297/101.859/7.118 ms (14/130526) 156 KByte 20.57
>
> 711=707:3:1:0:0:0:0:0
>
> [ 1] 6.00-7.00 sec 1.22 MBytes 10.2 Mbits/sec
>
> 120.639/100.257/157.567/21.770 ms (10/127568) 157 KByte 10.57
>
> 494=488:5:1:0:0:0:0:0
>
> [ 2] 6.00-7.00 sec 1015 KBytes 8.32 Mbits/sec
>
> 144.632/124.368/171.349/16.597 ms (8/129958) 143 KByte 7.188321
>
> 408=403:5:0:0:0:0:0:0
>
> [ 3] 6.00-7.00 sec 1.02 MBytes 8.60 Mbits/sec
>
> 143.516/102.322/173.001/24.089 ms (8/134302) 146 KByte 7.486359
>
> 484=480:4:0:0:0:0:0:0
>
> [SUM] 6.00-7.00 sec 4.98 MBytes 41.7 Mbits/sec
>
> 2097=2078:17:2:0:0:0:0:0
>
> [ 4] 7.00-8.00 sec 1.77 MBytes 14.9 Mbits/sec
>
> 85.406/65.797/103.418/12.609 ms (14/132595) 153 KByte 21.74
>
> 692=687:2:3:0:0:0:0:0
>
> [ 2] 7.00-8.00 sec 957 KBytes 7.84 Mbits/sec
>
> 153.936/131.452/191.464/19.361 ms (7/140042) 160 KByte 6.368199
>
> 429=425:4:0:0:0:0:0:0
>
> [ 1] 7.00-8.00 sec 1.13 MBytes 9.44 Mbits/sec
>
> 131.146/109.737/166.774/22.035 ms (9/131124) 146 KByte 8.998528
>
> 520=516:4:0:0:0:0:0:0
>
> [ 3] 7.00-8.00 sec 1.13 MBytes 9.51 Mbits/sec
>
> 126.512/88.404/220.175/42.237 ms (9/132089) 172 KByte 9.396784
>
> 527=524:1:2:0:0:0:0:0
>
> [SUM] 7.00-8.00 sec 4.96 MBytes 41.6 Mbits/sec
>
> 2168=2152:11:5:0:0:0:0:0
>
> With source-side rate limiting to 9 Mb/s per stream.
>
> [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
>
> --trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N -b 9m
>
> ------------------------------------------------------------
>
> Client connecting to (**hidden**), TCP port 5001 with pid 108884 (4
>
> flows)
>
> Write buffer size: 131072 Byte
>
> TOS set to 0x0 and nodelay (Nagle off)
>
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
>
> Event based writes (pending queue watermark at 16384 bytes)
>
> ------------------------------------------------------------
>
> [ 1] local *.*.*.85%enp4s0 port 46448 connected with *.*.*.123 port
>
> 5001 (prefetch=16384) (trip-times) (sock=3) (qack)
>
> (icwnd/mss/irtt=14/1448/10666) (ct=10.70 ms) on 2023-01-06 12:27:45
>
> (PST)
>
> [ 3] local *.*.*.85%enp4s0 port 46460 connected with *.*.*.123 port
>
> 5001 (prefetch=16384) (trip-times) (sock=6) (qack)
>
> (icwnd/mss/irtt=14/1448/16499) (ct=16.54 ms) on 2023-01-06 12:27:45
>
> (PST)
>
> [ 2] local *.*.*.85%enp4s0 port 46454 connected with *.*.*.123 port
>
> 5001 (prefetch=16384) (trip-times) (sock=4) (qack)
>
> (icwnd/mss/irtt=14/1448/16580) (ct=16.86 ms) on 2023-01-06 12:27:45
>
> (PST)
>
> [ 4] local *.*.*.85%enp4s0 port 46458 connected with *.*.*.123 port
>
> 5001 (prefetch=16384) (trip-times) (sock=5) (qack)
>
> (icwnd/mss/irtt=14/1448/16802) (ct=16.83 ms) on 2023-01-06 12:27:45
>
> (PST)
>
> [ ID] Interval Transfer Bandwidth Write/Err Rtry
>
> Cwnd/RTT(var) NetPwr
>
> ...
>
> [ 2] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
>
> 134K/88055(12329) us 11.91
>
> [ 4] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
>
> 132K/74867(11755) us 14.01
>
> [ 1] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
>
> 134K/89101(13134) us 11.77
>
> [ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
>
> 131K/91451(11938) us 11.47
>
> [SUM] 4.00-5.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
>
> [ 2] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
>
> 134K/85135(14580) us 13.86
>
> [ 4] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
>
> 132K/85124(15654) us 13.86
>
> [ 1] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
>
> 134K/91336(11335) us 12.92
>
> [ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
>
> 131K/89185(13499) us 13.23
>
> [SUM] 5.00-6.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
>
> [ 2] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
>
> 134K/85687(13489) us 13.77
>
> [ 4] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
>
> 132K/82803(13001) us 14.25
>
> [ 1] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
>
> 134K/86869(15186) us 13.58
>
> [ 3] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
>
> 131K/91447(12515) us 12.90
>
> [SUM] 6.00-7.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
>
> [ 2] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
>
> 134K/81814(13168) us 12.82
>
> [ 4] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
>
> 132K/89008(13283) us 11.78
>
> [ 1] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
>
> 134K/89494(12151) us 11.72
>
> [ 3] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
>
> 131K/91083(12797) us 11.51
>
> [SUM] 7.00-8.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
>
> [root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
>
> ------------------------------------------------------------
>
> Server listening on TCP port 5001 with pid 233981
>
> Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
>
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
>
> ------------------------------------------------------------
>
> [ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46448
>
> (trip-times) (sock=4) (peer 2.1.9-master) (qack)
>
> (icwnd/mss/irtt=14/1448/11987) on 2023-01-06 12:27:45 (PST)
>
> [ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46454
>
> (trip-times) (sock=5) (peer 2.1.9-master) (qack)
>
> (icwnd/mss/irtt=14/1448/11132) on 2023-01-06 12:27:45 (PST)
>
> [ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46460
>
> (trip-times) (sock=6) (peer 2.1.9-master) (qack)
>
> (icwnd/mss/irtt=14/1448/11097) on 2023-01-06 12:27:45 (PST)
>
> [ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46458
>
> (trip-times) (sock=7) (peer 2.1.9-master) (qack)
>
> (icwnd/mss/irtt=14/1448/17823) on 2023-01-06 12:27:45 (PST)
>
> [ ID] Interval Transfer Bandwidth Burst Latency
>
> avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
>
> [ 4] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
>
> 93.383/90.103/95.661/2.232 ms (8/143028) 128 KByte 12.25
>
> 451=451:0:0:0:0:0:0:0
>
> [ 3] 0.00-1.00 sec 1.08 MBytes 9.06 Mbits/sec
>
> 96.834/95.229/102.645/2.442 ms (8/141580) 131 KByte 11.70
>
> 472=472:0:0:0:0:0:0:0
>
> [ 1] 0.00-1.00 sec 1.10 MBytes 9.19 Mbits/sec
>
> 95.183/92.623/97.579/1.431 ms (8/143571) 131 KByte 12.07
>
> 495=495:0:0:0:0:0:0:0
>
> [ 2] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
>
> 89.317/84.865/94.906/3.674 ms (8/143028) 122 KByte 12.81
>
> 489=489:0:0:0:0:0:0:0
>
> [ ID] Interval Transfer Bandwidth Reads=Dist
>
> [SUM] 0.00-1.00 sec 4.36 MBytes 36.6 Mbits/sec
>
> 1907=1907:0:0:0:0:0:0:0
>
> [ 4] 1.00-2.00 sec 1.07 MBytes 8.95 Mbits/sec
>
> 92.649/89.987/95.036/1.828 ms (9/124314) 96.5 KByte 12.08
>
> 492=492:0:0:0:0:0:0:0
>
> [ 3] 1.00-2.00 sec 1.06 MBytes 8.93 Mbits/sec
>
> 96.305/95.647/96.794/0.432 ms (9/123992) 100 KByte 11.59
>
> 480=480:0:0:0:0:0:0:0
>
> [ 1] 1.00-2.00 sec 1.06 MBytes 8.89 Mbits/sec
>
> 92.578/90.866/94.145/1.371 ms (9/123510) 95.8 KByte 12.01
>
> 513=513:0:0:0:0:0:0:0
>
> [ 2] 1.00-2.00 sec 1.07 MBytes 8.96 Mbits/sec
>
> 90.767/87.984/94.352/1.944 ms (9/124475) 94.7 KByte 12.34
>
> 489=489:0:0:0:0:0:0:0
>
> [SUM] 1.00-2.00 sec 4.26 MBytes 35.7 Mbits/sec
>
> 1974=1974:0:0:0:0:0:0:0
>
> [ 4] 2.00-3.00 sec 1.09 MBytes 9.13 Mbits/sec
>
> 93.977/91.795/96.561/1.693 ms (8/142656) 112 KByte 12.14
>
> 497=497:0:0:0:0:0:0:0
>
> [ 3] 2.00-3.00 sec 1.08 MBytes 9.04 Mbits/sec
>
> 96.544/95.815/97.798/0.693 ms (8/141208) 114 KByte 11.70
>
> 503=503:0:0:0:0:0:0:0
>
> [ 1] 2.00-3.00 sec 1.07 MBytes 9.01 Mbits/sec
>
> 93.970/91.193/96.325/1.796 ms (8/140846) 111 KByte 11.99
>
> 509=509:0:0:0:0:0:0:0
>
> [ 2] 2.00-3.00 sec 1.08 MBytes 9.10 Mbits/sec
>
> 92.843/90.216/96.355/2.040 ms (8/142113) 111 KByte 12.25
>
> 509=509:0:0:0:0:0:0:0
>
> [SUM] 2.00-3.00 sec 4.32 MBytes 36.3 Mbits/sec
>
> 2018=2018:0:0:0:0:0:0:0
>
> [ 4] 3.00-4.00 sec 1.06 MBytes 8.86 Mbits/sec
>
> 93.222/89.063/96.104/2.346 ms (9/123027) 96.1 KByte 11.88
>
> 487=487:0:0:0:0:0:0:0
>
> [ 3] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
>
> 96.277/95.051/97.230/0.767 ms (9/124636) 101 KByte 11.65
>
> 489=489:0:0:0:0:0:0:0
>
> [ 1] 3.00-4.00 sec 1.08 MBytes 9.02 Mbits/sec
>
> 93.899/88.732/96.972/2.737 ms (9/125280) 98.6 KByte 12.01
>
> 493=493:0:0:0:0:0:0:0
>
> [ 2] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
>
> 92.490/89.862/95.265/1.796 ms (9/124636) 96.6 KByte 12.13
>
> 493=493:0:0:0:0:0:0:0
>
> [SUM] 3.00-4.00 sec 4.27 MBytes 35.8 Mbits/sec
>
> 1962=1962:0:0:0:0:0:0:0
>
> [ 4] 4.00-5.00 sec 1.07 MBytes 9.00 Mbits/sec
>
> 92.431/81.888/96.221/4.524 ms (9/124958) 96.8 KByte 12.17
>
> 498=498:0:0:0:0:0:0:0
>
> [ 1] 4.00-5.00 sec 1.07 MBytes 8.97 Mbits/sec
>
> 95.018/93.445/96.200/0.957 ms (9/124636) 99.3 KByte 11.81
>
> 490=490:0:0:0:0:0:0:0
>
> [ 2] 4.00-5.00 sec 1.06 MBytes 8.90 Mbits/sec
>
> 93.874/86.485/95.672/2.810 ms (9/123671) 97.3 KByte 11.86
>
> 481=481:0:0:0:0:0:0:0
>
> [ 3] 4.00-5.00 sec 1.08 MBytes 9.09 Mbits/sec
>
> 95.737/93.881/97.197/0.972 ms (9/126245) 101 KByte 11.87
>
> 484=484:0:0:0:0:0:0:0
>
> [SUM] 4.00-5.00 sec 4.29 MBytes 36.0 Mbits/sec
>
> 1953=1953:0:0:0:0:0:0:0
>
> [ 4] 5.00-6.00 sec 1.09 MBytes 9.13 Mbits/sec
>
> 92.908/86.844/95.994/3.012 ms (8/142656) 111 KByte 12.28
>
> 467=467:0:0:0:0:0:0:0
>
> [ 3] 5.00-6.00 sec 1.07 MBytes 8.94 Mbits/sec
>
> 96.593/95.343/97.660/0.876 ms (8/139760) 113 KByte 11.58
>
> 478=478:0:0:0:0:0:0:0
>
> [ 1] 5.00-6.00 sec 1.08 MBytes 9.03 Mbits/sec
>
> 95.021/91.421/97.167/1.893 ms (8/141027) 112 KByte 11.87
>
> 491=491:0:0:0:0:0:0:0
>
> [ 2] 5.00-6.00 sec 1.08 MBytes 9.06 Mbits/sec
>
> 92.162/82.720/97.692/5.060 ms (8/141570) 109 KByte 12.29
>
> 488=488:0:0:0:0:0:0:0
>
> [SUM] 5.00-6.00 sec 4.31 MBytes 36.2 Mbits/sec
>
> 1924=1924:0:0:0:0:0:0:0
>
> [ 4] 6.00-7.00 sec 1.04 MBytes 8.70 Mbits/sec
>
> 92.793/85.343/96.967/3.552 ms (9/120775) 93.9 KByte 11.71
>
> 485=485:0:0:0:0:0:0:0
>
> [ 2] 6.00-7.00 sec 1.05 MBytes 8.79 Mbits/sec
>
> 91.679/84.479/96.760/3.975 ms (9/122062) 93.8 KByte 11.98
>
> 472=472:0:0:0:0:0:0:0
>
> [ 3] 6.00-7.00 sec 1.06 MBytes 8.88 Mbits/sec
>
> 96.982/95.933/98.371/0.680 ms (9/123349) 100 KByte 11.45
>
> 477=477:0:0:0:0:0:0:0
>
> [ 1] 6.00-7.00 sec 1.05 MBytes 8.80 Mbits/sec
>
> 94.342/91.660/96.025/1.660 ms (9/122223) 96.7 KByte 11.66
>
> 494=494:0:0:0:0:0:0:0
>
> [SUM] 6.00-7.00 sec 4.19 MBytes 35.2 Mbits/sec
>
> 1928=1928:0:0:0:0:0:0:0
>
> [ 4] 7.00-8.00 sec 1.10 MBytes 9.25 Mbits/sec
>
> 92.515/88.182/96.351/2.538 ms (8/144466) 112 KByte 12.49
>
> 510=510:0:0:0:0:0:0:0
>
> [ 3] 7.00-8.00 sec 1.09 MBytes 9.13 Mbits/sec
>
> 96.580/95.737/98.977/1.098 ms (8/142656) 115 KByte 11.82
>
> 480=480:0:0:0:0:0:0:0
>
> [ 1] 7.00-8.00 sec 1.10 MBytes 9.21 Mbits/sec
>
> 95.269/91.719/97.514/2.126 ms (8/143923) 115 KByte 12.09
>
> 515=515:0:0:0:0:0:0:0
>
> [ 2] 7.00-8.00 sec 1.11 MBytes 9.29 Mbits/sec
>
> 90.073/84.700/96.176/4.324 ms (8/145190) 110 KByte 12.90
>
> 508=508:0:0:0:0:0:0:0
>
> [SUM] 7.00-8.00 sec 4.40 MBytes 36.9 Mbits/sec
>
> 2013=2013:0:0:0:0:0:0:0
>
> Bob
>
>>> -----Original Message-----
>>> From: LibreQoS <libreqos-bounces@lists.bufferbloat.net> On Behalf Of Dave Taht via LibreQoS
>>> Sent: Wednesday, January 4, 2023 12:26 PM
>>> Subject: [LibreQoS] the grinch meets cloudflare's christmas present
>>>
>>> Please try the new, the shiny, the really wonderful test here:
>>> https://speed.cloudflare.com/
>>>
>>> I would really appreciate some independent verification of
>>> measurements using this tool. In my brief experiments it appears - as
>>> all the commercial tools to date - to dramatically understate the
>>> bufferbloat, on my LTE, (and my starlink terminal is out being
>>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>> [acm]
>>
>> Hi Dave, I made some time to test "cloudflare's christmas present"
>> yesterday.
>>
>> I'm on DOCSIS 3.1 service with 1Gbps Down. The Upstream has a "turbo"
>> mode with 40-50Mbps for the first ~3 sec, then steady-state about
>> 23Mbps.
>>
>> When I saw the ~620Mbps Downstream measurement, I was ready to
>> complain that even the IP-Layer Capacity was grossly underestimated.
>> In addition, the Latency measurements seem very low (as you asserted),
>> although the cloud server was "nearby".
>>
>> However, I found that Ookla and the ISP-provided measurement were also
>> reporting ~600Mbps! So the cloudflare Downstream capacity (or
>> throughput?) measurement was consistent with others. Our UDPST server
>> was unreachable, otherwise I would have added that measurement, too.
>>
>> The Upstream measurement graph seems to illustrate the "turbo"
>> mode, with the dip after attaining 44.5Mbps.
>>
>> UDPST saturates the uplink and we measure the full 250ms of the
>> Upstream buffer. Cloudflare's latency measurements don't even come
>> close.
>>
>> Al
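
A quick back-of-the-envelope on that 250 ms figure, assuming the ~23 Mbps
steady-state upstream rate Al quotes above (illustrative arithmetic only,
not a measurement):

# implied depth of an upstream buffer that adds 250 ms of delay at 23 Mbps
rate_bits_per_s = 23e6      # steady-state upstream rate quoted above
buffer_delay_s = 0.250      # worst-case queueing delay UDPST observed
buffer_bytes = rate_bits_per_s * buffer_delay_s / 8
print("implied buffer depth: %.0f KB" % (buffer_bytes / 1e3))   # ~719 KB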
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-06 23:45 ` rjmcmahon
@ 2023-01-07 0:31 ` Dick Roy
2023-01-10 17:25 ` [Cake] [Bloat] " Luis A. Cornejo
0 siblings, 1 reply; 26+ messages in thread
From: Dick Roy @ 2023-01-07 0:31 UTC (permalink / raw)
To: 'rjmcmahon'
Cc: 'MORTON JR., AL', 'IETF IPPM WG',
'libreqos', 'Cake List', 'Rpm',
'bloat'
[-- Attachment #1: Type: text/plain, Size: 25562 bytes --]
-----Original Message-----
From: rjmcmahon [mailto:rjmcmahon@rjmcmahon.com]
Sent: Friday, January 6, 2023 3:45 PM
To: dickroy@alum.mit.edu
Cc: 'MORTON JR., AL'; 'IETF IPPM WG'; 'libreqos'; 'Cake List'; 'Rpm';
'bloat'
Subject: Re: [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare's christmas present
yeah, I'd prefer not to output CLT sample groups at all but the
histograms aren't really human readable and users constantly ask for
them. I thought about providing a distance from the gaussian as output
too but so far few would understand it and nobody I found would act upon
it.
[RR] Understandable until such metrics are "actionable", and that's "up to
us to find/define/figure out", it seems to me. Metrics that are not
actionable are write-only memory, good for little but the historical
record :-)
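
For what it's worth, a minimal sketch of the "distance from the gaussian"
idea above, using a one-sample Kolmogorov-Smirnov statistic. This assumes
scipy is available and that per-write latency samples have been exported
from the iperf histogram; the sample list below is made up for
illustration, and since mu/sigma are fitted from the same data the p-value
is biased (a Lilliefors correction would fix that), so treat the statistic
as a relative distance rather than a formal test:

import numpy as np
from scipy import stats

def gaussian_distance(latencies_ms):
    # K-S distance between the observed samples and a normal distribution
    # fitted to them; 0 means the empirical CDF sits exactly on the Gaussian.
    samples = np.asarray(latencies_ms, dtype=float)
    mu, sigma = samples.mean(), samples.std(ddof=1)
    result = stats.kstest(samples, 'norm', args=(mu, sigma))
    return result.statistic, result.pvalue

# hypothetical per-write latencies in ms, e.g. from one 1-second interval
d, p = gaussian_distance([92.5, 95.1, 90.8, 130.2, 96.3, 93.9, 181.4, 94.7])
print("K-S distance %.3f (p=%.3f)" % (d, p))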
The tool produces the full histograms so no information is really
missing except for maybe better time series analysis.
[RR] Isn't that in fact what we are trying to extract from the e2e stats we
collect, i.e., infer the time evolution of the system from its I/O
behavior? As you point out, it's really hard to do without probes in the
guts of the system, and yes, synchronization is important :-)
The open source flows python code, also released with iperf 2, does use
Kolmogorov-Smirnov distances & distance matrices to cluster when the
number of histograms is just too large. We've analyzed 1M runs to
fault-isolate the "unexpected interruptions" or "bugs", and without
statistical support it is just not doable. This does require
instrumentation of the full path with mapping to a common clock domain
(e.g. GPS) and not just e2e stats. I find an e2e complaint by an end
user about "poor speed" as useful as telling a pharmacist I have a
fever. Not much diagnostically is going on. Take an aspirin.
[RR] That's AWESOME!!!!!!!!!!!!!!!!!! I love that analogy!
RR
https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/flows.py
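
For anyone who doesn't want to read flows.py, the clustering idea is
roughly the following. This is only a sketch, assuming scipy is available
and that each run's latency samples have been collected into an array; the
real flows code differs in detail:

import numpy as np
from scipy.stats import ks_2samp
from scipy.cluster.hierarchy import linkage, fcluster

def ks_distance_matrix(runs):
    # runs: list of 1-D arrays of latency samples, one per test run.
    # Returns the condensed pairwise K-S distance vector that linkage() expects.
    dists = []
    for i in range(len(runs)):
        for j in range(i + 1, len(runs)):
            dists.append(ks_2samp(runs[i], runs[j]).statistic)
    return np.array(dists)

def cluster_runs(runs, max_distance=0.2):
    # Group runs whose latency distributions are close in K-S distance;
    # outliers ("unexpected interruptions") fall out as their own clusters.
    tree = linkage(ks_distance_matrix(runs), method='average')
    return fcluster(tree, t=max_distance, criterion='distance')

# hypothetical example: three well-behaved runs plus one with a latency excursion
rng = np.random.default_rng(1)
runs = [rng.normal(95, 3, 500) for _ in range(3)] + [rng.normal(140, 30, 500)]
print(cluster_runs(runs))   # e.g. [1 1 1 2]: the odd run forms its own cluster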
Bob
> [... quoted text of the earlier messages, including the full iperf 2 output, snipped ...]
[-- Attachment #2: Type: text/html, Size: 149688 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-07 0:31 ` Dick Roy
@ 2023-01-10 17:25 ` Luis A. Cornejo
2023-01-11 5:07 ` [Cake] [Rpm] [Bloat] [Starlink] " Dave Taht
0 siblings, 1 reply; 26+ messages in thread
From: Luis A. Cornejo @ 2023-01-10 17:25 UTC (permalink / raw)
To: dickroy
Cc: rjmcmahon, Rpm, MORTON JR., AL, IETF IPPM WG, libreqos, Cake List, bloat
[-- Attachment #1.1: Type: text/plain, Size: 30248 bytes --]
Here is my VZ HSI.
No SQM on.
On Sat, Jan 7, 2023 at 6:38 PM Dick Roy via Bloat <
bloat@lists.bufferbloat.net> wrote:
> [... quoted text of the earlier messages, including the full iperf 2 output, snipped ...]
>
> >
>
> > 478=478:0:0:0:0:0:0:0
>
> >
>
> > [ 1] 5.00-6.00 sec 1.08 MBytes 9.03 Mbits/sec
>
> >
>
> > 95.021/91.421/97.167/1.893 ms (8/141027) 112 KByte 11.87
>
> >
>
> > 491=491:0:0:0:0:0:0:0
>
> >
>
> > [ 2] 5.00-6.00 sec 1.08 MBytes 9.06 Mbits/sec
>
> >
>
> > 92.162/82.720/97.692/5.060 ms (8/141570) 109 KByte 12.29
>
> >
>
> > 488=488:0:0:0:0:0:0:0
>
> >
>
> > [SUM] 5.00-6.00 sec 4.31 MBytes 36.2 Mbits/sec
>
> >
>
> > 1924=1924:0:0:0:0:0:0:0
>
> >
>
> > [ 4] 6.00-7.00 sec 1.04 MBytes 8.70 Mbits/sec
>
> >
>
> > 92.793/85.343/96.967/3.552 ms (9/120775) 93.9 KByte 11.71
>
> >
>
> > 485=485:0:0:0:0:0:0:0
>
> >
>
> > [ 2] 6.00-7.00 sec 1.05 MBytes 8.79 Mbits/sec
>
> >
>
> > 91.679/84.479/96.760/3.975 ms (9/122062) 93.8 KByte 11.98
>
> >
>
> > 472=472:0:0:0:0:0:0:0
>
> >
>
> > [ 3] 6.00-7.00 sec 1.06 MBytes 8.88 Mbits/sec
>
> >
>
> > 96.982/95.933/98.371/0.680 ms (9/123349) 100 KByte 11.45
>
> >
>
> > 477=477:0:0:0:0:0:0:0
>
> >
>
> > [ 1] 6.00-7.00 sec 1.05 MBytes 8.80 Mbits/sec
>
> >
>
> > 94.342/91.660/96.025/1.660 ms (9/122223) 96.7 KByte 11.66
>
> >
>
> > 494=494:0:0:0:0:0:0:0
>
> >
>
> > [SUM] 6.00-7.00 sec 4.19 MBytes 35.2 Mbits/sec
>
> >
>
> > 1928=1928:0:0:0:0:0:0:0
>
> >
>
> > [ 4] 7.00-8.00 sec 1.10 MBytes 9.25 Mbits/sec
>
> >
>
> > 92.515/88.182/96.351/2.538 ms (8/144466) 112 KByte 12.49
>
> >
>
> > 510=510:0:0:0:0:0:0:0
>
> >
>
> > [ 3] 7.00-8.00 sec 1.09 MBytes 9.13 Mbits/sec
>
> >
>
> > 96.580/95.737/98.977/1.098 ms (8/142656) 115 KByte 11.82
>
> >
>
> > 480=480:0:0:0:0:0:0:0
>
> >
>
> > [ 1] 7.00-8.00 sec 1.10 MBytes 9.21 Mbits/sec
>
> >
>
> > 95.269/91.719/97.514/2.126 ms (8/143923) 115 KByte 12.09
>
> >
>
> > 515=515:0:0:0:0:0:0:0
>
> >
>
> > [ 2] 7.00-8.00 sec 1.11 MBytes 9.29 Mbits/sec
>
> >
>
> > 90.073/84.700/96.176/4.324 ms (8/145190) 110 KByte 12.90
>
> >
>
> > 508=508:0:0:0:0:0:0:0
>
> >
>
> > [SUM] 7.00-8.00 sec 4.40 MBytes 36.9 Mbits/sec
>
> >
>
> > 2013=2013:0:0:0:0:0:0:0
>
> >
>
> > Bob
>
> >
>
> >>> -----Original Message-----
>
> >
>
> >>
>
> >
>
> >>> From: LibreQoS <libreqos-bounces@lists.bufferbloat.net> On Behalf
>
> > Of
>
> >
>
> >> Dave Taht
>
> >
>
> >>
>
> >
>
> >>> via LibreQoS
>
> >
>
> >>
>
> >
>
> >>> Sent: Wednesday, January 4, 2023 12:26 PM
>
> >
>
> >>
>
> >
>
> >>> Subject: [LibreQoS] the grinch meets cloudflare's christmas present
>
> >
>
> >
>
> >>
>
> >
>
> >>>
>
> >
>
> >>
>
> >
>
> >>> Please try the new, the shiny, the really wonderful test here:
>
> >
>
> >>
>
> >
>
> >>>
>
> >
>
> >>
>
> >
> > >>> https://speed.cloudflare.com/ [1]
> >>> I would really appreciate some independent verification of
>
> >
>
> >>
>
> >
>
> >>> measurements using this tool. In my brief experiments it appears -
>
> >
>
> >> as
>
> >
>
> >>
>
> >
>
> >>> all the commercial tools to date - to dramatically understate the
>
> >
>
> >>
>
> >
>
> >>> bufferbloat, on my LTE, (and my starlink terminal is out being
>
> >
>
> >>
>
> >
>
> >>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>
> >
>
> >>
>
> >
>
> >> [acm]
> >>
> >> Hi Dave, I made some time to test "cloudflare's christmas present" yesterday.
> >>
> >> I'm on DOCSIS 3.1 service with 1Gbps Down. The Upstream has a "turbo" mode with 40-50Mbps for the first ~3 sec, then steady-state about 23Mbps.
> >>
> >> When I saw the ~620Mbps Downstream measurement, I was ready to complain that even the IP-Layer Capacity was grossly underestimated. In addition, the Latency measurements seem very low (as you asserted), although the cloud server was "nearby".
> >>
> >> However, I found that Ookla and the ISP-provided measurement were also reporting ~600Mbps! So the cloudflare Downstream capacity (or throughput?) measurement was consistent with others. Our UDPST server was unreachable, otherwise I would have added that measurement, too.
> >>
> >> The Upstream measurement graph seems to illustrate the "turbo" mode, with the dip after attaining 44.5Mbps.
> >>
> >> UDPST saturates the uplink and we measure the full 250ms of the Upstream buffer. Cloudflare's latency measurements don't even come close.
> >>
> >> Al
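
(As a rough, back-of-the-envelope check on that last figure: 250 ms of standing queue at the ~23 Mbit/s steady-state upstream rate works out to about 23e6 x 0.25 / 8, roughly 0.7 MB of buffered data. This is only arithmetic on the numbers quoted above, not a measured value.)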
>
> >
>
> >>
>
> >
>
> >>
>
> >
>
> >>
>
> >
>
> >> Links:
>
> >
>
> >> ------
>
> >
>
> >> [1]
>
> >
>
> >>
>
> >
> https://speed.cloudflare.com/
>
> >
>
> >
>
> >> _______________________________________________
>
> >
>
> >> Rpm mailing list
>
> >
>
> >> Rpm@lists.bufferbloat.net
>
> >
>
> >> https://lists.bufferbloat.net/listinfo/rpm
>
> >
>
> > _______________________________________________
>
> >
>
> > Starlink mailing list
>
> >
>
> > Starlink@lists.bufferbloat.net
>
> >
>
> > https://lists.bufferbloat.net/listinfo/starlink
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
[-- Attachment #1.2: Type: text/html, Size: 171406 bytes --]
[-- Attachment #2: Screenshot 2023-01-10 at 11-19-57 Internet Speed Test - Measure Network Performance Cloudflare.png --]
[-- Type: image/png, Size: 631718 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Rpm] [Bloat] [Starlink] [LibreQoS] the grinch meets cloudflare'schristmas present
2023-01-10 17:25 ` [Cake] [Bloat] " Luis A. Cornejo
@ 2023-01-11 5:07 ` Dave Taht
2023-01-11 11:05 ` [Cake] [Bloat] [Rpm] " Jay Moran
0 siblings, 1 reply; 26+ messages in thread
From: Dave Taht @ 2023-01-11 5:07 UTC (permalink / raw)
To: Luis A. Cornejo
Cc: dickroy, Cake List, MORTON JR., AL, IETF IPPM WG, libreqos, Rpm, bloat
Dear Luis:
You hit 17 seconds of delay on your test.
I got you beat, today, on my LTE connection, I cracked 182 seconds.
I'd like to thank Verizon for making it possible for me to spew 4000
words on my kvetches about the current speedtest regimes of speedtest,
cloudflare, and so on, by making my network connection so lousy today
that I sat in front of emacs to rant - and y'all for helping tone
down, a little, this blog entry:
https://blog.cerowrt.org/post/speedtests/
On Tue, Jan 10, 2023 at 9:25 AM Luis A. Cornejo via Rpm
<rpm@lists.bufferbloat.net> wrote:
>
> Here is my VZ HSI
>
>
> No SQM on
>
> On Sat, Jan 7, 2023 at 6:38 PM Dick Roy via Bloat <bloat@lists.bufferbloat.net> wrote:
>>
>>
>>
>>
>>
>> -----Original Message-----
>> From: rjmcmahon [mailto:rjmcmahon@rjmcmahon.com]
>> Sent: Friday, January 6, 2023 3:45 PM
>> To: dickroy@alum.mit.edu
>> Cc: 'MORTON JR., AL'; 'IETF IPPM WG'; 'libreqos'; 'Cake List'; 'Rpm'; 'bloat'
>> Subject: Re: [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare'schristmas present
>>
>> yeah, I'd prefer not to output CLT sample groups at all, but the histograms aren't really human readable and users constantly ask for them. I thought about providing a distance from the gaussian as output too, but so far few would understand it and nobody I found would act upon it.
>>
>> [RR] Understandable until such metrics are "actionable", and that's "up to us to find/define/figure out", it seems to me. Metrics that are not actionable are write-only memory and good for little but the historical record :-)
>>
>> The tool produces the full histograms, so no information is really missing except for maybe better time series analysis.
>>
>> [RR] Isn't that in fact what we are trying to extract from the e2e stats we collect? i.e., infer the time evolution of the system from its I/O behavior? As you point out, it's really hard to do without probes in the guts of the system, and yes, synchronization is important :-)
>>
>> The open source flows python code also released with iperf 2 does use the Kolmogorov-Smirnov distances & distance matrices to cluster when the number of histograms is just too much. We've analyzed 1M runs to fault isolate the "unexpected interruptions" or "bugs", and without statistical support it is just not doable. This does require instrumentation of the full path with mapping to a common clock domain (e.g. GPS), and not just e2e stats. I find an e2e complaint by an end user about "poor speed" as useful as telling a pharmacist I have a fever. Not much diagnostically is going on. Take an aspirin.
>>
>> [RR] That's AWESOME!!!!!!!!!!!!!!!!!! I love that analogy!
>>
>> RR
>>
>> https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
>> https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/flows.py
>>
>> Bob
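
(For readers who want to experiment with the clustering idea described above without flows.py itself, below is a minimal illustrative sketch in Python. It is not flows.py, and the file names and the one-value-per-line format are assumptions; it only computes the two-sample Kolmogorov-Smirnov distance between two latency samples, e.g. ones exported from iperf 2 histograms.)

import numpy as np
from scipy import stats

def load_latency_ms(path):
    # Assumed format: one latency value in milliseconds per line.
    return np.loadtxt(path)

run_a = load_latency_ms("run_a_latency_ms.txt")  # hypothetical export of one run's samples
run_b = load_latency_ms("run_b_latency_ms.txt")  # hypothetical export of another run

# Two-sample KS statistic: the maximum distance between the two empirical CDFs.
result = stats.ks_2samp(run_a, run_b)
print(f"KS distance = {result.statistic:.3f}, p-value = {result.pvalue:.3g}")

(Pairwise distances like this, computed over many runs, are what a distance matrix for clustering would be built from.)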
>>
>> > See below …
>> >
>> > -----Original Message-----
>> > From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of rjmcmahon via Starlink
>> > Sent: Friday, January 6, 2023 12:39 PM
>> > To: MORTON JR., AL
>> > Cc: Dave Taht via Starlink; IETF IPPM WG; libreqos; Cake List; Rpm; bloat
>> > Subject: Re: [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare'schristmas present
>> >
>> > Some thoughts are not to use UDP for testing here. Also, these speed tests have little to no information for network engineers about what's going on. Iperf 2 may better assist network engineers, but then I'm biased ;)
>> >
>> > Running iperf 2 https://sourceforge.net/projects/iperf2/ with --trip-times. Though the sampling and central limit theorem averaging is hiding the real distributions (use --histograms to get those).
>> >
>> > [RR] FWIW (IMNBWM :-))… If the output/final histograms indicate the PDF is NOT Gaussian, then any application of the CLT is inappropriate/contra-indicated! The CLT is a proof, under certain regularity conditions/assumptions on the underlying/constituent PDFs, that the resulting PDF (after all the necessary convolutions are performed to get to the PDF of the output) will asymptotically approach a Gaussian, with only a mean and a std. dev. left to specify.
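
(For reference, the textbook statement [RR] is alluding to is the classical i.i.d. central limit theorem; this is a standard form, not anything specific to iperf 2. For independent, identically distributed samples $X_1, \dots, X_n$ with mean $\mu$ and finite variance $\sigma^2$,

\[ \sqrt{n}\,\frac{\bar{X}_n - \mu}{\sigma} \;\xrightarrow{d}\; \mathcal{N}(0,1) \quad \text{as } n \to \infty, \qquad \bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i . \]

If the samples are strongly correlated, drawn from a mixture of regimes, or so heavy-tailed that the variance is effectively unbounded - all plausible for queueing delay - the theorem's assumptions do not hold and a mean/stdev summary can be misleading, which is the point being made above.)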
>>
>> >
>>
>> > Below are 4 parallel TCP streams from my home to one of my servers in the cloud. First where TCP is limited per CCA. Second is source side write rate limiting. Things to note:
>> >
>> > o) connect times for both at 10-15 ms
>> > o) multiple TCP retries on a few writes - one case is 4 with 5 writes. Source side pacing eliminates retries
>> > o) Fairness with CCA isn't great but quite good with source side write pacing
>> > o) Queue depth with CCA is about 150 Kbytes, about 100 Kbytes with source side pacing
>> > o) min write to read is about 80 ms for both
>> > o) max is 220 ms vs 97 ms
>> > o) stdev for CCA write/read is 30 ms vs 3 ms
>> > o) TCP RTT is 20 ms w/CCA and 90 ms with ssp - seems odd here as TCP_QUICACK and TCP_NODELAY are both enabled.
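
(A note on reading the NetPwr column in the reports below: the values are consistent with "network power", i.e. delivered throughput divided by delay,

\[ \mathrm{NetPwr} \approx \frac{\text{throughput (bytes/s)}}{\text{delay (s)}} , \]

scaled down by 10^6 in the printout. For example, stream [ 1] in the 4.00-5.00 sec interval of the CCA-limited run below reports 7.34 Mbits/sec with an RTT of 18996 us: 7.34e6/8 bytes/s divided by 0.019 s is about 48.3e6, matching the printed 48.30. Treat this as a reading inferred from the numbers rather than an authoritative definition.)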
>>
>> >
>>
>> > [ CT] final connect times (min/avg/max/stdev) = 10.326/13.522/14.986/2150.329 ms (tot/err) = 4/0
>> >
>> > [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e --trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N
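
(If the full latency distributions mentioned above are wanted rather than the avg/min/max/stdev summaries, iperf 2's --histograms option is the documented way to get them. A sketch of how it might be combined with the runs shown here - the server-side placement and the absence of extra binning arguments are assumptions, so check the iperf 2 man page:

[root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m --histograms

with the client invocation above left unchanged, since --trip-times is what lets the receiver compute write-to-read latency.)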
>>
>> >
>>
>> > ------------------------------------------------------------
>>
>> >
>>
>> > Client connecting to (**hidden**), TCP port 5001 with pid 107678 (4
>>
>> >
>>
>> > flows)
>>
>> >
>>
>> > Write buffer size: 131072 Byte
>>
>> >
>>
>> > TOS set to 0x0 and nodelay (Nagle off)
>>
>> >
>>
>> > TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
>>
>> >
>>
>> > Event based writes (pending queue watermark at 16384 bytes)
>>
>> >
>>
>> > ------------------------------------------------------------
>>
>> >
>>
>> > [ 1] local *.*.*.85%enp4s0 port 42480 connected with *.*.*.123 port
>>
>> >
>>
>> > 5001 (prefetch=16384) (trip-times) (sock=3) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/10534) (ct=10.63 ms) on 2023-01-06 12:17:56
>>
>> >
>>
>> > (PST)
>>
>> >
>>
>> > [ 4] local *.*.*.85%enp4s0 port 42488 connected with *.*.*.123 port
>>
>> >
>>
>> > 5001 (prefetch=16384) (trip-times) (sock=5) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/14023) (ct=14.08 ms) on 2023-01-06 12:17:56
>>
>> >
>>
>> > (PST)
>>
>> >
>>
>> > [ 3] local *.*.*.85%enp4s0 port 42502 connected with *.*.*.123 port
>>
>> >
>>
>> > 5001 (prefetch=16384) (trip-times) (sock=6) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/14642) (ct=14.70 ms) on 2023-01-06 12:17:56
>>
>> >
>>
>> > (PST)
>>
>> >
>>
>> > [ 2] local *.*.*.85%enp4s0 port 42484 connected with *.*.*.123 port
>>
>> >
>>
>> > 5001 (prefetch=16384) (trip-times) (sock=4) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/14728) (ct=14.79 ms) on 2023-01-06 12:17:56
>>
>> >
>>
>> > (PST)
>>
>> >
>>
>> > [ ID] Interval Transfer Bandwidth Write/Err Rtry
>>
>> >
>>
>> > Cwnd/RTT(var) NetPwr
>>
>> >
>>
>> > ...
>>
>> >
>>
>> > [ 4] 4.00-5.00 sec 1.38 MBytes 11.5 Mbits/sec 11/0 3
>>
>> >
>>
>> >
>>
>> > 29K/21088(1142) us 68.37
>>
>> >
>>
>> > [ 2] 4.00-5.00 sec 1.62 MBytes 13.6 Mbits/sec 13/0 2
>>
>> >
>>
>> >
>>
>> > 31K/19284(612) us 88.36
>>
>> >
>>
>> > [ 1] 4.00-5.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
>>
>> >
>>
>> > 16K/18996(658) us 48.30
>>
>> >
>>
>> > [ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 5
>>
>> >
>>
>> > 18K/18133(208) us 57.83
>>
>> >
>>
>> > [SUM] 4.00-5.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 15
>>
>> >
>>
>> > [ 4] 5.00-6.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 4
>>
>> >
>>
>> >
>>
>> > 29K/14717(489) us 89.06
>>
>> >
>>
>> > [ 1] 5.00-6.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 4
>>
>> >
>>
>> > 16K/15874(408) us 66.06
>>
>> >
>>
>> > [ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
>>
>> >
>>
>> > 16K/15826(382) us 74.54
>>
>> >
>>
>> > [ 2] 5.00-6.00 sec 1.50 MBytes 12.6 Mbits/sec 12/0 6
>>
>> >
>>
>> >
>>
>> > 9K/14878(557) us 106
>>
>> >
>>
>> > [SUM] 5.00-6.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 18
>>
>> >
>>
>> > [ 4] 6.00-7.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
>>
>> >
>>
>> >
>>
>> > 25K/15472(496) us 119
>>
>> >
>>
>> > [ 2] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 2
>>
>> >
>>
>> > 26K/16417(427) us 63.87
>>
>> >
>>
>> > [ 1] 6.00-7.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 5
>>
>> >
>>
>> >
>>
>> > 16K/16268(679) us 80.57
>>
>> >
>>
>> > [ 3] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 6
>>
>> >
>>
>> > 15K/16629(799) us 63.06
>>
>> >
>>
>> > [SUM] 6.00-7.00 sec 5.00 MBytes 41.9 Mbits/sec 40/0 17
>>
>> >
>>
>> > [ 4] 7.00-8.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
>>
>> >
>>
>> >
>>
>> > 22K/13986(519) us 131
>>
>> >
>>
>> > [ 1] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
>>
>> >
>>
>> > 16K/12679(377) us 93.04
>>
>> >
>>
>> > [ 3] 7.00-8.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
>>
>> >
>>
>> > 14K/12971(367) us 70.74
>>
>> >
>>
>> > [ 2] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 6
>>
>> >
>>
>> > 15K/14740(779) us 80.03
>>
>> >
>>
>> > [SUM] 7.00-8.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 19
>>
>> >
>>
>> > [root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
>>
>> >
>>
>> > ------------------------------------------------------------
>>
>> >
>>
>> > Server listening on TCP port 5001 with pid 233615
>>
>> >
>>
>> > Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
>>
>> >
>>
>> > TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
>>
>> >
>>
>> > ------------------------------------------------------------
>>
>> >
>>
>> > [ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
>>
>> > 42480
>>
>> >
>>
>> > (trip-times) (sock=4) (peer 2.1.9-master) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/11636) on 2023-01-06 12:17:56 (PST)
>>
>> >
>>
>> > [ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
>>
>> > 42502
>>
>> >
>>
>> > (trip-times) (sock=5) (peer 2.1.9-master) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/11898) on 2023-01-06 12:17:56 (PST)
>>
>> >
>>
>> > [ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
>>
>> > 42484
>>
>> >
>>
>> > (trip-times) (sock=6) (peer 2.1.9-master) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/11938) on 2023-01-06 12:17:56 (PST)
>>
>> >
>>
>> > [ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
>>
>> > 42488
>>
>> >
>>
>> > (trip-times) (sock=7) (peer 2.1.9-master) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/11919) on 2023-01-06 12:17:56 (PST)
>>
>> >
>>
>> > [ ID] Interval Transfer Bandwidth Burst Latency
>>
>> >
>>
>> > avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
>>
>> >
>>
>> > ...
>>
>> >
>>
>> > [ 2] 4.00-5.00 sec 1.06 MBytes 8.86 Mbits/sec
>>
>> >
>>
>> > 129.819/90.391/186.075/31.346 ms (9/123080) 154 KByte 8.532803
>>
>> >
>>
>> > 467=461:6:0:0:0:0:0:0
>>
>> >
>>
>> > [ 3] 4.00-5.00 sec 1.52 MBytes 12.8 Mbits/sec
>>
>> >
>>
>> > 103.552/82.653/169.274/28.382 ms (12/132854) 149 KByte 15.40
>>
>> >
>>
>> > 646=643:1:2:0:0:0:0:0
>>
>> >
>>
>> > [ 4] 4.00-5.00 sec 1.39 MBytes 11.6 Mbits/sec
>>
>> >
>>
>> > 107.503/66.843/143.038/24.269 ms (11/132294) 149 KByte 13.54
>>
>> >
>>
>> > 619=617:1:1:0:0:0:0:0
>>
>> >
>>
>> > [ 1] 4.00-5.00 sec 988 KBytes 8.10 Mbits/sec
>>
>> >
>>
>> > 141.389/119.961/178.785/18.812 ms (7/144593) 170 KByte 7.158641
>>
>> >
>>
>> > 409=404:5:0:0:0:0:0:0
>>
>> >
>>
>> > [SUM] 4.00-5.00 sec 4.93 MBytes 41.4 Mbits/sec
>>
>> >
>>
>> > 2141=2125:13:3:0:0:0:0:0
>>
>> >
>>
>> > [ 4] 5.00-6.00 sec 1.29 MBytes 10.8 Mbits/sec
>>
>> >
>>
>> > 118.943/86.253/176.128/31.248 ms (10/135098) 164 KByte 11.36
>>
>> >
>>
>> > 511=506:2:3:0:0:0:0:0
>>
>> >
>>
>> > [ 2] 5.00-6.00 sec 1.09 MBytes 9.17 Mbits/sec
>>
>> >
>>
>> > 139.821/102.418/218.875/40.422 ms (9/127424) 148 KByte 8.202049
>>
>> >
>>
>> > 487=484:2:1:0:0:0:0:0
>>
>> >
>>
>> > [ 3] 5.00-6.00 sec 1.51 MBytes 12.6 Mbits/sec
>>
>> >
>>
>> > 102.146/77.085/140.893/18.441 ms (13/121520) 151 KByte 15.47
>>
>> >
>>
>> > 640=636:1:3:0:0:0:0:0
>>
>> >
>>
>> > [ 1] 5.00-6.00 sec 981 KBytes 8.04 Mbits/sec
>>
>> >
>>
>> > 161.901/105.582/219.931/36.260 ms (8/125614) 134 KByte 6.206944
>>
>> >
>>
>> > 415=413:2:0:0:0:0:0:0
>>
>> >
>>
>> > [SUM] 5.00-6.00 sec 4.85 MBytes 40.7 Mbits/sec
>>
>> >
>>
>> > 2053=2039:7:7:0:0:0:0:0
>>
>> >
>>
>> > [ 4] 6.00-7.00 sec 1.74 MBytes 14.6 Mbits/sec
>>
>> >
>>
>> > 88.846/74.297/101.859/7.118 ms (14/130526) 156 KByte 20.57
>>
>> >
>>
>> > 711=707:3:1:0:0:0:0:0
>>
>> >
>>
>> > [ 1] 6.00-7.00 sec 1.22 MBytes 10.2 Mbits/sec
>>
>> >
>>
>> > 120.639/100.257/157.567/21.770 ms (10/127568) 157 KByte 10.57
>>
>> >
>>
>> > 494=488:5:1:0:0:0:0:0
>>
>> >
>>
>> > [ 2] 6.00-7.00 sec 1015 KBytes 8.32 Mbits/sec
>>
>> >
>>
>> > 144.632/124.368/171.349/16.597 ms (8/129958) 143 KByte 7.188321
>>
>> >
>>
>> > 408=403:5:0:0:0:0:0:0
>>
>> >
>>
>> > [ 3] 6.00-7.00 sec 1.02 MBytes 8.60 Mbits/sec
>>
>> >
>>
>> > 143.516/102.322/173.001/24.089 ms (8/134302) 146 KByte 7.486359
>>
>> >
>>
>> > 484=480:4:0:0:0:0:0:0
>>
>> >
>>
>> > [SUM] 6.00-7.00 sec 4.98 MBytes 41.7 Mbits/sec
>>
>> >
>>
>> > 2097=2078:17:2:0:0:0:0:0
>>
>> >
>>
>> > [ 4] 7.00-8.00 sec 1.77 MBytes 14.9 Mbits/sec
>>
>> >
>>
>> > 85.406/65.797/103.418/12.609 ms (14/132595) 153 KByte 21.74
>>
>> >
>>
>> > 692=687:2:3:0:0:0:0:0
>>
>> >
>>
>> > [ 2] 7.00-8.00 sec 957 KBytes 7.84 Mbits/sec
>>
>> >
>>
>> > 153.936/131.452/191.464/19.361 ms (7/140042) 160 KByte 6.368199
>>
>> >
>>
>> > 429=425:4:0:0:0:0:0:0
>>
>> >
>>
>> > [ 1] 7.00-8.00 sec 1.13 MBytes 9.44 Mbits/sec
>>
>> >
>>
>> > 131.146/109.737/166.774/22.035 ms (9/131124) 146 KByte 8.998528
>>
>> >
>>
>> > 520=516:4:0:0:0:0:0:0
>>
>> >
>>
>> > [ 3] 7.00-8.00 sec 1.13 MBytes 9.51 Mbits/sec
>>
>> >
>>
>> > 126.512/88.404/220.175/42.237 ms (9/132089) 172 KByte 9.396784
>>
>> >
>>
>> > 527=524:1:2:0:0:0:0:0
>>
>> >
>>
>> > [SUM] 7.00-8.00 sec 4.96 MBytes 41.6 Mbits/sec
>>
>> >
>>
>> > 2168=2152:11:5:0:0:0:0:0
>>
>> >
>>
>> > With source side rate limiting to 9 mb/s per stream.
>>
>> >
>>
>> > [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
>>
>> >
>>
>> > --trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N -b 9m
>>
>> >
>>
>> > ------------------------------------------------------------
>>
>> >
>>
>> > Client connecting to (**hidden**), TCP port 5001 with pid 108884 (4
>>
>> >
>>
>> > flows)
>>
>> >
>>
>> > Write buffer size: 131072 Byte
>>
>> >
>>
>> > TOS set to 0x0 and nodelay (Nagle off)
>>
>> >
>>
>> > TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
>>
>> >
>>
>> > Event based writes (pending queue watermark at 16384 bytes)
>>
>> >
>>
>> > ------------------------------------------------------------
>>
>> >
>>
>> > [ 1] local *.*.*.85%enp4s0 port 46448 connected with *.*.*.123 port
>>
>> >
>>
>> > 5001 (prefetch=16384) (trip-times) (sock=3) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/10666) (ct=10.70 ms) on 2023-01-06 12:27:45
>>
>> >
>>
>> > (PST)
>>
>> >
>>
>> > [ 3] local *.*.*.85%enp4s0 port 46460 connected with *.*.*.123 port
>>
>> >
>>
>> > 5001 (prefetch=16384) (trip-times) (sock=6) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/16499) (ct=16.54 ms) on 2023-01-06 12:27:45
>>
>> >
>>
>> > (PST)
>>
>> >
>>
>> > [ 2] local *.*.*.85%enp4s0 port 46454 connected with *.*.*.123 port
>>
>> >
>>
>> > 5001 (prefetch=16384) (trip-times) (sock=4) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/16580) (ct=16.86 ms) on 2023-01-06 12:27:45
>>
>> >
>>
>> > (PST)
>>
>> >
>>
>> > [ 4] local *.*.*.85%enp4s0 port 46458 connected with *.*.*.123 port
>>
>> >
>>
>> > 5001 (prefetch=16384) (trip-times) (sock=5) (qack)
>>
>> >
>>
>> > (icwnd/mss/irtt=14/1448/16802) (ct=16.83 ms) on 2023-01-06 12:27:45
>>
>> >
>>
>> > (PST)
>>
>> >
>>
>> > [ ID] Interval Transfer Bandwidth Write/Err Rtry
>>
>> >
>>
>> > Cwnd/RTT(var) NetPwr
>>
>> >
>>
>> > ...
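
(A quick consistency check on this rate-limited run, using the flags above and the per-second [SUM] lines shown earlier in the thread: four streams at -b 9m is an offered load of 4 x 9 = 36 Mbit/s, and the [SUM] throughput for this run sits between roughly 33.6 and 37.7 Mbit/s, so here it is the source-side pacing rather than the congestion control that bounds throughput.)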
>>
>> >
>>
>>
>> >
>>
>> > Bob
>>
>> >
>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets cloudflare'schristmas present
2023-01-11 5:07 ` [Cake] [Rpm] [Bloat] [Starlink] " Dave Taht
@ 2023-01-11 11:05 ` Jay Moran
2023-01-12 16:03 ` Luis A. Cornejo
0 siblings, 1 reply; 26+ messages in thread
From: Jay Moran @ 2023-01-11 11:05 UTC (permalink / raw)
To: Dave Taht
Cc: Cake List, IETF IPPM WG, Luis A. Cornejo, MORTON JR.,
AL, Rpm, bloat, dickroy, libreqos
[-- Attachment #1: Type: text/plain, Size: 38495 bytes --]
Quick note from reading your blog entry.

Last night I played with the Cloudflare speed test a little. On a "speedier" network it also downloads a 25MB and a 50MB (or 100MB, can't remember) file after it does the 10MB file.

I was getting 1.2Gbps down and 760Mbps up with 4ms of latency under load, and seeing those larger file sizes. I was trying to screenshot and noticed the extra file sizes I had to scroll down for, then got distracted and never took the shot to send. But it will do a longer/bigger test under the right conditions.

The network here at the house is AT&T Fiber, 5Gbps up/down - limited to 3.6Gbps down by a Ubiquiti UDM SE router/firewall with all IPS/geo-blocking turned on, and 4.7Gbps non-blocking up. I am building a pfSense box to eliminate the bottleneck. Couldn't be happier, good job AS7018.

The machine I was testing from was Win10, wired at 10Gbps, and it gets ~2.2Gbps up/down on fast.com/speedtest.net. I haven't taken the time to test internally or to tune that system (it might also be a CAT5e cabling issue), but it is fast enough for me.

Jay
> >> > [ 1] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> >>
> >> >
> >>
> >> > 134K/91336(11335) us 12.92
> >>
> >> >
> >>
> >> > [ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> >>
> >> >
> >>
> >> > 131K/89185(13499) us 13.23
> >>
> >> >
> >>
> >> > [SUM] 5.00-6.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
> >>
> >> >
> >>
> >> > [ 2] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> >>
> >> >
> >>
> >> > 134K/85687(13489) us 13.77
> >>
> >> >
> >>
> >> > [ 4] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> >>
> >> >
> >>
> >> > 132K/82803(13001) us 14.25
> >>
> >> >
> >>
> >> > [ 1] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> >>
> >> >
> >>
> >> > 134K/86869(15186) us 13.58
> >>
> >> >
> >>
> >> > [ 3] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> >>
> >> >
> >>
> >> > 131K/91447(12515) us 12.90
> >>
> >> >
> >>
> >> > [SUM] 6.00-7.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
> >>
> >> >
> >>
> >> > [ 2] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> >>
> >> >
> >>
> >> > 134K/81814(13168) us 12.82
> >>
> >> >
> >>
> >> > [ 4] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> >>
> >> >
> >>
> >> > 132K/89008(13283) us 11.78
> >>
> >> >
> >>
> >> > [ 1] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> >>
> >> >
> >>
> >> > 134K/89494(12151) us 11.72
> >>
> >> >
> >>
> >> > [ 3] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> >>
> >> >
> >>
> >> > 131K/91083(12797) us 11.51
> >>
> >> >
> >>
> >> > [SUM] 7.00-8.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
> >>
> >> >
> >>
> >> > [root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
> >>
> >> >
> >>
> >> > ------------------------------------------------------------
> >>
> >> >
> >>
> >> > Server listening on TCP port 5001 with pid 233981
> >>
> >> >
> >>
> >> > Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
> >>
> >> >
> >>
> >> > TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
> >>
> >> >
> >>
> >> > ------------------------------------------------------------
> >>
> >> >
> >>
> >> > [ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> >>
> >> > 46448
> >>
> >> >
> >>
> >> > (trip-times) (sock=4) (peer 2.1.9-master) (qack)
> >>
> >> >
> >>
> >> > (icwnd/mss/irtt=14/1448/11987) on 2023-01-06 12:27:45 (PST)
> >>
> >> >
> >>
> >> > [ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> >>
> >> > 46454
> >>
> >> >
> >>
> >> > (trip-times) (sock=5) (peer 2.1.9-master) (qack)
> >>
> >> >
> >>
> >> > (icwnd/mss/irtt=14/1448/11132) on 2023-01-06 12:27:45 (PST)
> >>
> >> >
> >>
> >> > [ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> >>
> >> > 46460
> >>
> >> >
> >>
> >> > (trip-times) (sock=6) (peer 2.1.9-master) (qack)
> >>
> >> >
> >>
> >> > (icwnd/mss/irtt=14/1448/11097) on 2023-01-06 12:27:45 (PST)
> >>
> >> >
> >>
> >> > [ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> >>
> >> > 46458
> >>
> >> >
> >>
> >> > (trip-times) (sock=7) (peer 2.1.9-master) (qack)
> >>
> >> >
> >>
> >> > (icwnd/mss/irtt=14/1448/17823) on 2023-01-06 12:27:45 (PST)
> >>
> >> >
> >>
> >> > [ ID] Interval Transfer Bandwidth Burst Latency
> >>
> >> >
> >>
> >> > avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
> >>
> >> >
> >>
> >> > [ 4] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
> >>
> >> >
> >>
> >> > 93.383/90.103/95.661/2.232 ms (8/143028) 128 KByte 12.25
> >>
> >> >
> >>
> >> > 451=451:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 3] 0.00-1.00 sec 1.08 MBytes 9.06 Mbits/sec
> >>
> >> >
> >>
> >> > 96.834/95.229/102.645/2.442 ms (8/141580) 131 KByte 11.70
> >>
> >> >
> >>
> >> > 472=472:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 1] 0.00-1.00 sec 1.10 MBytes 9.19 Mbits/sec
> >>
> >> >
> >>
> >> > 95.183/92.623/97.579/1.431 ms (8/143571) 131 KByte 12.07
> >>
> >> >
> >>
> >> > 495=495:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 2] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
> >>
> >> >
> >>
> >> > 89.317/84.865/94.906/3.674 ms (8/143028) 122 KByte 12.81
> >>
> >> >
> >>
> >> > 489=489:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ ID] Interval Transfer Bandwidth Reads=Dist
> >>
> >> >
> >>
> >> > [SUM] 0.00-1.00 sec 4.36 MBytes 36.6 Mbits/sec
> >>
> >> >
> >>
> >> > 1907=1907:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 4] 1.00-2.00 sec 1.07 MBytes 8.95 Mbits/sec
> >>
> >> >
> >>
> >> > 92.649/89.987/95.036/1.828 ms (9/124314) 96.5 KByte 12.08
> >>
> >> >
> >>
> >> > 492=492:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 3] 1.00-2.00 sec 1.06 MBytes 8.93 Mbits/sec
> >>
> >> >
> >>
> >> > 96.305/95.647/96.794/0.432 ms (9/123992) 100 KByte 11.59
> >>
> >> >
> >>
> >> > 480=480:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 1] 1.00-2.00 sec 1.06 MBytes 8.89 Mbits/sec
> >>
> >> >
> >>
> >> > 92.578/90.866/94.145/1.371 ms (9/123510) 95.8 KByte 12.01
> >>
> >> >
> >>
> >> > 513=513:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 2] 1.00-2.00 sec 1.07 MBytes 8.96 Mbits/sec
> >>
> >> >
> >>
> >> > 90.767/87.984/94.352/1.944 ms (9/124475) 94.7 KByte 12.34
> >>
> >> >
> >>
> >> > 489=489:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [SUM] 1.00-2.00 sec 4.26 MBytes 35.7 Mbits/sec
> >>
> >> >
> >>
> >> > 1974=1974:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 4] 2.00-3.00 sec 1.09 MBytes 9.13 Mbits/sec
> >>
> >> >
> >>
> >> > 93.977/91.795/96.561/1.693 ms (8/142656) 112 KByte 12.14
> >>
> >> >
> >>
> >> > 497=497:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 3] 2.00-3.00 sec 1.08 MBytes 9.04 Mbits/sec
> >>
> >> >
> >>
> >> > 96.544/95.815/97.798/0.693 ms (8/141208) 114 KByte 11.70
> >>
> >> >
> >>
> >> > 503=503:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 1] 2.00-3.00 sec 1.07 MBytes 9.01 Mbits/sec
> >>
> >> >
> >>
> >> > 93.970/91.193/96.325/1.796 ms (8/140846) 111 KByte 11.99
> >>
> >> >
> >>
> >> > 509=509:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 2] 2.00-3.00 sec 1.08 MBytes 9.10 Mbits/sec
> >>
> >> >
> >>
> >> > 92.843/90.216/96.355/2.040 ms (8/142113) 111 KByte 12.25
> >>
> >> >
> >>
> >> > 509=509:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [SUM] 2.00-3.00 sec 4.32 MBytes 36.3 Mbits/sec
> >>
> >> >
> >>
> >> > 2018=2018:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 4] 3.00-4.00 sec 1.06 MBytes 8.86 Mbits/sec
> >>
> >> >
> >>
> >> > 93.222/89.063/96.104/2.346 ms (9/123027) 96.1 KByte 11.88
> >>
> >> >
> >>
> >> > 487=487:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 3] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
> >>
> >> >
> >>
> >> > 96.277/95.051/97.230/0.767 ms (9/124636) 101 KByte 11.65
> >>
> >> >
> >>
> >> > 489=489:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 1] 3.00-4.00 sec 1.08 MBytes 9.02 Mbits/sec
> >>
> >> >
> >>
> >> > 93.899/88.732/96.972/2.737 ms (9/125280) 98.6 KByte 12.01
> >>
> >> >
> >>
> >> > 493=493:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 2] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
> >>
> >> >
> >>
> >> > 92.490/89.862/95.265/1.796 ms (9/124636) 96.6 KByte 12.13
> >>
> >> >
> >>
> >> > 493=493:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [SUM] 3.00-4.00 sec 4.27 MBytes 35.8 Mbits/sec
> >>
> >> >
> >>
> >> > 1962=1962:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 4] 4.00-5.00 sec 1.07 MBytes 9.00 Mbits/sec
> >>
> >> >
> >>
> >> > 92.431/81.888/96.221/4.524 ms (9/124958) 96.8 KByte 12.17
> >>
> >> >
> >>
> >> > 498=498:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 1] 4.00-5.00 sec 1.07 MBytes 8.97 Mbits/sec
> >>
> >> >
> >>
> >> > 95.018/93.445/96.200/0.957 ms (9/124636) 99.3 KByte 11.81
> >>
> >> >
> >>
> >> > 490=490:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 2] 4.00-5.00 sec 1.06 MBytes 8.90 Mbits/sec
> >>
> >> >
> >>
> >> > 93.874/86.485/95.672/2.810 ms (9/123671) 97.3 KByte 11.86
> >>
> >> >
> >>
> >> > 481=481:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 3] 4.00-5.00 sec 1.08 MBytes 9.09 Mbits/sec
> >>
> >> >
> >>
> >> > 95.737/93.881/97.197/0.972 ms (9/126245) 101 KByte 11.87
> >>
> >> >
> >>
> >> > 484=484:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [SUM] 4.00-5.00 sec 4.29 MBytes 36.0 Mbits/sec
> >>
> >> >
> >>
> >> > 1953=1953:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 4] 5.00-6.00 sec 1.09 MBytes 9.13 Mbits/sec
> >>
> >> >
> >>
> >> > 92.908/86.844/95.994/3.012 ms (8/142656) 111 KByte 12.28
> >>
> >> >
> >>
> >> > 467=467:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 3] 5.00-6.00 sec 1.07 MBytes 8.94 Mbits/sec
> >>
> >> >
> >>
> >> > 96.593/95.343/97.660/0.876 ms (8/139760) 113 KByte 11.58
> >>
> >> >
> >>
> >> > 478=478:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 1] 5.00-6.00 sec 1.08 MBytes 9.03 Mbits/sec
> >>
> >> >
> >>
> >> > 95.021/91.421/97.167/1.893 ms (8/141027) 112 KByte 11.87
> >>
> >> >
> >>
> >> > 491=491:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 2] 5.00-6.00 sec 1.08 MBytes 9.06 Mbits/sec
> >>
> >> >
> >>
> >> > 92.162/82.720/97.692/5.060 ms (8/141570) 109 KByte 12.29
> >>
> >> >
> >>
> >> > 488=488:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [SUM] 5.00-6.00 sec 4.31 MBytes 36.2 Mbits/sec
> >>
> >> >
> >>
> >> > 1924=1924:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 4] 6.00-7.00 sec 1.04 MBytes 8.70 Mbits/sec
> >>
> >> >
> >>
> >> > 92.793/85.343/96.967/3.552 ms (9/120775) 93.9 KByte 11.71
> >>
> >> >
> >>
> >> > 485=485:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 2] 6.00-7.00 sec 1.05 MBytes 8.79 Mbits/sec
> >>
> >> >
> >>
> >> > 91.679/84.479/96.760/3.975 ms (9/122062) 93.8 KByte 11.98
> >>
> >> >
> >>
> >> > 472=472:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 3] 6.00-7.00 sec 1.06 MBytes 8.88 Mbits/sec
> >>
> >> >
> >>
> >> > 96.982/95.933/98.371/0.680 ms (9/123349) 100 KByte 11.45
> >>
> >> >
> >>
> >> > 477=477:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 1] 6.00-7.00 sec 1.05 MBytes 8.80 Mbits/sec
> >>
> >> >
> >>
> >> > 94.342/91.660/96.025/1.660 ms (9/122223) 96.7 KByte 11.66
> >>
> >> >
> >>
> >> > 494=494:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [SUM] 6.00-7.00 sec 4.19 MBytes 35.2 Mbits/sec
> >>
> >> >
> >>
> >> > 1928=1928:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 4] 7.00-8.00 sec 1.10 MBytes 9.25 Mbits/sec
> >>
> >> >
> >>
> >> > 92.515/88.182/96.351/2.538 ms (8/144466) 112 KByte 12.49
> >>
> >> >
> >>
> >> > 510=510:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 3] 7.00-8.00 sec 1.09 MBytes 9.13 Mbits/sec
> >>
> >> >
> >>
> >> > 96.580/95.737/98.977/1.098 ms (8/142656) 115 KByte 11.82
> >>
> >> >
> >>
> >> > 480=480:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 1] 7.00-8.00 sec 1.10 MBytes 9.21 Mbits/sec
> >>
> >> >
> >>
> >> > 95.269/91.719/97.514/2.126 ms (8/143923) 115 KByte 12.09
> >>
> >> >
> >>
> >> > 515=515:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [ 2] 7.00-8.00 sec 1.11 MBytes 9.29 Mbits/sec
> >>
> >> >
> >>
> >> > 90.073/84.700/96.176/4.324 ms (8/145190) 110 KByte 12.90
> >>
> >> >
> >>
> >> > 508=508:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > [SUM] 7.00-8.00 sec 4.40 MBytes 36.9 Mbits/sec
> >>
> >> >
> >>
> >> > 2013=2013:0:0:0:0:0:0:0
> >>
> >> >
> >>
> >> > Bob
> >>
> >> >
> >>
> >> >>> -----Original Message-----
> >> >>> From: LibreQoS <libreqos-bounces@lists.bufferbloat.net> On Behalf Of Dave Taht via LibreQoS
> >> >>> Sent: Wednesday, January 4, 2023 12:26 PM
> >> >>> Subject: [LibreQoS] the grinch meets cloudflare's christmas present
> >> >>>
> >> >>> Please try the new, the shiny, the really wonderful test here:
> >> >>> https://speed.cloudflare.com/ [1]
> >> >>>
> >> >>> I would really appreciate some independent verification of
> >> >>> measurements using this tool. In my brief experiments it appears - as
> >> >>> all the commercial tools to date - to dramatically understate the
> >> >>> bufferbloat, on my LTE, (and my starlink terminal is out being
> >> >>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
> >> >>
> >> >> [acm]
> >> >> Hi Dave, I made some time to test "cloudflare's christmas present" yesterday.
> >> >>
> >> >> I'm on DOCSIS 3.1 service with 1Gbps Down. The Upstream has a "turbo"
> >> >> mode with 40-50Mbps for the first ~3 sec, then steady-state about 23Mbps.
> >> >>
> >> >> When I saw the ~620Mbps Downstream measurement, I was ready to
> >> >> complain that even the IP-Layer Capacity was grossly underestimated.
> >> >> In addition, the Latency measurements seem very low (as you asserted),
> >> >> although the cloud server was "nearby".
> >> >>
> >> >> However, I found that Ookla and the ISP-provided measurement were also
> >> >> reporting ~600Mbps! So the cloudflare Downstream capacity (or
> >> >> throughput?) measurement was consistent with others. Our UDPST server
> >> >> was unreachable, otherwise I would have added that measurement, too.
> >> >>
> >> >> The Upstream measurement graph seems to illustrate the "turbo"
> >> >> mode, with the dip after attaining 44.5Mbps.
> >> >>
> >> >> UDPST saturates the uplink and we measure the full 250ms of the
> >> >> Upstream buffer. Cloudflare's latency measurements don't even come close.
> >> >>
> >> >> Al
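
(Editorial aside, not part of Al's message or UDPST output: as a rough sanity check, at the ~23 Mbps steady-state upstream rate quoted above, a 250 ms standing queue corresponds to roughly 700 kB of payload sitting in the upstream buffer. A minimal sketch of the arithmetic, with both numbers taken from the text above:

    rate_bps = 23_000_000      # assumed steady-state upstream rate, bits per second
    queue_delay_s = 0.250      # buffer depth reported by UDPST, in seconds
    buffered_bytes = rate_bps * queue_delay_s / 8
    print(f"~{buffered_bytes / 1e3:.0f} kB queued")   # -> ~719 kB
)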
> >> >>
> >> >> Links:
> >> >> ------
> >> >> [1] https://speed.cloudflare.com/
> >> >>
> >> >> _______________________________________________
> >> >> Rpm mailing list
> >> >> Rpm@lists.bufferbloat.net
> >> >> https://lists.bufferbloat.net/listinfo/rpm
> >> >
> >> > _______________________________________________
> >> > Starlink mailing list
> >> > Starlink@lists.bufferbloat.net
> >> > https://lists.bufferbloat.net/listinfo/starlink
> >>
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >
> > _______________________________________________
> > Rpm mailing list
> > Rpm@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/rpm
>
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
>
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
--
--
Jay Moran
http://linkedin.com/in/jaycmoran
[-- Attachment #2: Type: text/html, Size: 60975 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets cloudflare'schristmas present
2023-01-11 11:05 ` [Cake] [Bloat] [Rpm] " Jay Moran
@ 2023-01-12 16:03 ` Luis A. Cornejo
2023-01-12 16:12 ` Dave Taht
0 siblings, 1 reply; 26+ messages in thread
From: Luis A. Cornejo @ 2023-01-12 16:03 UTC (permalink / raw)
To: Jay Moran
Cc: Dave Taht, Cake List, IETF IPPM WG, MORTON JR.,
AL, Rpm, bloat, dickroy, libreqos
[-- Attachment #1.1: Type: text/plain, Size: 40668 bytes --]
Here is on Starlink:
On Wed, Jan 11, 2023 at 5:05 AM Jay Moran <jay@tp.org> wrote:
> Quick note from reading your blog entry.
>
> Last night, I played with the Cloudflare Speedtest a little. It downloads
> a 25MB and a 50MB (or 100MB, I can't remember) file as well on a "speedier"
> network after it does the 10MB file.
>
> I was getting 1.2Gbps down and 760Mbps up, 4ms of latency under load (LUL),
> and seeing those larger file sizes. I was trying to screenshot and noticed
> I had those extra file sizes I had to scroll down for. I ended up getting
> distracted and not taking the shot to send. But it will do a longer/bigger
> test under the right conditions.
>
> Network here at the house is AT&T Fiber, 5Gbps up/down - limited to 3.6Gbps
> down by the Ubiquiti UDM SE router/firewall with all IPS/Geo-blocking turned
> on, and 4.7Gbps non-blocking up. I am building a pfSense box to eliminate the
> bottleneck. Couldn't be happier, good job AS7018.
>
> The machine I was testing from was Win10, wired 10Gbps, and gets ~2.2Gbps
> up/down on fast.com/speedtest.net. I haven't taken the time to test
> internally or to tune that system, or it might be a CAT5e cabling issue; it
> is fast enough for me for that system.
>
> Jay
>
> On Wed, Jan 11, 2023 at 12:07 AM Dave Taht via Bloat <
> bloat@lists.bufferbloat.net> wrote:
>
>> Dear Luis:
>>
>> You hit 17 seconds of delay on your test.
>>
>> I got you beat, today, on my LTE connection, I cracked 182 seconds.
>>
>> I'd like to thank Verizon for making it possible for me to spew 4000
>> words on my kvetches about the current speedtest regimes of speedtest,
>> cloudflare, and so on, by making my network connection so lousy today
>> that I sat in front of emacs to rant - and y'all for helping tone
>> down, a little, this blog entry:
>>
>> https://blog.cerowrt.org/post/speedtests/
>>
>> On Tue, Jan 10, 2023 at 9:25 AM Luis A. Cornejo via Rpm
>> <rpm@lists.bufferbloat.net> wrote:
>> >
>> > Here is my VZ HSI
>> >
>> >
>> > No SQMm on
>> >
>> > On Sat, Jan 7, 2023 at 6:38 PM Dick Roy via Bloat <
>> bloat@lists.bufferbloat.net> wrote:
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> -----Original Message-----
>> >> From: rjmcmahon [mailto:rjmcmahon@rjmcmahon.com]
>> >> Sent: Friday, January 6, 2023 3:45 PM
>> >> To: dickroy@alum.mit.edu
>> >> Cc: 'MORTON JR., AL'; 'IETF IPPM WG'; 'libreqos'; 'Cake List'; 'Rpm'; 'bloat'
>> >> Subject: Re: [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare'schristmas present
>> >>
>> >> yeah, I'd prefer not to output CLT sample groups at all, but the
>> >> histograms aren't really human readable and users constantly ask for
>> >> them. I thought about providing a distance from the gaussian as output
>> >> too, but so far few would understand it and nobody I found would act
>> >> upon it.
>> >>
>> >> [RR] Understandable until such metrics are "actionable", and that's
>> >> "up to us to find/define/figure out" it seems to me. Metrics that are not
>> >> actionable are write-only memory and good for little but the historical record :-)
>> >>
>> >> The tool produces the full histograms so no information is really
>> >> missing, except for maybe better time series analysis.
>> >>
>> >> [RR] Isn't that in fact what we are trying to extract from the e2e
>> >> stats we collect? i.e., infer the time evolution of the system from its
>> >> I/O behavior? As you point out, it's really hard to do without probes in
>> >> the guts of the system, and yes, synchronization is important :-)
>> >>
>> >> The open source flows python code also released with iperf 2 does use
>> >> the Kolmogorov-Smirnov distances & distance matrices to cluster when the
>> >> number of histograms is just too large. We've analyzed 1M runs to fault
>> >> isolate the "unexpected interruptions" or "bugs", and without statistical
>> >> support it is just not doable. This does require instrumentation of the
>> >> full path with mapping to a common clock domain (e.g. GPS) and not just
>> >> e2e stats. I find an e2e complaint by an end user about "poor speed" as
>> >> useful as telling a pharmacist I have a fever. Not much diagnostically
>> >> is going on. Take an aspirin.
>> >>
>> >> [RR] That's AWESOME!!!!!!!!!!!!!!!!!! I love that analogy!
>> >>
>> >> RR
>> >>
>> >> https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
>> >> https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/flows.py
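
(Editorial sketch, not the flows.py implementation: the clustering idea described above, pairwise two-sample Kolmogorov-Smirnov distances between per-run latency samples followed by hierarchical clustering on the distance matrix, looks roughly like this in Python with scipy. The function names, toy data and threshold are illustrative only.

    import numpy as np
    from scipy.stats import ks_2samp
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def ks_distance_matrix(samples):
        """Pairwise two-sample KS statistics between per-run latency samples (ms)."""
        n = len(samples)
        d = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                d[i, j] = d[j, i] = ks_2samp(samples[i], samples[j]).statistic
        return d

    def cluster_runs(samples, max_distance=0.2):
        # Average-linkage clustering on the KS distance matrix; threshold is illustrative.
        z = linkage(squareform(ks_distance_matrix(samples)), method="average")
        return fcluster(z, t=max_distance, criterion="distance")

    # Toy example: three similar runs plus one "unexpected interruption".
    rng = np.random.default_rng(1)
    runs = [rng.normal(95, 3, 500) for _ in range(3)] + [rng.normal(160, 30, 500)]
    print(cluster_runs(runs))   # the outlier run lands in its own cluster
)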
>> >>
>> >> Bob
>> >>
>> >> > See below …
>> >> >
>> >> > -----Original Message-----
>> >> > From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On
>> >> > Behalf Of rjmcmahon via Starlink
>> >> > Sent: Friday, January 6, 2023 12:39 PM
>> >> > To: MORTON JR., AL
>> >> > Cc: Dave Taht via Starlink; IETF IPPM WG; libreqos; Cake List; Rpm; bloat
>> >> > Subject: Re: [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare'schristmas present
>> >> >
>> >> > Some thoughts are not to use UDP for testing here. Also, these speed
>> >> > tests have little to no information for network engineers about what's
>> >> > going on. Iperf 2 may better assist network engineers, but then I'm
>> >> > biased ;)
>> >> >
>> >> > Running iperf 2 https://sourceforge.net/projects/iperf2/ with
>> >> > --trip-times. Though the sampling and central limit theorem averaging is
>> >> > hiding the real distributions (use --histograms to get those).
>> >> >
>> >> > [RR] FWIW (IMNBWM :-)) … If the output/final histograms indicate the
>> >> > PDF is NOT Gaussian, then any application of the CLT is
>> >> > inappropriate/contra-indicated! The CLT is a "proof", under certain
>> >> > regularity conditions/assumptions on the underlying/constituent PDFs,
>> >> > that the resulting PDF (after all the necessary convolutions are
>> >> > performed to get to the PDF of the output) will asymptotically approach
>> >> > a Gaussian with only a mean and a std. dev. left to specify.
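
(Editorial note: the classical i.i.d. form of the CLT that [RR] is describing can be written, for independent samples X_1, ..., X_n with mean \mu and finite variance \sigma^2, as

    \sqrt{n}\,\bigl(\bar{X}_n - \mu\bigr) \;\xrightarrow{d}\; \mathcal{N}(0,\sigma^2),
    \qquad \bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i ,

so interval averages are only guaranteed to look Gaussian when the underlying per-sample latencies are close to i.i.d. with finite variance, which is exactly the regularity assumption being questioned here.)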
>> >> >
>> >> > Below are 4 parallel TCP streams from my home to one of my servers in
>> >> > the cloud. First where TCP is limited per CCA. Second is source side
>> >> > write rate limiting. Things to note:
>> >> >
>> >> > o) connect times for both at 10-15 ms
>> >> > o) multiple TCP retries on a few writes - one case is 4 retries with 5
>> >> > writes. Source side pacing eliminates retries
>> >> > o) Fairness with CCA isn't great but quite good with source side write pacing
>> >> > o) Queue depth with CCA is about 150 KBytes vs about 100 KBytes with
>> >> > source side pacing
>> >> > o) min write to read is about 80 ms for both
>> >> > o) max is 220 ms vs 97 ms
>> >> > o) stdev for CCA write/read is 30 ms vs 3 ms
>> >> > o) TCP RTT is 20ms w/CCA and 90 ms with ssp - seems odd here as
>> >> > TCP_QUICKACK and TCP_NODELAY are both enabled.
>> >> >
>> >> > [ CT] final connect times (min/avg/max/stdev) = 10.326/13.522/14.986/2150.329 ms (tot/err) = 4/0
>> >> >
>> >> > [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e --trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N
>> >> > ------------------------------------------------------------
>> >> > Client connecting to (**hidden**), TCP port 5001 with pid 107678 (4 flows)
>> >> > Write buffer size: 131072 Byte
>> >> > TOS set to 0x0 and nodelay (Nagle off)
>> >> > TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
>> >> > Event based writes (pending queue watermark at 16384 bytes)
>> >> > ------------------------------------------------------------
>> >> > [ 1] local *.*.*.85%enp4s0 port 42480 connected with *.*.*.123 port 5001 (prefetch=16384) (trip-times) (sock=3) (qack) (icwnd/mss/irtt=14/1448/10534) (ct=10.63 ms) on 2023-01-06 12:17:56 (PST)
>> >> > [ 4] local *.*.*.85%enp4s0 port 42488 connected with *.*.*.123 port 5001 (prefetch=16384) (trip-times) (sock=5) (qack) (icwnd/mss/irtt=14/1448/14023) (ct=14.08 ms) on 2023-01-06 12:17:56 (PST)
>> >> > [ 3] local *.*.*.85%enp4s0 port 42502 connected with *.*.*.123 port 5001 (prefetch=16384) (trip-times) (sock=6) (qack) (icwnd/mss/irtt=14/1448/14642) (ct=14.70 ms) on 2023-01-06 12:17:56 (PST)
>> >> > [ 2] local *.*.*.85%enp4s0 port 42484 connected with *.*.*.123 port 5001 (prefetch=16384) (trip-times) (sock=4) (qack) (icwnd/mss/irtt=14/1448/14728) (ct=14.79 ms) on 2023-01-06 12:17:56 (PST)
>> >> > [ ID] Interval Transfer Bandwidth Write/Err Rtry Cwnd/RTT(var) NetPwr
>> >> > ...
>> >> > [ 4] 4.00-5.00 sec 1.38 MBytes 11.5 Mbits/sec 11/0 3 29K/21088(1142) us 68.37
>> >> > [ 2] 4.00-5.00 sec 1.62 MBytes 13.6 Mbits/sec 13/0 2 31K/19284(612) us 88.36
>> >> > [ 1] 4.00-5.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5 16K/18996(658) us 48.30
>> >> > [ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 5 18K/18133(208) us 57.83
>> >> > [SUM] 4.00-5.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 15
>> >> > [ 4] 5.00-6.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 4 29K/14717(489) us 89.06
>> >> > [ 1] 5.00-6.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 4 16K/15874(408) us 66.06
>> >> > [ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4 16K/15826(382) us 74.54
>> >> > [ 2] 5.00-6.00 sec 1.50 MBytes 12.6 Mbits/sec 12/0 6 9K/14878(557) us 106
>> >> > [SUM] 5.00-6.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 18
>> >> > [ 4] 6.00-7.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4 25K/15472(496) us 119
>> >> > [ 2] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 2 26K/16417(427) us 63.87
>> >> > [ 1] 6.00-7.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 5 16K/16268(679) us 80.57
>> >> > [ 3] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 6 15K/16629(799) us 63.06
>> >> > [SUM] 6.00-7.00 sec 5.00 MBytes 41.9 Mbits/sec 40/0 17
>> >> > [ 4] 7.00-8.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4 22K/13986(519) us 131
>> >> > [ 1] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4 16K/12679(377) us 93.04
>> >> > [ 3] 7.00-8.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5 14K/12971(367) us 70.74
>> >> > [ 2] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 6 15K/14740(779) us 80.03
>> >> > [SUM] 7.00-8.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 19
>> >> >
>> >> > [root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
>> >> > ------------------------------------------------------------
>> >> > Server listening on TCP port 5001 with pid 233615
>> >> > Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
>> >> > TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
>> >> > ------------------------------------------------------------
>> >> > [ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42480 (trip-times) (sock=4) (peer 2.1.9-master) (qack) (icwnd/mss/irtt=14/1448/11636) on 2023-01-06 12:17:56 (PST)
>> >> > [ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42502 (trip-times) (sock=5) (peer 2.1.9-master) (qack) (icwnd/mss/irtt=14/1448/11898) on 2023-01-06 12:17:56 (PST)
>> >> > [ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42484 (trip-times) (sock=6) (peer 2.1.9-master) (qack) (icwnd/mss/irtt=14/1448/11938) on 2023-01-06 12:17:56 (PST)
>> >> > [ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42488 (trip-times) (sock=7) (peer 2.1.9-master) (qack) (icwnd/mss/irtt=14/1448/11919) on 2023-01-06 12:17:56 (PST)
>> >> > [... remainder of the quoted iperf 2 output and the re-quoted earlier messages snipped; identical to the text quoted in the previous message above ...]
>> >>
>> >> _______________________________________________
>> >> Bloat mailing list
>> >> Bloat@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/bloat
>> >
>> > _______________________________________________
>> > Rpm mailing list
>> > Rpm@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/rpm
>>
>>
>>
>> --
>> This song goes out to all the folk that thought Stadia would work:
>>
>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
> --
> --
> Jay Moran
> http://linkedin.com/in/jaycmoran
>
[-- Attachment #1.2: Type: text/html, Size: 61304 bytes --]
[-- Attachment #2: Web capture_12-1-2023_1018_speed.cloudflare.com.jpeg --]
[-- Type: image/jpeg, Size: 222440 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-12 16:03 ` Luis A. Cornejo
@ 2023-01-12 16:12 ` Dave Taht
2023-01-12 16:21 ` Luis A. Cornejo
2023-01-12 17:43 ` MORTON JR., AL
0 siblings, 2 replies; 26+ messages in thread
From: Dave Taht @ 2023-01-12 16:12 UTC (permalink / raw)
To: Luis A. Cornejo
Cc: Jay Moran, Cake List, IETF IPPM WG, MORTON JR.,
AL, Rpm, bloat, dickroy, libreqos
Either starlink has vastly improved, or the test is way off in this case.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-12 16:12 ` Dave Taht
@ 2023-01-12 16:21 ` Luis A. Cornejo
2023-01-12 17:43 ` MORTON JR., AL
1 sibling, 0 replies; 26+ messages in thread
From: Luis A. Cornejo @ 2023-01-12 16:21 UTC (permalink / raw)
To: Dave Taht
Cc: Jay Moran, Cake List, IETF IPPM WG, MORTON JR.,
AL, Rpm, bloat, dickroy, libreqos
[-- Attachment #1.1: Type: text/plain, Size: 707 bytes --]
I have noticed that in the AM things tend to be much more acceptable. I'll
see what I can do for a PM run; anecdotally, from 5 p.m. to 10:30 p.m. the
bandwidth definitely drops closer to 50/3 on average and the latency
suffers as well, though not as badly as it used to, so it is reliably
better for me. I am in a fairly rural cell in Walker Co., TX, but then
again there are a lot of rural Texans. Meanwhile, here is another
cloudflare run and a waveform one too:
https://www.waveform.com/tools/bufferbloat?test-id=daeaefc6-23fe-43d8-9826-210cfb381f05
On Thu, Jan 12, 2023 at 10:12 AM Dave Taht <dave.taht@gmail.com> wrote:
> Either starlink has vastly improved, or the test is way off in this case.
>
[-- Attachment #1.2: Type: text/html, Size: 1156 bytes --]
[-- Attachment #2: Web capture_12-1-2023_101327_speed.cloudflare.com.jpeg --]
[-- Type: image/jpeg, Size: 219644 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-12 16:12 ` Dave Taht
2023-01-12 16:21 ` Luis A. Cornejo
@ 2023-01-12 17:43 ` MORTON JR., AL
2023-01-13 3:30 ` Luis A. Cornejo
2023-01-13 3:31 ` Luis A. Cornejo
1 sibling, 2 replies; 26+ messages in thread
From: MORTON JR., AL @ 2023-01-12 17:43 UTC (permalink / raw)
To: Dave Taht, Luis A. Cornejo
Cc: Jay Moran, Cake List, IETF IPPM WG, Rpm, bloat, dickroy, libreqos
Dave and Luis,
Do you know if any of these tools are using ~random payloads, to defeat compression?
UDPST has a CLI option:
(m) -X Randomize datagram payload (else zeroes)
When I used this option testing shipboard satellite access, download was about 115kbps.
Al
> -----Original Message-----
> From: Dave Taht <dave.taht@gmail.com>
> Sent: Thursday, January 12, 2023 11:12 AM
> To: Luis A. Cornejo <luis.a.cornejo@gmail.com>
> Cc: Jay Moran <jay@tp.org>; Cake List <cake@lists.bufferbloat.net>; IETF IPPM
> WG <ippm@ietf.org>; MORTON JR., AL <acmorton@att.com>; Rpm
> <rpm@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>;
> dickroy@alum.mit.edu; libreqos <libreqos@lists.bufferbloat.net>
> Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
> cloudflare'schristmas present
>
> Either starlink has vastly improved, or the test is way off in this case.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-12 17:43 ` MORTON JR., AL
@ 2023-01-13 3:30 ` Luis A. Cornejo
2023-01-13 3:47 ` Luis A. Cornejo
2023-01-13 4:01 ` Dave Taht
2023-01-13 3:31 ` Luis A. Cornejo
1 sibling, 2 replies; 26+ messages in thread
From: Luis A. Cornejo @ 2023-01-13 3:30 UTC (permalink / raw)
To: MORTON JR., AL
Cc: Dave Taht, Jay Moran, Cake List, IETF IPPM WG, Rpm, bloat,
dickroy, libreqos
[-- Attachment #1.1: Type: text/plain, Size: 1256 bytes --]
Well Reddit has many posts talking about noticeable performance increases
for Starlink. Here is a primetime run:
waveform:
https://www.waveform.com/tools/bufferbloat?test-id=333f97c7-7cbd-406c-8d9a-9f850cb5de7d
cloudflare attached
On Thu, Jan 12, 2023 at 11:43 AM MORTON JR., AL <acmorton@att.com> wrote:
> Dave and Luis,
>
> Do you know if any of these tools are using ~random payloads, to defeat
> compression?
>
> UDPST has a CLI option:
> (m) -X Randomize datagram payload (else zeroes)
>
> When I used this option testing shipboard satellite access, download was
> about 115kbps.
>
> Al
>
> > -----Original Message-----
> > From: Dave Taht <dave.taht@gmail.com>
> > Sent: Thursday, January 12, 2023 11:12 AM
> > To: Luis A. Cornejo <luis.a.cornejo@gmail.com>
> > Cc: Jay Moran <jay@tp.org>; Cake List <cake@lists.bufferbloat.net>;
> IETF IPPM
> > WG <ippm@ietf.org>; MORTON JR., AL <acmorton@att.com>; Rpm
> > <rpm@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>;
> > dickroy@alum.mit.edu; libreqos <libreqos@lists.bufferbloat.net>
> > Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
> > cloudflare'schristmas present
> >
> > Either starlink has vastly improved, or the test is way off in this case.
>
[-- Attachment #1.2: Type: text/html, Size: 2498 bytes --]
[-- Attachment #2: Web capture_12-1-2023_212657_speed.cloudflare.com.jpeg --]
[-- Type: image/jpeg, Size: 181599 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-12 17:43 ` MORTON JR., AL
2023-01-13 3:30 ` Luis A. Cornejo
@ 2023-01-13 3:31 ` Luis A. Cornejo
1 sibling, 0 replies; 26+ messages in thread
From: Luis A. Cornejo @ 2023-01-13 3:31 UTC (permalink / raw)
To: MORTON JR., AL
Cc: Dave Taht, Jay Moran, Cake List, IETF IPPM WG, Rpm, bloat,
dickroy, libreqos
[-- Attachment #1: Type: text/plain, Size: 1075 bytes --]
Al,
I am not aware of the payload generation.
-Luis
On Thu, Jan 12, 2023 at 11:43 AM MORTON JR., AL <acmorton@att.com> wrote:
> Dave and Luis,
>
> Do you know if any of these tools are using ~random payloads, to defeat
> compression?
>
> UDPST has a CLI option:
> (m) -X Randomize datagram payload (else zeroes)
>
> When I used this option testing shipboard satellite access, download was
> about 115kbps.
>
> Al
>
> > -----Original Message-----
> > From: Dave Taht <dave.taht@gmail.com>
> > Sent: Thursday, January 12, 2023 11:12 AM
> > To: Luis A. Cornejo <luis.a.cornejo@gmail.com>
> > Cc: Jay Moran <jay@tp.org>; Cake List <cake@lists.bufferbloat.net>;
> IETF IPPM
> > WG <ippm@ietf.org>; MORTON JR., AL <acmorton@att.com>; Rpm
> > <rpm@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>;
> > dickroy@alum.mit.edu; libreqos <libreqos@lists.bufferbloat.net>
> > Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
> > cloudflare'schristmas present
> >
> > Either starlink has vastly improved, or the test is way off in this case.
>
[-- Attachment #2: Type: text/html, Size: 2173 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-13 3:30 ` Luis A. Cornejo
@ 2023-01-13 3:47 ` Luis A. Cornejo
2023-01-13 4:01 ` Dave Taht
1 sibling, 0 replies; 26+ messages in thread
From: Luis A. Cornejo @ 2023-01-13 3:47 UTC (permalink / raw)
To: MORTON JR., AL
Cc: Dave Taht, Jay Moran, Cake List, IETF IPPM WG, Rpm, bloat,
dickroy, libreqos
[-- Attachment #1.1: Type: text/plain, Size: 1507 bytes --]
Here is my VZ HSI (LTE)
https://www.waveform.com/tools/bufferbloat?test-id=eaa13c1c-8cd1-48b2-8527-7a0d35772aba
On Thu, Jan 12, 2023 at 9:28 PM Luis A. Cornejo <luis.a.cornejo@gmail.com>
wrote:
> Well Reddit has many posts talking about noticeable performance increases
> for Starlink. Here is a primetime run:
>
> waveform:
>
> https://www.waveform.com/tools/bufferbloat?test-id=333f97c7-7cbd-406c-8d9a-9f850cb5de7d
>
> cloudflare attached
>
>
>
> On Thu, Jan 12, 2023 at 11:43 AM MORTON JR., AL <acmorton@att.com> wrote:
>
>> Dave and Luis,
>>
>> Do you know if any of these tools are using ~random payloads, to defeat
>> compression?
>>
>> UDPST has a CLI option:
>> (m) -X Randomize datagram payload (else zeroes)
>>
>> When I used this option testing shipboard satellite access, download was
>> about 115kbps.
>>
>> Al
>>
>> > -----Original Message-----
>> > From: Dave Taht <dave.taht@gmail.com>
>> > Sent: Thursday, January 12, 2023 11:12 AM
>> > To: Luis A. Cornejo <luis.a.cornejo@gmail.com>
>> > Cc: Jay Moran <jay@tp.org>; Cake List <cake@lists.bufferbloat.net>;
>> IETF IPPM
>> > WG <ippm@ietf.org>; MORTON JR., AL <acmorton@att.com>; Rpm
>> > <rpm@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>;
>> > dickroy@alum.mit.edu; libreqos <libreqos@lists.bufferbloat.net>
>> > Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
>> > cloudflare'schristmas present
>> >
>> > Either starlink has vastly improved, or the test is way off in this
>> case.
>>
>
[-- Attachment #1.2: Type: text/html, Size: 3178 bytes --]
[-- Attachment #2: Screenshot 2023-01-12 at 21-46-12 Internet Speed Test - Measure Network Performance Cloudflare.png --]
[-- Type: image/png, Size: 641624 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-13 3:30 ` Luis A. Cornejo
2023-01-13 3:47 ` Luis A. Cornejo
@ 2023-01-13 4:01 ` Dave Taht
2023-01-13 14:09 ` Luis A. Cornejo
1 sibling, 1 reply; 26+ messages in thread
From: Dave Taht @ 2023-01-13 4:01 UTC (permalink / raw)
To: Luis A. Cornejo
Cc: MORTON JR.,
AL, Jay Moran, Cake List, IETF IPPM WG, Rpm, bloat, dickroy,
libreqos
On Thu, Jan 12, 2023 at 7:30 PM Luis A. Cornejo
<luis.a.cornejo@gmail.com> wrote:
>
> Well Reddit has many posts talking about noticeable performance increases for Starlink. Here is a primetime run:
>
> waveform:
> https://www.waveform.com/tools/bufferbloat?test-id=333f97c7-7cbd-406c-8d9a-9f850cb5de7d
That is unquestionably the best result I have ever seen for starlink.
Are you in a position to take a packet capture
of the waveform test, or try some flent based tests?
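A rough sketch of what such a capture and flent run could look like, with placeholder host and interface names (flent's rrul test plus a tcpdump capture taken alongside the browser test are the usual approach):

    # capture the waveform/cloudflare browser test from the router or a wired host
    tcpdump -i eth0 -s 128 -w waveform-starlink.pcap

    # or a 60-second flent rrul run against a nearby netperf/flent server
    flent rrul -l 60 -H flent-server.example.net -t "starlink-primetime" -o starlink-rrul.png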
> cloudflare attached
>
>
>
> On Thu, Jan 12, 2023 at 11:43 AM MORTON JR., AL <acmorton@att.com> wrote:
>>
>> Dave and Luis,
>>
>> Do you know if any of these tools are using ~random payloads, to defeat compression?
>>
>> UDPST has a CLI option:
>> (m) -X Randomize datagram payload (else zeroes)
>>
>> When I used this option testing shipboard satellite access, download was about 115kbps.
>>
>> Al
>>
>> > -----Original Message-----
>> > From: Dave Taht <dave.taht@gmail.com>
>> > Sent: Thursday, January 12, 2023 11:12 AM
>> > To: Luis A. Cornejo <luis.a.cornejo@gmail.com>
>> > Cc: Jay Moran <jay@tp.org>; Cake List <cake@lists.bufferbloat.net>; IETF IPPM
>> > WG <ippm@ietf.org>; MORTON JR., AL <acmorton@att.com>; Rpm
>> > <rpm@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>;
>> > dickroy@alum.mit.edu; libreqos <libreqos@lists.bufferbloat.net>
>> > Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
>> > cloudflare'schristmas present
>> >
>> > Either starlink has vastly improved, or the test is way off in this case.
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Cake] [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-13 4:01 ` Dave Taht
@ 2023-01-13 14:09 ` Luis A. Cornejo
0 siblings, 0 replies; 26+ messages in thread
From: Luis A. Cornejo @ 2023-01-13 14:09 UTC (permalink / raw)
To: Dave Taht
Cc: MORTON JR.,
AL, Jay Moran, Cake List, IETF IPPM WG, Rpm, bloat, dickroy,
libreqos
[-- Attachment #1.1: Type: text/plain, Size: 3553 bytes --]
Dave,
I should be able to do that. Might be over the long weekend. I had flent
working on OS X but after an OS update something broke. My Linux box is
down.
Meanwhile, in VZ HSI LTE land, I got a new IP address overnight on a
different class A network (same AS, no restart). I get this:
cloudflare:
[image: Screenshot 2023-01-13 at 08-04-34 Internet Speed Test - Measure
Network Performance Cloudflare.png]
waveform:
https://www.waveform.com/tools/bufferbloat?test-id=bd07c5ed-7e38-41b0-b0db-d1325cbe189d
Not sure if they lit up a new band, enabled carrier aggregation, or what on
my tower, but they really upped the bandwidth! Along with the bufferbloat.
If I put on some SQM with CAKE:
qdisc cake 802a: dev eth2 root refcnt 9 bandwidth 20Mbit diffserv4
dual-srchost nat nowash ack-filter split-gso rtt 100ms noatm overhead 34
qdisc cake 802b: dev ifb4eth2 root refcnt 2 bandwidth 110Mbit diffserv4
dual-dsthost nat wash ingress no-ack-filter split-gso rtt 100ms noatm
overhead 34
cloudflare:
[image: Screenshot 2023-01-13 at 07-53-38 Internet Speed Test - Measure
Network Performance Cloudflare.png]
waveform:
https://www.waveform.com/tools/bufferbloat?test-id=572aaf52-67d7-4fbe-b88b-83fd4525c713
It is almost like VZ has been tracking me and realized that they had to up
their game =P. I'll see how it continues throughout the day with this much
more bandwidth; I am more than willing to sacrifice a bit of it for no
bufferbloat. I guess I can resort to the autorate script as well if the
available bandwidth starts to fluctuate too much.
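For anyone wanting to reproduce a configuration like the one above by hand (sqm-scripts does the equivalent automatically), here is a rough sketch of the tc/ip commands, assuming eth2 is the WAN interface and using the rates and overhead from the qdisc dump above:

    # egress: shape uploads to 20 Mbit with cake directly on the WAN interface
    tc qdisc replace dev eth2 root cake bandwidth 20Mbit diffserv4 dual-srchost \
        nat nowash ack-filter split-gso rtt 100ms noatm overhead 34

    # ingress: redirect incoming traffic through an IFB device and shape it to 110 Mbit
    ip link add ifb4eth2 type ifb
    ip link set ifb4eth2 up
    tc qdisc add dev eth2 handle ffff: ingress
    tc filter add dev eth2 parent ffff: matchall action mirred egress redirect dev ifb4eth2
    tc qdisc replace dev ifb4eth2 root cake bandwidth 110Mbit diffserv4 dual-dsthost \
        nat wash ingress no-ack-filter split-gso rtt 100ms noatm overhead 34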
On Thu, Jan 12, 2023 at 10:01 PM Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, Jan 12, 2023 at 7:30 PM Luis A. Cornejo
> <luis.a.cornejo@gmail.com> wrote:
> >
> > Well Reddit has many posts talking about noticeable performance
> increases for Starlink. Here is a primetime run:
> >
> > waveform:
> >
> https://www.waveform.com/tools/bufferbloat?test-id=333f97c7-7cbd-406c-8d9a-9f850cb5de7d
>
> That is unquestionably the best result I have ever seen for starlink.
> Are you in a position to take a packet capture
> of the waveform test, or try some flent based tests?
>
> > cloudflare attached
> >
> >
> >
> > On Thu, Jan 12, 2023 at 11:43 AM MORTON JR., AL <acmorton@att.com>
> wrote:
> >>
> >> Dave and Luis,
> >>
> >> Do you know if any of these tools are using ~random payloads, to defeat
> compression?
> >>
> >> UDPST has a CLI option:
> >> (m) -X Randomize datagram payload (else zeroes)
> >>
> >> When I used this option testing shipboard satellite access, download
> was about 115kbps.
> >>
> >> Al
> >>
> >> > -----Original Message-----
> >> > From: Dave Taht <dave.taht@gmail.com>
> >> > Sent: Thursday, January 12, 2023 11:12 AM
> >> > To: Luis A. Cornejo <luis.a.cornejo@gmail.com>
> >> > Cc: Jay Moran <jay@tp.org>; Cake List <cake@lists.bufferbloat.net>;
> IETF IPPM
> >> > WG <ippm@ietf.org>; MORTON JR., AL <acmorton@att.com>; Rpm
> >> > <rpm@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>;
> >> > dickroy@alum.mit.edu; libreqos <libreqos@lists.bufferbloat.net>
> >> > Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
> >> > cloudflare'schristmas present
> >> >
> >> > Either starlink has vastly improved, or the test is way off in this
> case.
>
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
>
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
>
[-- Attachment #1.2: Type: text/html, Size: 5768 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 07-53-38 Internet Speed Test - Measure Network Performance Cloudflare.png --]
[-- Type: image/png, Size: 658097 bytes --]
[-- Attachment #3: Screenshot 2023-01-13 at 08-04-34 Internet Speed Test - Measure Network Performance Cloudflare.png --]
[-- Type: image/png, Size: 664382 bytes --]
^ permalink raw reply [flat|nested] 26+ messages in thread
end of thread, other threads:[~2023-01-13 14:09 UTC | newest]
Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-04 17:26 [Cake] the grinch meets cloudflare's christmas present Dave Taht
2023-01-04 19:20 ` [Cake] [Rpm] " jf
2023-01-04 20:02 ` rjmcmahon
2023-01-05 11:11 ` [Cake] [Bloat] " Sebastian Moeller
2023-01-06  0:30 ` [Cake] [Starlink] [Bloat] [Rpm] the grinch meets cloudflare's christmas present Dick Roy
2023-01-06 2:33 ` rjmcmahon
2023-01-06 9:55 ` Sebastian Moeller
2023-01-05 4:25 ` [Cake] [Starlink] [Rpm] the grinch meets cloudflare's christmas present Dick Roy
2023-01-06 16:38 ` [Cake] [LibreQoS] " MORTON JR., AL
2023-01-06 20:38 ` [Cake] [Rpm] " rjmcmahon
2023-01-06 20:47 ` rjmcmahon
2023-01-06 23:29 ` [Cake] [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare's christmas present Dick Roy
2023-01-06 23:45 ` rjmcmahon
2023-01-07 0:31 ` Dick Roy
2023-01-10 17:25 ` [Cake] [Bloat] " Luis A. Cornejo
2023-01-11 5:07 ` [Cake] [Rpm] [Bloat] [Starlink] " Dave Taht
2023-01-11 11:05 ` [Cake] [Bloat] [Rpm] " Jay Moran
2023-01-12 16:03 ` Luis A. Cornejo
2023-01-12 16:12 ` Dave Taht
2023-01-12 16:21 ` Luis A. Cornejo
2023-01-12 17:43 ` MORTON JR., AL
2023-01-13 3:30 ` Luis A. Cornejo
2023-01-13 3:47 ` Luis A. Cornejo
2023-01-13 4:01 ` Dave Taht
2023-01-13 14:09 ` Luis A. Cornejo
2023-01-13 3:31 ` Luis A. Cornejo
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox