* [Starlink] the grinch meets cloudflare's christmas present
@ 2023-01-04 17:26 Dave Taht
2023-01-04 19:20 ` [Starlink] [Rpm] " jf
2023-01-06 16:38 ` [Starlink] [LibreQoS] " MORTON JR., AL
0 siblings, 2 replies; 49+ messages in thread
From: Dave Taht @ 2023-01-04 17:26 UTC (permalink / raw)
To: bloat, libreqos, Cake List, Dave Taht via Starlink, Rpm, IETF IPPM WG
[-- Attachment #1: Type: text/plain, Size: 1383 bytes --]
Please try the new, the shiny, the really wonderful test here:
https://speed.cloudflare.com/
I would really appreciate some independent verification of
measurements using this tool. In my brief experiments it appears, as
do all the commercial tools to date, to dramatically understate the
bufferbloat on my LTE link (and my starlink terminal is out being
hacked^H^H^H^H^H^Hworked on, so I can't measure that).
My test of their test reports 223 ms of 5G latency under load, where
flent reports over 2 seconds. See the attached comparison.
My guess is that this otherwise lovely new tool, like too many,
doesn't run for long enough. Admittedly, most web objects (their
target market) are small, and so long as they remain small and not
heavily pipelined this test is a very good start... but I'm pretty
sure cloudflare is used for bigger uploads and downloads than that.
There's no way to change the test to run longer, either.
I'd love to get some results from other networks (compared, as usual,
to flent), especially ones with cake on them. I'd also love to know
whether they correctly measure the lower minimum RTTs that can be
obtained with fq_codel or cake.
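For anyone who wants to reproduce the flent side of the comparison, here
is a minimal Python sketch; it assumes flent and netperf are installed,
and netperf.example.net is a hypothetical netserver host (substitute your
own):

    # Run the same 60-second tcp_nup test as the attached .flent.gz;
    # flent samples latency for the whole run, so queues have time to fill.
    import subprocess

    HOST = "netperf.example.net"  # hypothetical test server

    subprocess.run([
        "flent", "tcp_nup",
        "-l", "60",                 # run long enough to expose the bloat
        "-H", HOST,
        "-t", "lte-vs-cloudflare",  # title embedded in the plot
        "-o", "tcp_nup_lte.png",    # write the plot to a file
    ], check=True)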
Love Always,
The Grinch
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #2: image.png --]
[-- Type: image/png, Size: 256990 bytes --]
[-- Attachment #3: tcp_nup-2023-01-04T090937.211620.LTE.flent.gz --]
[-- Type: application/gzip, Size: 25192 bytes --]
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 17:26 [Starlink] the grinch meets cloudflare's christmas present Dave Taht
@ 2023-01-04 19:20 ` jf
2023-01-04 20:02 ` rjmcmahon
2023-01-06 16:38 ` [Starlink] [LibreQoS] " MORTON JR., AL
1 sibling, 1 reply; 49+ messages in thread
From: jf @ 2023-01-04 19:20 UTC (permalink / raw)
To: Dave Taht
Cc: bloat, libreqos, Cake List, Dave Taht via Starlink, Rpm, IETF IPPM WG
[-- Attachment #1: Type: text/plain, Size: 489 bytes --]
HNY Dave and all the rest,
Great to see yet another capacity test add latency metrics to its results. This one looks like a good start.
Results from my Windstream DOCSIS 3.1 line (3.1 on download only; upstream is 3.0), provisioned at gigabit down / 35 Mbps up. I'm using an IQrouter Pro (an i5 x86) with Cake set for 710/31, since this ISP can't deliver reliably low latency unless you shave a good bit off the provisioned rates. My local loop is pretty congested.
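For reference, a minimal sketch of that kind of cake setup; the interface
names and the IFB ingress arrangement are assumptions for illustration,
not the IQrouter's actual configuration (requires Linux, root, and tc):

    import subprocess

    WAN = "eth0"  # hypothetical WAN interface

    def sh(cmd):
        subprocess.run(cmd.split(), check=True)

    # Egress (upload): shape below the 35 Mbit provisioned rate.
    sh(f"tc qdisc replace dev {WAN} root cake bandwidth 31Mbit")

    # Ingress (download): redirect through an IFB device and shape there.
    sh("ip link add ifb0 type ifb")
    sh("ip link set ifb0 up")
    sh(f"tc qdisc replace dev {WAN} handle ffff: ingress")
    sh(f"tc filter add dev {WAN} parent ffff: protocol all matchall "
      "action mirred egress redirect dev ifb0")
    sh("tc qdisc replace dev ifb0 root cake bandwidth 710Mbit")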
Here’s the latest Cloudflare test:
[-- Attachment #2: CFSpeedTest_Gig35_20230104.png --]
[-- Type: image/png, Size: 379539 bytes --]
[-- Attachment #3: Type: text/plain, Size: 41 bytes --]
And an Ookla test run just afterward:
[-- Attachment #4: Speedtest_net_Gig35_20230104.png --]
[-- Type: image/png, Size: 40589 bytes --]
[-- Attachment #5: Type: text/plain, Size: 1897 bytes --]
They are definitely both in the ballpark and correspond to other tests run from the router itself or my (wired) MacBook Pro.
Cheers,
Jonathan
> On Jan 4, 2023, at 12:26 PM, Dave Taht via Rpm <rpm@lists.bufferbloat.net> wrote:
> [Dave's original message quoted in full; trimmed]
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 19:20 ` [Starlink] [Rpm] " jf
@ 2023-01-04 20:02 ` rjmcmahon
2023-01-04 21:16 ` Ulrich Speidel
2023-01-05 11:11 ` [Starlink] [Bloat] " Sebastian Moeller
0 siblings, 2 replies; 49+ messages in thread
From: rjmcmahon @ 2023-01-04 20:02 UTC (permalink / raw)
To: jf
Cc: Dave Taht, Dave Taht via Starlink, IETF IPPM WG, libreqos,
Cake List, Rpm, bloat
Curious as to why people keep calling capacity tests speed tests. A semi
at 55 mph isn't faster than a Porsche at 141 mph just because its load
volume is larger.
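(A back-of-envelope sketch of that distinction, with invented numbers:
the semi wins on bits per second, the Porsche on delivery time.)

    # Throughput (capacity) vs. latency (speed); payload sizes are made up.
    DISTANCE_MILES = 100

    semi = {"mph": 55, "payload_bits": 8e12}     # trailer full of tapes
    porsche = {"mph": 141, "payload_bits": 1e11}

    for name, v in (("semi", semi), ("porsche", porsche)):
        hours = DISTANCE_MILES / v["mph"]
        gbps = v["payload_bits"] / (hours * 3600) / 1e9
        print(f"{name}: arrives in {hours:.2f} h, "
              f"average throughput {gbps:.1f} Gbit/s")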
Bob
> [jf's message, including Dave's original, quoted in full; trimmed]
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 20:02 ` rjmcmahon
@ 2023-01-04 21:16 ` Ulrich Speidel
2023-01-04 23:54 ` Bruce Perens
2023-01-05 11:11 ` [Starlink] [Bloat] " Sebastian Moeller
1 sibling, 1 reply; 49+ messages in thread
From: Ulrich Speidel @ 2023-01-04 21:16 UTC (permalink / raw)
To: jf, rjmcmahon; +Cc: Dave Taht via Starlink, bloat
[-- Attachment #1: Type: text/plain, Size: 4507 bytes --]
The use of the term "speed" in communications used to be restricted to the speed of light (or whatever propagation speed one happened to be dealing with). Everything else was a "rate". Maybe I'm old-fashioned, but I think talking about "speed tests" muddies the waters rather a lot.
--
****************************************************************
Dr. Ulrich Speidel
Department of Computer Science
Room 303S.594
Ph: (+64-9)-373-7599 ext. 85282
The University of Auckland
u.speidel@auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/
****************************************************************
[rjmcmahon's message and the earlier thread quoted in full; trimmed]
[-- Attachment #2: Type: text/html, Size: 6806 bytes --]
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 21:16 ` Ulrich Speidel
@ 2023-01-04 23:54 ` Bruce Perens
2023-01-05 2:48 ` Dave Collier-Brown
2023-01-05 6:11 ` rjmcmahon
0 siblings, 2 replies; 49+ messages in thread
From: Bruce Perens @ 2023-01-04 23:54 UTC (permalink / raw)
To: Ulrich Speidel; +Cc: jf, rjmcmahon, Dave Taht via Starlink, bloat
[-- Attachment #1: Type: text/plain, Size: 4941 bytes --]
On the other hand, we would like to be comprehensible to normal users,
especially when we want them to press their providers to deal with
bufferbloat. Distinctions like "speed" versus "rate" would go right over
their heads.
On Wed, Jan 4, 2023 at 1:16 PM Ulrich Speidel via Starlink <starlink@lists.bufferbloat.net> wrote:
> [Ulrich's message and the earlier thread quoted in full; trimmed]
--
Bruce Perens K6BP
[-- Attachment #2: Type: text/html, Size: 8553 bytes --]
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 23:54 ` Bruce Perens
@ 2023-01-05 2:48 ` Dave Collier-Brown
2023-01-05 3:11 ` Dick Roy
2023-01-05 6:11 ` rjmcmahon
1 sibling, 1 reply; 49+ messages in thread
From: Dave Collier-Brown @ 2023-01-05 2:48 UTC (permalink / raw)
To: starlink
[-- Attachment #1: Type: text/plain, Size: 7002 bytes --]
I think using "speed" for "the inverse of delay" is pretty normal English, if technically erroneous when speaking nerd or physicist.
Using it for volume? Arguably more like fraudulent...
--dave
On 1/4/23 18:54, Bruce Perens via Starlink wrote:
> [Bruce's message and the earlier thread quoted in full; trimmed]
--
David Collier-Brown, | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
dave.collier-brown@indexexchange.com | -- Mark Twain
[-- Attachment #2: Type: text/html, Size: 12185 bytes --]
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-05 2:48 ` Dave Collier-Brown
@ 2023-01-05 3:11 ` Dick Roy
2023-01-05 11:25 ` Sebastian Moeller
0 siblings, 1 reply; 49+ messages in thread
From: Dick Roy @ 2023-01-05 3:11 UTC (permalink / raw)
To: 'Dave Collier-Brown'; +Cc: starlink
[-- Attachment #1: Type: text/plain, Size: 7240 bytes --]
_____
From: Starlink <starlink-bounces@lists.bufferbloat.net> On Behalf Of Dave Collier-Brown via Starlink
Sent: Wednesday, January 4, 2023 6:48 PM
To: starlink@lists.bufferbloat.net
Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
> I think using "speed" for "the inverse of delay" is pretty normal
> English, if technically erroneous when speaking nerd or physicist.
[RR] I've not heard of that usage before. The units aren't commensurate
either.
> Using it for volume? Arguably more like fraudulent...
[RR] I don't think that was Bob's intent. I think "load volume" was meant
to be a metaphor for the number of bits/bytes being transported ("by the
semi").
That said, aren't users these days educated on "gigs", which they
intuitively understand to be gigabits per second (Gbps)? Oddly enough,
that is an expression of data/information/communication rate in the
appropriate units with the nominally correct technical meaning.
RR
[Dave Collier-Brown's message and the earlier thread quoted in full; trimmed]
[-- Attachment #2: Type: text/html, Size: 19394 bytes --]
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 23:54 ` Bruce Perens
2023-01-05 2:48 ` Dave Collier-Brown
@ 2023-01-05 6:11 ` rjmcmahon
1 sibling, 0 replies; 49+ messages in thread
From: rjmcmahon @ 2023-01-05 6:11 UTC (permalink / raw)
To: Bruce Perens; +Cc: Ulrich Speidel, jf, Dave Taht via Starlink, bloat
The thing that works for gamers is colors, e.g. green, yellow, and red.
Basically, if the game slows down to a bothersome experience, the
"latency indicator" goes from green to yellow. If the game slows down to
being unplayable, it goes red and the "phone" mfg gets lots of
complaints. Why we call a handheld computer a phone is a whole other
discussion.
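(A sketch of such an indicator; the thresholds are invented for
illustration, not taken from any real game:)

    # Map a measured round-trip time onto the gamer-facing color scale.
    def latency_color(rtt_ms: float) -> str:
        if rtt_ms < 50:       # smooth play
            return "green"
        if rtt_ms < 150:      # bothersome
            return "yellow"
        return "red"          # unplayable; complaints ensue

    for sample in (12, 95, 480):
        print(sample, "ms ->", latency_color(sample))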
Bob
> [Bruce's message and the earlier thread quoted in full; trimmed]
* Re: [Starlink] [Bloat] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 20:02 ` rjmcmahon
2023-01-04 21:16 ` Ulrich Speidel
@ 2023-01-05 11:11 ` Sebastian Moeller
1 sibling, 0 replies; 49+ messages in thread
From: Sebastian Moeller @ 2023-01-05 11:11 UTC (permalink / raw)
To: rjmcmahon
Cc: jf, Cake List, IETF IPPM WG, libreqos, Dave Taht via Starlink,
Rpm, bloat
Hi Bob,
> On Jan 4, 2023, at 21:02, rjmcmahon via Bloat <bloat@lists.bufferbloat.net> wrote:
>
> Curious to why people keep calling capacity tests speed tests? A semi at 55 mph isn't faster than a porsche at 141 mph because its load volume is larger.
[SM] I am not sure that answering the "why" is likely to get us closer to remedying the situation. IMHO we are unlikely to change that, just as we are unlikely to change the equally debatable use of "bandwidth" as a synonym for "maximal capacity"... These two ships have sailed, no matter how much shouting at clouds is going to happen ;)
My theory about the why is that this is entirely marketing-driven: both device manufacturers/ISPs and end-users desire to keep things simple, so ideally a single number and a catchy name. "Speed", as in top speed, was already a well-known quantity for motor vehicles that consumers as a group had accepted to correlate with price. Now, purists will say that "speed" is already well defined as distance/time and that "amount of data" is not a viable distance measure (how many bits are there in a meter?), but since when has marketing, with its desire for simple single-number "quality indicators", ever cared much for the complexities of the real world?
Also, remembering the old analog modem and ISDN days: at that time additional capacity truly was my main desire, so marketing by maximal capacity was relevant to me independent of what it was called, and I would not be amazed if I were not alone in that view. I guess that single measure and the wrong name simply stuck...
Personally I try to use "rate" instead of "speed" or "bandwidth", but I note that I occasionally fail without even noticing it.
Technically, I agree that one-way latency is more closely related to "speed": between any two end-points there is always a path the information travels that has a "true" length, so speed could be defined as network-path-length/OWD, though that would only be the average speed over the path... I am not sure how informative or marketable this would be for end-users, though ;)
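(As a worked example of that definition, with assumed numbers for the
route length and one-way delay:)

    # "Speed" as path length over one-way delay; both values are invented.
    PATH_KM = 1200        # assumed fibre route length
    OWD_MS = 9.0          # assumed one-way delay

    speed_km_s = PATH_KM / (OWD_MS / 1000)
    C_KM_S = 299_792      # speed of light in vacuum, km/s
    print(f"average propagation speed: {speed_km_s:,.0f} km/s "
          f"({speed_km_s / C_KM_S:.0%} of c)")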
Regards
Sebastian
> [jf's message and the earlier thread quoted in full; trimmed]
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-05 3:11 ` Dick Roy
@ 2023-01-05 11:25 ` Sebastian Moeller
2023-01-06 0:01 ` Dick Roy
0 siblings, 1 reply; 49+ messages in thread
From: Sebastian Moeller @ 2023-01-05 11:25 UTC (permalink / raw)
To: Dick Roy; +Cc: Dave Collier-Brown, starlink
Hi RR,
> On Jan 5, 2023, at 04:11, Dick Roy via Starlink <starlink@lists.bufferbloat.net> wrote:
> [earlier quoted material trimmed]
> That said, aren’t users these days educated on “gigs” which they intuitively understand to be Gigabits per second (or Gbps)? Oddly enough, that is an expression of “data/information/communication rate” in the appropriate units with the nominal technically correct meaning.
[SM] "Gigs" would have the following confounds if used without a proper definition:
a) base 10 or base 2 (powers of 1000 or of 1024)?
b) giga-what? Bits or bytes?
c) volume or capacity?
d) if capacity: minimal, average, or maximal?
I note (again, sorry to sound like a broken record) that the national regulatory agency for networks in Germany (Bundesnetzagentur, BNetzA) has detailed instructions about what information ISPs need to supply to potential customers pre-sale (see https://www.bundesnetzagentur.de/SharedDocs/Downloads/DE/Sachgebiete/Telekommunikation/Unternehmen_Institutionen/Anbieterpflichten/Kundenschutz/Transparenzmaßnahmen/Instruction_for_drawing_up_PIS.pdf?__blob=publicationFile&v=1), where the headlines correctly talk about "data transmission rates" but the text occasionally falls back to "speed". They also state: "Data transmission rates must be given in megabits per second (Mbit/s)."
This is both a response to our "speed" discussion and one potential way to clarify b), c), and d) above... and given that this is official, it probably also answers a) (base 10; otherwise the text would read "Data transmission rates must be given in mebibits per second (Mibit/s)").
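(A quick sketch of how much those confounds matter in practice; the
factor between interpretations is anything but negligible:)

    # The same nominal "gig" under different readings of the word.
    GBIT = 1e9          # decimal gigabit, as BNetzA mandates
    print("1 Gbit/s  =", GBIT / 8 / 1e6, "MB/s (decimal megabytes)")
    print("1 Gibit/s =", 2**30 / GBIT, "Gbit/s (binary prefix, ~7% more)")
    print("1 GB/s    =", 8 * 1e9 / GBIT, "Gbit/s (bytes, 8x more)")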
--Sebastian
> [remainder of the quoted thread trimmed]
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-05 11:25 ` Sebastian Moeller
@ 2023-01-06 0:01 ` Dick Roy
2023-01-06 9:43 ` Sebastian Moeller
0 siblings, 1 reply; 49+ messages in thread
From: Dick Roy @ 2023-01-06 0:01 UTC (permalink / raw)
To: 'Sebastian Moeller'; +Cc: 'Dave Collier-Brown', starlink
[-- Attachment #1: Type: text/plain, Size: 11785 bytes --]
Hi Sebastian,
See below
-----Original Message-----
From: Sebastian Moeller [mailto:moeller0@gmx.de]
Sent: Thursday, January 5, 2023 3:26 AM
To: Dick Roy
Cc: Dave Collier-Brown; starlink@lists.bufferbloat.net
Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
Hi RR,
[RR's earlier message quoted; trimmed]
[SM] "Gigs" would have the following confounds if used without a proper definition:
a) base 10 or base 2 (powers of 1000 or of 1024)?
b) giga-what? Bits or bytes?
c) volume or capacity?
d) if capacity: minimal, average, or maximal?
I note (again, sorry to sound like a broken record) that the national regulatory agency for networks in Germany (Bundesnetzagentur, BNetzA) has detailed instructions about what information ISPs need to supply to potential customers pre-sale (see https://www.bundesnetzagentur.de/SharedDocs/Downloads/DE/Sachgebiete/Telekommunikation/Unternehmen_Institutionen/Anbieterpflichten/Kundenschutz/Transparenzmaßnahmen/Instruction_for_drawing_up_PIS.pdf?__blob=publicationFile&v=1), where the headlines correctly talk about "data transmission rates" but the text occasionally falls back to "speed". They also state: "Data transmission rates must be given in megabits per second (Mbit/s)."
This is both a response to our "speed" discussion and one potential way to clarify b), c), and d) above... and given that this is official, it probably also answers a) (base 10; otherwise the text would read "Data transmission rates must be given in mebibits per second (Mibit/s)").
[RR] My reference to "gigs" was to the ads out nowadays from AT&T about
becoming Gagillionaires ("Yes, I am Jurgous. ... We know!") that "now have
gig speed wireless from AT&T" so they can play all kinds of VR games. :-)
That said, not sure why BNetzA mandates a particular unit for information
rates, but that's their prerogative I guess. Given that the fundamental
unit of information is the answer to a YES/NO question (aka a "bit"), it
makes sense to measure information in bits (although trits or any other
higher-order concept could be used as long as the system accounted for
fractions thereof :-)) (and sets of bits (aka bytes, or really octets)
because of ancient computer architectures :-)). Since we have pretty much
settled on the SI second as the accepted unit of time (and multiples
thereof, e.g. msec, usec, nsec, etc.), it makes sense to measure
information flow in bits/sec or some multiples thereof such as Gbps, Mbps,
Kbps, etc. and their byte (really octet) versions GBps, MBps, KBps, etc.
Not sure why BNetzA mandates ONLY one of these, but whatever
:-)
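(As a small aside on the trit remark, the accounting for "fractions
thereof" is just a change of logarithm base:)

    import math
    # One trit (a three-valued symbol) carries log2(3) bits of information.
    print(f"{math.log2(3):.4f} bits per trit")  # ~1.5850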
As for capacity, remember capacity is not something that is measured. It is
a fundamental property (an information rate!) of a communication channel
which has no other attributes such as minimal, average, or maximal (unless
one is talking about time-varying channels and wants to characterize the
capacity of the channel over time, but that's another story). As such,
comparing volume and capacity is comparing apples and oranges; one is a size
of something (e.g. number of megabytes) and the other is a rate (e.g. MBps),
so I am not sure what "volume or capacity" really means. I suspect the
concept you may be looking for is "achievable rate" rather than "capacity".
Achievable rate IS something that is measurable, and it varies with load
when channels are shared, etc. Loosely speaking, achievable rate is always
less than or equal to the capacity of a channel.
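A sketch of that distinction, using the Shannon formula C = B log2(1 + SNR)
for an AWGN channel (the bandwidth, SNR, and measured figure below are made
up for illustration):

    import math

    def shannon_capacity_bps(bandwidth_hz, snr_linear):
        """AWGN channel capacity: C = B * log2(1 + SNR)."""
        return bandwidth_hz * math.log2(1 + snr_linear)

    B = 20e6                   # 20 MHz channel (hypothetical)
    snr = 10 ** (20.0 / 10)    # 20 dB SNR (hypothetical), in linear terms

    capacity = shannon_capacity_bps(B, snr)
    measured = 95e6            # a hypothetical measured ("achievable") rate
    print(f"capacity        ~ {capacity / 1e6:.1f} Mbit/s")  # ~133.2 Mbit/s
    print(f"achievable rate = {measured / 1e6:.1f} Mbit/s (always <= capacity)")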
HNY,
RR
--Sebastian
>
> RR
>
> --dave
>
> On 1/4/23 18:54, Bruce Perens via Starlink wrote:
>> On the other hand, we would like to be comprehensible to normal users,
especially when we want them to press their providers to deal with
bufferbloat. Differences like speed and rate would go right over their
heads.
>>
>> On Wed, Jan 4, 2023 at 1:16 PM Ulrich Speidel via Starlink
<starlink@lists.bufferbloat.net> wrote:
>>> The use of the term "speed" in communications used to be restricted to
the speed of light (or whatever propagation speed one happened to be dealing
with). Everything else was a "rate". Maybe I'm old-fashioned but I think
talking about "speed tests" muddies the waters rather a lot.
>>>
>>> --
>>> ****************************************************************
>>> Dr. Ulrich Speidel
>>>
>>> Department of Computer Science
>>>
>>> Room 303S.594
>>> Ph: (+64-9)-373-7599 ext. 85282
>>>
>>> The University of Auckland
>>> u.speidel@auckland.ac.nz
>>> http://www.cs.auckland.ac.nz/~ulrich/
>>> ****************************************************************
>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
rjmcmahon via Starlink <starlink@lists.bufferbloat.net>
>>> Sent: Thursday, January 5, 2023 9:02 AM
>>> To: jf@jonathanfoulkes.com <jf@jonathanfoulkes.com>
>>> Cc: Cake List <cake@lists.bufferbloat.net>; IETF IPPM WG
<ippm@ietf.org>; libreqos <libreqos@lists.bufferbloat.net>; Dave Taht via
Starlink <starlink@lists.bufferbloat.net>; Rpm <rpm@lists.bufferbloat.net>;
bloat <bloat@lists.bufferbloat.net>
>>> Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas
present
>>>
>>> Curious as to why people keep calling capacity tests speed tests? A semi
>>> at 55 mph isn't faster than a Porsche at 141 mph because its load volume
>>> is larger.
>>>
>>> Bob
>>> > HNY Dave and all the rest,
>>> >
>>> > Great to see yet another capacity test add latency metrics to the
>>> > results. This one looks like a good start.
>>> >
>>> > Results from my Windstream DOCSIS 3.1 line (3.1 on download only, up
>>> > is 3.0) Gigabit down / 35Mbps up provisioning. Using an IQrouter Pro
>>> > (an i5 x86) with Cake set for 710/31 as this ISP can't deliver
>>> > reliable low-latency unless you shave a good bit off the targets. My
>>> > local loop is pretty congested.
>>> >
>>> > Here's the latest Cloudflare test:
>>> >
>>> >
>>> >
>>> >
>>> > And an Ookla test run just afterward:
>>> >
>>> >
>>> >
>>> >
>>> > They are definitely both in the ballpark and correspond to other tests
>>> > run from the router itself or my (wired) MacBook Pro.
>>> >
>>> > Cheers,
>>> >
>>> > Jonathan
>>> >
>>> >
>>> >> On Jan 4, 2023, at 12:26 PM, Dave Taht via Rpm
>>> >> <rpm@lists.bufferbloat.net> wrote:
>>> >>
>>> >> Please try the new, the shiny, the really wonderful test here:
>>> >> https://speed.cloudflare.com/
>>> >>
>>> >> I would really appreciate some independent verification of
>>> >> measurements using this tool. In my brief experiments it appears - as
>>> >> all the commercial tools to date - to dramatically understate the
>>> >> bufferbloat, on my LTE, (and my starlink terminal is out being
>>> >> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>>> >>
>>> >> My test of their test reports 223 ms 5G latency under load, where
>>> >> flent reports over 2 seconds. See comparison attached.
>>> >>
>>> >> My guess is that this otherwise lovely new tool, like too many,
>>> >> doesn't run for long enough. Admittedly, most web objects (their
>>> >> target market) are small, and so long as they remain small and not
>>> >> heavily pipelined this test is a very good start... but I'm pretty
>>> >> sure cloudflare is used for bigger uploads and downloads than that.
>>> >> There's no way to change the test to run longer either.
>>> >>
>>> >> I'd love to get some results from other networks (compared as usual
to
>>> >> flent), especially ones with cake on it. I'd love to know if they
>>> >> measured more minimum rtts that can be obtained with fq_codel or
cake,
>>> >> correctly.
>>> >>
>>> >> Love Always,
>>> >> The Grinch
>>> >>
>>> >> --
>>> >> This song goes out to all the folk that thought Stadia would work:
>>> >>
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-698136666560
7352320-FXtz
>>> >> Dave Täht CEO, TekLibre, LLC
>>> >>
<image.png><tcp_nup-2023-01-04T090937.211620.LTE.flent.gz>__________________
_____________________________
>>> >> Rpm mailing list
>>> >> Rpm@lists.bufferbloat.net
>>> >> https://lists.bufferbloat.net/listinfo/rpm
>>> >
>>> >
>>> > _______________________________________________
>>> > Rpm mailing list
>>> > Rpm@lists.bufferbloat.net
>>> > https://lists.bufferbloat.net/listinfo/rpm
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>> --
>> Bruce Perens K6BP
>>
>>
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
> --
> David Collier-Brown, | Always do right. This will gratify
> System Programmer and Author | some people and astonish the rest
> dave.collier-brown@indexexchange.com | -- Mark Twain
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #2: Type: text/html, Size: 38418 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-06 0:01 ` Dick Roy
@ 2023-01-06 9:43 ` Sebastian Moeller
0 siblings, 0 replies; 49+ messages in thread
From: Sebastian Moeller @ 2023-01-06 9:43 UTC (permalink / raw)
To: Dick Roy; +Cc: Dave Collier-Brown, starlink
Hi RR,
> On Jan 6, 2023, at 01:01, Dick Roy <dickroy@alum.mit.edu> wrote:
>
> Hi Sebastian,
>
> See below …
>
> -----Original Message-----
> From: Sebastian Moeller [mailto:moeller0@gmx.de]
> Sent: Thursday, January 5, 2023 3:26 AM
> To: Dick Roy
> Cc: Dave Collier-Brown; starlink@lists.bufferbloat.net
> Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
>
> Hi RR,
>
>
> > On Jan 5, 2023, at 04:11, Dick Roy via Starlink <starlink@lists.bufferbloat.net> wrote:
> >
> >
> >
> > From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of Dave Collier-Brown via Starlink
> > Sent: Wednesday, January 4, 2023 6:48 PM
> > To: starlink@lists.bufferbloat.net
> > Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
> >
> > I think using "speed" for "the inverse of delay" is pretty normal English, if technically erroneous when speaking nerd or physicist.
> >
> > [RR] I’ve not heard of that usage before. The units aren’t commensurate either.
> >
> > Using it for volume? Arguably more like fraudulent...
> >
> > [RR] I don’t think that was Bob’s intent. I think “load volume” was meant to be a metaphor for “number of bits/bytes” being transported (“by the semi”).
> >
> > That said, aren’t users these days educated on “gigs” which they intuitively understand to be Gigabits per second (or Gbps)? Oddly enough, that is an expression of “data/information/communication rate” in the appropriate units with the nominal technically correct meaning.
>
> [SM] Gigs would have the following confounds if used without a proper definition:
> a) base10 or base2^10?
> b) giga-what? Bit or Byte
> c) Volume or capacity
> d) if capacity, minimal, average, or maximal?
>
> I note (again, sorry to sound like a broken record) that the national regulatory agency for networks (Bundes-Netzagentur, short BNetzA) in Germany has some detailed instructions about what information ISPs need to supply to their potential customers pre-sale (see https://www.bundesnetzagentur.de/SharedDocs/Downloads/DE/Sachgebiete/Telekommunikation/Unternehmen_Institutionen/Anbieterpflichten/Kundenschutz/Transparenzmaßnahmen/Instruction_for_drawing_up_PIS.pdf?__blob=publicationFile&v=1) where the headlines talk correctly about "data transmission rates" but in the text they occasionally fall back to "speed". They also state: "Data transmission rates must be given in megabits per second (Mbit/s)."
> This is both in response to our "speed" discussion, but also one potential way to clarify b), c) and d) above... given that this is official, it probably also answers a) (base10, otherwise the text would be "Data transmission rates must be given in mebibits per second (Mibit/s).")
> [RR] My reference to “gigs” was to the ads out nowadays from AT&T about becoming Gagillionaires (“Yes, I am Jurgous. … We know!”) that “now have gig speed wireless from AT&T” so they can play all kinds of VR games.
[SM2] Ah, not being in the U.S. that campaign has completely evaded my attention.
> :-) That said, not sure why BNetzA mandates a particular unit for information rates, but that’s their prerogative I guess.
[SM2] My bigger point was actually that they use the term "data transmission rates" instead of speed... But to your point, the product information sheets are designed specifically to make it easy for consumers to compare different offers (for all values of consumer knowledge of networking and SI-prefixes), and requiring a single unit makes comparison simple and gives less leeway to play games. This BNetzA initiative basically removed the previous "up to XX Mbps" promise of ISPs, and especially the interpretation that YY with YY << XX is still contractually fine, as "up to" implies no guarantee.
> Given that the fundamental unit of information is the answer to a YES/NO question (aka a “bit”), it makes sense to measure information in bits (although trits or any other higher-order concept could be used as long as the system accounted for fractions thereof :-)) (and sets of bits (aka bytes, or really octets) because of ancient computer architectures :-)).
[SM2] I agree, but prefer this to be inherent in the term used, and "gigs" did not carry any of that information for me, but again I had not encountered the AT&T campaign...
> Since we have pretty much settled on the SI second as the accepted unit of time (and multiples thereof e.g. msec, usec, nsec, etc.), it makes sense to measure information flow in bits/sec or some multiples thereof such as Gbps, Mbps, Kbps, etc. and their byte (really octet) versions GBps, MBps, KBps, etc. Not sure why BNetzA mandates ONLY one of these, but whatever … :-)
[SM2] Again I speculate that this is to allow easy comparison even by consumers who are not "fluent" in interpreting SI-prefixes and who might e.g. not know by heart whether Mb is larger or smaller than Gb, let alone why mB is not equal to Mb... this is a transparency measure aimed foremost at end-customers (business contracts do not fall under that regulation as far as I know).
> As for capacity, remember capacity is not something that is measured. It is a fundamental property (an information rate!) of a communication channel which has no other attributes such as minimal, average, or maximal (unless one is talking about time-varying channels and is wanting to characterize the capacity of the channel over time, but that’s another story).
[SM] On a shared medium (and let's face it head on: sooner or later "internet-access" will become a shared medium) the capacity share each user can rely on is rarely static; most often ISPs oversubscribe links and use traffic shapers to limit the maximum capacity share a user can use, but that share is not guaranteed to be available. BNetzA went as far as defining three different data rates with different expected (temporal) availability, as well as a (rather complicated) method by which end-users can confirm whether ISPs actually deliver the promised rates.
> As such, comparing volume and capacity is comparing apples and oranges; one is a size of something (e.g. number of megabytes) and the other is a rate (e.g. MBps) so I am not sure what “Volume or capacity” really means.
[SM] The term "Gigs", unlike e.g. Mb/s or Mbps (for Megabits per second), does not intuitively define whether time is taken into account, and over here mobile contracts typically come with a "high-speed volume" after the consumption of which mobile links fall back to truly atrocious rates like 32 Kbps. For these kinds of contracts ISPs tend to primarily market the volume (mostly in GB) and only mention the maximal data rates as secondary information.
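A quick back-of-the-envelope sketch (contract numbers hypothetical) of why
the volume and the rate are such different quantities:

    # Hypothetical contract: 10 GB "high-speed volume" at up to 100 Mbit/s,
    # falling back to 32 Kbit/s once the volume is consumed.
    volume_bits  = 10 * 8 * 10**9   # 10 GB (SI) in bits
    rate_bps     = 100 * 10**6      # headline rate
    fallback_bps = 32 * 10**3       # post-volume fallback rate

    print(f"volume lasts {volume_bits / rate_bps / 60:.1f} min at full rate")        # ~13.3 min
    print(f"same volume: {volume_bits / fallback_bps / 86400:.1f} days at fallback")  # ~28.9 days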
> I suspect the concept you may be looking for is “achievable rate” rather than “capacity”.
[SM2] Actually I always try to first calculate the theoretical upper limit of my links (which I then dub the link's "capacity") and I compare the actual measured rates with the theoretical limit... This works well enough for me, since the consumer links I use typically are dominated by a predictable traffic shaper on the ISP side...
> Achievable rate IS something that is measurable, and varies with load when channels are shared, etc. Loosely speaking, achievable rate is always less than or equal to the capacity of a channel.
[SM2] Yepp, that is why I try to calculate a capacity estimate (and calculate the resulting throughput on different levels).
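For what it's worth, a minimal sketch of such a capacity estimate (the
shaper rate is hypothetical; the per-packet overheads are the usual
Ethernet/IPv4/TCP ones, and real links may add more, e.g. PPPoE or DOCSIS
framing):

    # Rough goodput estimate from a known shaper rate, accounting for
    # per-packet overhead. Whether a given ISP shapes at the Ethernet or
    # the IP level varies, so treat this as a sketch, not gospel.
    shaper_rate_bps = 100 * 10**6      # hypothetical 100 Mbit/s gross rate
    mtu             = 1500             # IP MTU, bytes
    eth_overhead    = 14 + 4 + 8 + 12  # MAC + FCS + preamble + interframe gap
    ip_tcp_overhead = 20 + 20 + 12     # IPv4 + TCP + TCP timestamp option

    payload    = mtu - ip_tcp_overhead  # 1448 bytes of goodput per packet
    wire_bytes = mtu + eth_overhead     # 1538 bytes on the wire
    goodput = shaper_rate_bps * payload / wire_bytes
    print(f"expected TCP goodput ~ {goodput / 1e6:.1f} Mbit/s")  # ~94.1 Mbit/s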
Regards
Sebastian
>
> HNY,
>
> RR
>
> --Sebastian
>
> >
> > RR
> >
> > --dave
> >
> > On 1/4/23 18:54, Bruce Perens via Starlink wrote:
> >> On the other hand, we would like to be comprehensible to normal users, especially when we want them to press their providers to deal with bufferbloat. Differences like speed and rate would go right over their heads.
> >>
> >> On Wed, Jan 4, 2023 at 1:16 PM Ulrich Speidel via Starlink <starlink@lists.bufferbloat.net> wrote:
> >>> The use of the term "speed" in communications used to be restricted to the speed of light (or whatever propagation speed one happened to be dealing with). Everything else was a "rate". Maybe I'm old-fashioned but I think talking about "speed tests" muddies the waters rather a lot.
> >>>
> >>> --
> >>> ****************************************************************
> >>> Dr. Ulrich Speidel
> >>>
> >>> Department of Computer Science
> >>>
> >>> Room 303S.594
> >>> Ph: (+64-9)-373-7599 ext. 85282
> >>>
> >>> The University of Auckland
> >>> u.speidel@auckland.ac.nz
> >>> http://www.cs.auckland.ac.nz/~ulrich/
> >>> ****************************************************************
> >>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of rjmcmahon via Starlink <starlink@lists.bufferbloat.net>
> >>> Sent: Thursday, January 5, 2023 9:02 AM
> >>> To: jf@jonathanfoulkes.com <jf@jonathanfoulkes.com>
> >>> Cc: Cake List <cake@lists.bufferbloat.net>; IETF IPPM WG <ippm@ietf.org>; libreqos <libreqos@lists.bufferbloat.net>; Dave Taht via Starlink <starlink@lists.bufferbloat.net>; Rpm <rpm@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>
> >>> Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
> >>>
> >>> Curious as to why people keep calling capacity tests speed tests? A semi at
> >>> 55 mph isn't faster than a Porsche at 141 mph because its load volume is
> >>> larger.
> >>>
> >>> Bob
> >>> > HNY Dave and all the rest,
> >>> >
> >>> > Great to see yet another capacity test add latency metrics to the
> >>> > results. This one looks like a good start.
> >>> >
> >>> > Results from my Windstream DOCSIS 3.1 line (3.1 on download only, up
> >>> > is 3.0) Gigabit down / 35Mbps up provisioning. Using an IQrouter Pro
> >>> > (an i5 x86) with Cake set for 710/31 as this ISP can’t deliver
> >>> > reliable low-latency unless you shave a good bit off the targets. My
> >>> > local loop is pretty congested.
> >>> >
> >>> > Here’s the latest Cloudflare test:
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > And an Ookla test run just afterward:
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > They are definitely both in the ballpark and correspond to other tests
> >>> > run from the router itself or my (wired) MacBook Pro.
> >>> >
> >>> > Cheers,
> >>> >
> >>> > Jonathan
> >>> >
> >>> >
> >>> >> On Jan 4, 2023, at 12:26 PM, Dave Taht via Rpm
> >>> >> <rpm@lists.bufferbloat.net> wrote:
> >>> >>
> >>> >> Please try the new, the shiny, the really wonderful test here:
> >>> >> https://speed.cloudflare.com/
> >>> >>
> >>> >> I would really appreciate some independent verification of
> >>> >> measurements using this tool. In my brief experiments it appears - as
> >>> >> all the commercial tools to date - to dramatically understate the
> >>> >> bufferbloat, on my LTE, (and my starlink terminal is out being
> >>> >> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
> >>> >>
> >>> >> My test of their test reports 223 ms 5G latency under load, where
> >>> >> flent reports over 2 seconds. See comparison attached.
> >>> >>
> >>> >> My guess is that this otherwise lovely new tool, like too many,
> >>> >> doesn't run for long enough. Admittedly, most web objects (their
> >>> >> target market) are small, and so long as they remain small and not
> >>> >> heavily pipelined this test is a very good start... but I'm pretty
> >>> >> sure cloudflare is used for bigger uploads and downloads than that.
> >>> >> There's no way to change the test to run longer either.
> >>> >>
> >>> >> I'd love to get some results from other networks (compared as usual to
> >>> >> flent), especially ones with cake on it. I'd love to know if they
> >>> >> measured more minimum rtts that can be obtained with fq_codel or cake,
> >>> >> correctly.
> >>> >>
> >>> >> Love Always,
> >>> >> The Grinch
> >>> >>
> >>> >> --
> >>> >> This song goes out to all the folk that thought Stadia would work:
> >>> >> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> >>> >> Dave Täht CEO, TekLibre, LLC
> >>> >> <image.png><tcp_nup-2023-01-04T090937.211620.LTE.flent.gz>_______________________________________________
> >>> >> Rpm mailing list
> >>> >> Rpm@lists.bufferbloat.net
> >>> >> https://lists.bufferbloat.net/listinfo/rpm
> >>> >
> >>> >
> >>> > _______________________________________________
> >>> > Rpm mailing list
> >>> > Rpm@lists.bufferbloat.net
> >>> > https://lists.bufferbloat.net/listinfo/rpm
> >>> _______________________________________________
> >>> Starlink mailing list
> >>> Starlink@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/starlink
> >>> _______________________________________________
> >>> Starlink mailing list
> >>> Starlink@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/starlink
> >>
> >>
> >> --
> >> Bruce Perens K6BP
> >>
> >>
> >>
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/starlink
> > --
> > David Collier-Brown, | Always do right. This will gratify
> > System Programmer and Author | some people and astonish the rest
> > dave.collier-brown@indexexchange.com | -- Mark Twain
> >
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-04 17:26 [Starlink] the grinch meets cloudflare's christmas present Dave Taht
2023-01-04 19:20 ` [Starlink] [Rpm] " jf
@ 2023-01-06 16:38 ` MORTON JR., AL
2023-01-06 20:38 ` [Starlink] [Rpm] " rjmcmahon
1 sibling, 1 reply; 49+ messages in thread
From: MORTON JR., AL @ 2023-01-06 16:38 UTC (permalink / raw)
To: Dave Taht, bloat, libreqos, Cake List, Dave Taht via Starlink,
Rpm, IETF IPPM WG
[-- Attachment #1.1: Type: text/plain, Size: 2347 bytes --]
> -----Original Message-----
> From: LibreQoS <libreqos-bounces@lists.bufferbloat.net> On Behalf Of Dave Taht
> via LibreQoS
> Sent: Wednesday, January 4, 2023 12:26 PM
> Subject: [LibreQoS] the grinch meets cloudflare's christmas present
>
> Please try the new, the shiny, the really wonderful test here:
> https://urldefense.com/v3/__https://speed.cloudflare.com/__;!!BhdT!iZcFJ8WVU9S9zz5t456oxkfObrC5Xb9j5AG8UO3DqD5x4GAJkawZr0iGwEUtF0_09U8mCDnAkrJ9QEMHGbCMKVw$
>
> I would really appreciate some independent verification of
> measurements using this tool. In my brief experiments it appears - as
> all the commercial tools to date - to dramatically understate the
> bufferbloat, on my LTE, (and my starlink terminal is out being
> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
[acm]
Hi Dave, I made some time to test "cloudflare's christmas present" yesterday.
I'm on DOCSIS 3.1 service with 1Gbps Down. The Upstream has a "turbo" mode with 40-50Mbps for the first ~3 sec, then steady-state about 23Mbps.
When I saw the ~620Mbps Downstream measurement, I was ready to complain that even the IP-Layer Capacity was grossly underestimated. In addition, the Latency measurements seem very low (as you asserted), although the cloud server was “nearby”.
However, I found that Ookla and the ISP-provided measurement were also reporting ~600Mbps! So the cloudflare Downstream capacity (or throughput?) measurement was consistent with others. Our UDPST server was unreachable, otherwise I would have added that measurement, too.
The Upstream measurement graph seems to illustrate the “turbo” mode, with the dip after attaining 44.5Mbps.
UDPST saturates the uplink and we measure the full 250ms of the Upstream buffer. Cloudflare’s latency measurements don’t even come close.
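(For scale, a tiny sketch of the queue arithmetic implied by those numbers:
rate times queueing delay gives the standing queue in bytes.)

    # Queue depth = rate x queueing delay (numbers from the post above).
    upstream_bps  = 23 * 10**6   # ~23 Mbit/s steady-state upstream
    queue_delay_s = 0.250        # 250 ms of measured standing queue

    queue_bytes = upstream_bps * queue_delay_s / 8
    print(f"~{queue_bytes / 1e3:.0f} KB buffered in the upstream")  # ~719 KB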
Al
[Screen Shot 2023-01-05 at 5.54.26 PM.png][Screen Shot 2023-01-05 at 5.54.53 PM.png][Screen Shot 2023-01-05 at 5.55.39 PM.png]
[-- Attachment #1.2: Type: text/html, Size: 42872 bytes --]
[-- Attachment #2: image001.png --]
[-- Type: image/png, Size: 176230 bytes --]
[-- Attachment #3: image002.png --]
[-- Type: image/png, Size: 461849 bytes --]
[-- Attachment #4: image003.png --]
[-- Type: image/png, Size: 225012 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-06 16:38 ` [Starlink] [LibreQoS] " MORTON JR., AL
@ 2023-01-06 20:38 ` rjmcmahon
2023-01-06 20:47 ` rjmcmahon
[not found] ` <89D796E75967416B9723211C183A8396@SRA6>
0 siblings, 2 replies; 49+ messages in thread
From: rjmcmahon @ 2023-01-06 20:38 UTC (permalink / raw)
To: MORTON JR., AL
Cc: Dave Taht, bloat, libreqos, Cake List, Dave Taht via Starlink,
Rpm, IETF IPPM WG
Some thoughts: one is not to use UDP for testing here. Also, these speed
tests give network engineers little to no information about what's
going on. Iperf 2 may better assist network engineers, but then I'm
biased ;)
Running iperf 2 (https://sourceforge.net/projects/iperf2/) with
--trip-times. Note that the sampling and central-limit-theorem averaging
hide the real distributions (use --histograms to get those).
Below are 4 parallel TCP streams from my home to one of my servers in
the cloud. First where TCP is limited per CCA. Second is source-side
write rate limiting. Things to note:
o) connect times for both at 10-15 ms
o) multiple TCP retries on a few writes - one case is 4 with 5 writes.
Source-side pacing eliminates retries
o) Fairness with CCA isn't great but quite good with source-side write
pacing
o) Queue depth with CCA is about 150 KBytes vs about 100 KBytes with
source-side pacing
o) min write-to-read is about 80 ms for both
o) max is 220 ms vs 97 ms
o) stdev for CCA write/read is 30 ms vs 3 ms
o) TCP RTT is 20 ms w/CCA and 90 ms with ssp - seems odd here as
TCP_QUICKACK and TCP_NODELAY are both enabled.
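(If it helps with reading the NetPwr column below: as far as I can tell it
is throughput divided by RTT, scaled into M units; a sketch that
approximately reproduces the reported values:)

    # NetPwr appears to be throughput / RTT, scaled by 1e-6.
    def net_pwr(mbit_per_sec, rtt_us):
        bytes_per_sec = mbit_per_sec * 1e6 / 8
        return bytes_per_sec / (rtt_us * 1e-6) / 1e6

    # e.g. stream [4], 4.00-5.00 sec below: 11.5 Mbit/s at 21088 us RTT
    print(f"{net_pwr(11.5, 21088):.2f}")  # ~68.2, vs 68.37 reported (rounding)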
[ CT] final connect times (min/avg/max/stdev) =
10.326/13.522/14.986/2150.329 ms (tot/err) = 4/0
[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
--trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001 with pid 107678 (4
flows)
Write buffer size: 131072 Byte
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local *.*.*.85%enp4s0 port 42480 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=3) (qack)
(icwnd/mss/irtt=14/1448/10534) (ct=10.63 ms) on 2023-01-06 12:17:56
(PST)
[ 4] local *.*.*.85%enp4s0 port 42488 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=5) (qack)
(icwnd/mss/irtt=14/1448/14023) (ct=14.08 ms) on 2023-01-06 12:17:56
(PST)
[ 3] local *.*.*.85%enp4s0 port 42502 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=6) (qack)
(icwnd/mss/irtt=14/1448/14642) (ct=14.70 ms) on 2023-01-06 12:17:56
(PST)
[ 2] local *.*.*.85%enp4s0 port 42484 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=4) (qack)
(icwnd/mss/irtt=14/1448/14728) (ct=14.79 ms) on 2023-01-06 12:17:56
(PST)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
Cwnd/RTT(var) NetPwr
...
[ 4] 4.00-5.00 sec 1.38 MBytes 11.5 Mbits/sec 11/0 3
29K/21088(1142) us 68.37
[ 2] 4.00-5.00 sec 1.62 MBytes 13.6 Mbits/sec 13/0 2
31K/19284(612) us 88.36
[ 1] 4.00-5.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
16K/18996(658) us 48.30
[ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 5
18K/18133(208) us 57.83
[SUM] 4.00-5.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 15
[ 4] 5.00-6.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 4
29K/14717(489) us 89.06
[ 1] 5.00-6.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 4
16K/15874(408) us 66.06
[ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
16K/15826(382) us 74.54
[ 2] 5.00-6.00 sec 1.50 MBytes 12.6 Mbits/sec 12/0 6
9K/14878(557) us 106
[SUM] 5.00-6.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 18
[ 4] 6.00-7.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
25K/15472(496) us 119
[ 2] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 2
26K/16417(427) us 63.87
[ 1] 6.00-7.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 5
16K/16268(679) us 80.57
[ 3] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 6
15K/16629(799) us 63.06
[SUM] 6.00-7.00 sec 5.00 MBytes 41.9 Mbits/sec 40/0 17
[ 4] 7.00-8.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
22K/13986(519) us 131
[ 1] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
16K/12679(377) us 93.04
[ 3] 7.00-8.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
14K/12971(367) us 70.74
[ 2] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 6
15K/14740(779) us 80.03
[SUM] 7.00-8.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 19
[root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
------------------------------------------------------------
Server listening on TCP port 5001 with pid 233615
Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
------------------------------------------------------------
[ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42480
(trip-times) (sock=4) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11636) on 2023-01-06 12:17:56 (PST)
[ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42502
(trip-times) (sock=5) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11898) on 2023-01-06 12:17:56 (PST)
[ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42484
(trip-times) (sock=6) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11938) on 2023-01-06 12:17:56 (PST)
[ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42488
(trip-times) (sock=7) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11919) on 2023-01-06 12:17:56 (PST)
[ ID] Interval Transfer Bandwidth Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
...
[ 2] 4.00-5.00 sec 1.06 MBytes 8.86 Mbits/sec
129.819/90.391/186.075/31.346 ms (9/123080) 154 KByte 8.532803
467=461:6:0:0:0:0:0:0
[ 3] 4.00-5.00 sec 1.52 MBytes 12.8 Mbits/sec
103.552/82.653/169.274/28.382 ms (12/132854) 149 KByte 15.40
646=643:1:2:0:0:0:0:0
[ 4] 4.00-5.00 sec 1.39 MBytes 11.6 Mbits/sec
107.503/66.843/143.038/24.269 ms (11/132294) 149 KByte 13.54
619=617:1:1:0:0:0:0:0
[ 1] 4.00-5.00 sec 988 KBytes 8.10 Mbits/sec
141.389/119.961/178.785/18.812 ms (7/144593) 170 KByte 7.158641
409=404:5:0:0:0:0:0:0
[SUM] 4.00-5.00 sec 4.93 MBytes 41.4 Mbits/sec
2141=2125:13:3:0:0:0:0:0
[ 4] 5.00-6.00 sec 1.29 MBytes 10.8 Mbits/sec
118.943/86.253/176.128/31.248 ms (10/135098) 164 KByte 11.36
511=506:2:3:0:0:0:0:0
[ 2] 5.00-6.00 sec 1.09 MBytes 9.17 Mbits/sec
139.821/102.418/218.875/40.422 ms (9/127424) 148 KByte 8.202049
487=484:2:1:0:0:0:0:0
[ 3] 5.00-6.00 sec 1.51 MBytes 12.6 Mbits/sec
102.146/77.085/140.893/18.441 ms (13/121520) 151 KByte 15.47
640=636:1:3:0:0:0:0:0
[ 1] 5.00-6.00 sec 981 KBytes 8.04 Mbits/sec
161.901/105.582/219.931/36.260 ms (8/125614) 134 KByte 6.206944
415=413:2:0:0:0:0:0:0
[SUM] 5.00-6.00 sec 4.85 MBytes 40.7 Mbits/sec
2053=2039:7:7:0:0:0:0:0
[ 4] 6.00-7.00 sec 1.74 MBytes 14.6 Mbits/sec
88.846/74.297/101.859/7.118 ms (14/130526) 156 KByte 20.57
711=707:3:1:0:0:0:0:0
[ 1] 6.00-7.00 sec 1.22 MBytes 10.2 Mbits/sec
120.639/100.257/157.567/21.770 ms (10/127568) 157 KByte 10.57
494=488:5:1:0:0:0:0:0
[ 2] 6.00-7.00 sec 1015 KBytes 8.32 Mbits/sec
144.632/124.368/171.349/16.597 ms (8/129958) 143 KByte 7.188321
408=403:5:0:0:0:0:0:0
[ 3] 6.00-7.00 sec 1.02 MBytes 8.60 Mbits/sec
143.516/102.322/173.001/24.089 ms (8/134302) 146 KByte 7.486359
484=480:4:0:0:0:0:0:0
[SUM] 6.00-7.00 sec 4.98 MBytes 41.7 Mbits/sec
2097=2078:17:2:0:0:0:0:0
[ 4] 7.00-8.00 sec 1.77 MBytes 14.9 Mbits/sec
85.406/65.797/103.418/12.609 ms (14/132595) 153 KByte 21.74
692=687:2:3:0:0:0:0:0
[ 2] 7.00-8.00 sec 957 KBytes 7.84 Mbits/sec
153.936/131.452/191.464/19.361 ms (7/140042) 160 KByte 6.368199
429=425:4:0:0:0:0:0:0
[ 1] 7.00-8.00 sec 1.13 MBytes 9.44 Mbits/sec
131.146/109.737/166.774/22.035 ms (9/131124) 146 KByte 8.998528
520=516:4:0:0:0:0:0:0
[ 3] 7.00-8.00 sec 1.13 MBytes 9.51 Mbits/sec
126.512/88.404/220.175/42.237 ms (9/132089) 172 KByte 9.396784
527=524:1:2:0:0:0:0:0
[SUM] 7.00-8.00 sec 4.96 MBytes 41.6 Mbits/sec
2168=2152:11:5:0:0:0:0:0
With source side rate limiting to 9 mb/s per stream.
[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
--trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N -b 9m
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001 with pid 108884 (4
flows)
Write buffer size: 131072 Byte
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local *.*.*.85%enp4s0 port 46448 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=3) (qack)
(icwnd/mss/irtt=14/1448/10666) (ct=10.70 ms) on 2023-01-06 12:27:45
(PST)
[ 3] local *.*.*.85%enp4s0 port 46460 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=6) (qack)
(icwnd/mss/irtt=14/1448/16499) (ct=16.54 ms) on 2023-01-06 12:27:45
(PST)
[ 2] local *.*.*.85%enp4s0 port 46454 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=4) (qack)
(icwnd/mss/irtt=14/1448/16580) (ct=16.86 ms) on 2023-01-06 12:27:45
(PST)
[ 4] local *.*.*.85%enp4s0 port 46458 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=5) (qack)
(icwnd/mss/irtt=14/1448/16802) (ct=16.83 ms) on 2023-01-06 12:27:45
(PST)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
Cwnd/RTT(var) NetPwr
...
[ 2] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/88055(12329) us 11.91
[ 4] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
132K/74867(11755) us 14.01
[ 1] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/89101(13134) us 11.77
[ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
131K/91451(11938) us 11.47
[SUM] 4.00-5.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
[ 2] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/85135(14580) us 13.86
[ 4] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
132K/85124(15654) us 13.86
[ 1] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/91336(11335) us 12.92
[ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
131K/89185(13499) us 13.23
[SUM] 5.00-6.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
[ 2] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/85687(13489) us 13.77
[ 4] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
132K/82803(13001) us 14.25
[ 1] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
134K/86869(15186) us 13.58
[ 3] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
131K/91447(12515) us 12.90
[SUM] 6.00-7.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
[ 2] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/81814(13168) us 12.82
[ 4] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
132K/89008(13283) us 11.78
[ 1] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
134K/89494(12151) us 11.72
[ 3] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
131K/91083(12797) us 11.51
[SUM] 7.00-8.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
[root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
------------------------------------------------------------
Server listening on TCP port 5001 with pid 233981
Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
------------------------------------------------------------
[ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46448
(trip-times) (sock=4) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11987) on 2023-01-06 12:27:45 (PST)
[ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46454
(trip-times) (sock=5) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11132) on 2023-01-06 12:27:45 (PST)
[ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46460
(trip-times) (sock=6) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/11097) on 2023-01-06 12:27:45 (PST)
[ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 46458
(trip-times) (sock=7) (peer 2.1.9-master) (qack)
(icwnd/mss/irtt=14/1448/17823) on 2023-01-06 12:27:45 (PST)
[ ID] Interval Transfer Bandwidth Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
[ 4] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
93.383/90.103/95.661/2.232 ms (8/143028) 128 KByte 12.25
451=451:0:0:0:0:0:0:0
[ 3] 0.00-1.00 sec 1.08 MBytes 9.06 Mbits/sec
96.834/95.229/102.645/2.442 ms (8/141580) 131 KByte 11.70
472=472:0:0:0:0:0:0:0
[ 1] 0.00-1.00 sec 1.10 MBytes 9.19 Mbits/sec
95.183/92.623/97.579/1.431 ms (8/143571) 131 KByte 12.07
495=495:0:0:0:0:0:0:0
[ 2] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
89.317/84.865/94.906/3.674 ms (8/143028) 122 KByte 12.81
489=489:0:0:0:0:0:0:0
[ ID] Interval Transfer Bandwidth Reads=Dist
[SUM] 0.00-1.00 sec 4.36 MBytes 36.6 Mbits/sec
1907=1907:0:0:0:0:0:0:0
[ 4] 1.00-2.00 sec 1.07 MBytes 8.95 Mbits/sec
92.649/89.987/95.036/1.828 ms (9/124314) 96.5 KByte 12.08
492=492:0:0:0:0:0:0:0
[ 3] 1.00-2.00 sec 1.06 MBytes 8.93 Mbits/sec
96.305/95.647/96.794/0.432 ms (9/123992) 100 KByte 11.59
480=480:0:0:0:0:0:0:0
[ 1] 1.00-2.00 sec 1.06 MBytes 8.89 Mbits/sec
92.578/90.866/94.145/1.371 ms (9/123510) 95.8 KByte 12.01
513=513:0:0:0:0:0:0:0
[ 2] 1.00-2.00 sec 1.07 MBytes 8.96 Mbits/sec
90.767/87.984/94.352/1.944 ms (9/124475) 94.7 KByte 12.34
489=489:0:0:0:0:0:0:0
[SUM] 1.00-2.00 sec 4.26 MBytes 35.7 Mbits/sec
1974=1974:0:0:0:0:0:0:0
[ 4] 2.00-3.00 sec 1.09 MBytes 9.13 Mbits/sec
93.977/91.795/96.561/1.693 ms (8/142656) 112 KByte 12.14
497=497:0:0:0:0:0:0:0
[ 3] 2.00-3.00 sec 1.08 MBytes 9.04 Mbits/sec
96.544/95.815/97.798/0.693 ms (8/141208) 114 KByte 11.70
503=503:0:0:0:0:0:0:0
[ 1] 2.00-3.00 sec 1.07 MBytes 9.01 Mbits/sec
93.970/91.193/96.325/1.796 ms (8/140846) 111 KByte 11.99
509=509:0:0:0:0:0:0:0
[ 2] 2.00-3.00 sec 1.08 MBytes 9.10 Mbits/sec
92.843/90.216/96.355/2.040 ms (8/142113) 111 KByte 12.25
509=509:0:0:0:0:0:0:0
[SUM] 2.00-3.00 sec 4.32 MBytes 36.3 Mbits/sec
2018=2018:0:0:0:0:0:0:0
[ 4] 3.00-4.00 sec 1.06 MBytes 8.86 Mbits/sec
93.222/89.063/96.104/2.346 ms (9/123027) 96.1 KByte 11.88
487=487:0:0:0:0:0:0:0
[ 3] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
96.277/95.051/97.230/0.767 ms (9/124636) 101 KByte 11.65
489=489:0:0:0:0:0:0:0
[ 1] 3.00-4.00 sec 1.08 MBytes 9.02 Mbits/sec
93.899/88.732/96.972/2.737 ms (9/125280) 98.6 KByte 12.01
493=493:0:0:0:0:0:0:0
[ 2] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
92.490/89.862/95.265/1.796 ms (9/124636) 96.6 KByte 12.13
493=493:0:0:0:0:0:0:0
[SUM] 3.00-4.00 sec 4.27 MBytes 35.8 Mbits/sec
1962=1962:0:0:0:0:0:0:0
[ 4] 4.00-5.00 sec 1.07 MBytes 9.00 Mbits/sec
92.431/81.888/96.221/4.524 ms (9/124958) 96.8 KByte 12.17
498=498:0:0:0:0:0:0:0
[ 1] 4.00-5.00 sec 1.07 MBytes 8.97 Mbits/sec
95.018/93.445/96.200/0.957 ms (9/124636) 99.3 KByte 11.81
490=490:0:0:0:0:0:0:0
[ 2] 4.00-5.00 sec 1.06 MBytes 8.90 Mbits/sec
93.874/86.485/95.672/2.810 ms (9/123671) 97.3 KByte 11.86
481=481:0:0:0:0:0:0:0
[ 3] 4.00-5.00 sec 1.08 MBytes 9.09 Mbits/sec
95.737/93.881/97.197/0.972 ms (9/126245) 101 KByte 11.87
484=484:0:0:0:0:0:0:0
[SUM] 4.00-5.00 sec 4.29 MBytes 36.0 Mbits/sec
1953=1953:0:0:0:0:0:0:0
[ 4] 5.00-6.00 sec 1.09 MBytes 9.13 Mbits/sec
92.908/86.844/95.994/3.012 ms (8/142656) 111 KByte 12.28
467=467:0:0:0:0:0:0:0
[ 3] 5.00-6.00 sec 1.07 MBytes 8.94 Mbits/sec
96.593/95.343/97.660/0.876 ms (8/139760) 113 KByte 11.58
478=478:0:0:0:0:0:0:0
[ 1] 5.00-6.00 sec 1.08 MBytes 9.03 Mbits/sec
95.021/91.421/97.167/1.893 ms (8/141027) 112 KByte 11.87
491=491:0:0:0:0:0:0:0
[ 2] 5.00-6.00 sec 1.08 MBytes 9.06 Mbits/sec
92.162/82.720/97.692/5.060 ms (8/141570) 109 KByte 12.29
488=488:0:0:0:0:0:0:0
[SUM] 5.00-6.00 sec 4.31 MBytes 36.2 Mbits/sec
1924=1924:0:0:0:0:0:0:0
[ 4] 6.00-7.00 sec 1.04 MBytes 8.70 Mbits/sec
92.793/85.343/96.967/3.552 ms (9/120775) 93.9 KByte 11.71
485=485:0:0:0:0:0:0:0
[ 2] 6.00-7.00 sec 1.05 MBytes 8.79 Mbits/sec
91.679/84.479/96.760/3.975 ms (9/122062) 93.8 KByte 11.98
472=472:0:0:0:0:0:0:0
[ 3] 6.00-7.00 sec 1.06 MBytes 8.88 Mbits/sec
96.982/95.933/98.371/0.680 ms (9/123349) 100 KByte 11.45
477=477:0:0:0:0:0:0:0
[ 1] 6.00-7.00 sec 1.05 MBytes 8.80 Mbits/sec
94.342/91.660/96.025/1.660 ms (9/122223) 96.7 KByte 11.66
494=494:0:0:0:0:0:0:0
[SUM] 6.00-7.00 sec 4.19 MBytes 35.2 Mbits/sec
1928=1928:0:0:0:0:0:0:0
[ 4] 7.00-8.00 sec 1.10 MBytes 9.25 Mbits/sec
92.515/88.182/96.351/2.538 ms (8/144466) 112 KByte 12.49
510=510:0:0:0:0:0:0:0
[ 3] 7.00-8.00 sec 1.09 MBytes 9.13 Mbits/sec
96.580/95.737/98.977/1.098 ms (8/142656) 115 KByte 11.82
480=480:0:0:0:0:0:0:0
[ 1] 7.00-8.00 sec 1.10 MBytes 9.21 Mbits/sec
95.269/91.719/97.514/2.126 ms (8/143923) 115 KByte 12.09
515=515:0:0:0:0:0:0:0
[ 2] 7.00-8.00 sec 1.11 MBytes 9.29 Mbits/sec
90.073/84.700/96.176/4.324 ms (8/145190) 110 KByte 12.90
508=508:0:0:0:0:0:0:0
[SUM] 7.00-8.00 sec 4.40 MBytes 36.9 Mbits/sec
2013=2013:0:0:0:0:0:0:0
Bob
>> -----Original Message-----
>
>> From: LibreQoS <libreqos-bounces@lists.bufferbloat.net> On Behalf Of
> Dave Taht
>
>> via LibreQoS
>
>> Sent: Wednesday, January 4, 2023 12:26 PM
>
>> Subject: [LibreQoS] the grinch meets cloudflare's christmas present
>
>>
>
>> Please try the new, the shiny, the really wonderful test here:
>
>>
> https://urldefense.com/v3/__https://speed.cloudflare.com/__;!!BhdT!iZcFJ8WVU9S
> [1]
>
>>
> 9zz5t456oxkfObrC5Xb9j5AG8UO3DqD5x4GAJkawZr0iGwEUtF0_09U8mCDnAkrJ9QEMHGbCMKVw$
> [1]
>
>>
>
>> I would really appreciate some independent verification of
>
>> measurements using this tool. In my brief experiments it appears -
> as
>
>> all the commercial tools to date - to dramatically understate the
>
>> bufferbloat, on my LTE, (and my starlink terminal is out being
>
>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>
> [acm]
>
> Hi Dave, I made some time to test "cloudflare's christmas present"
> yesterday.
>
> I'm on DOCSIS 3.1 service with 1Gbps Down. The Upstream has a "turbo"
> mode with 40-50Mbps for the first ~3 sec, then steady-state about
> 23Mbps.
>
> When I saw the ~620Mbps Downstream measurement, I was ready to
> complain that even the IP-Layer Capacity was grossly underestimated.
> In addition, the Latency measurements seem very low (as you asserted),
> although the cloud server was “nearby”.
>
> However, I found that Ookla and the ISP-provided measurement were also
> reporting ~600Mbps! So the cloudflare Downstream capacity (or
> throughput?) measurement was consistent with others. Our UDPST server
> was unreachable, otherwise I would have added that measurement, too.
>
> The Upstream measurement graph seems to illustrate the “turbo”
> mode, with the dip after attaining 44.5Mbps.
>
> UDPST saturates the uplink and we measure the full 250ms of the
> Upstream buffer. Cloudflare’s latency measurements don’t even come
> close.
>
> Al
>
>
>
> Links:
> ------
> [1]
> https://urldefense.com/v3/__https:/speed.cloudflare.com/__;!!BhdT!iZcFJ8WVU9S9zz5t456oxkfObrC5Xb9j5AG8UO3DqD5x4GAJkawZr0iGwEUtF0_09U8mCDnAkrJ9QEMHGbCMKVw$
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare's christmas present
2023-01-06 20:38 ` [Starlink] [Rpm] " rjmcmahon
@ 2023-01-06 20:47 ` rjmcmahon
[not found] ` <89D796E75967416B9723211C183A8396@SRA6>
1 sibling, 0 replies; 49+ messages in thread
From: rjmcmahon @ 2023-01-06 20:47 UTC (permalink / raw)
To: MORTON JR., AL
Cc: Dave Taht, bloat, libreqos, Cake List, Dave Taht via Starlink,
Rpm, IETF IPPM WG
For responsiveness, the bounceback seems reasonable even with upstream
competition. A bunch more TCP retries, though.
[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
--trip-times -i 1 --bounceback -t 3
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001 with pid 111022 (1
flows)
Write buffer size: 100 Byte
Bursting: 100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs &
tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local *.*.*.86%enp7s0 port 36976 connected with *.*.*.123 port
5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times)
(sock=3) (icwnd/mss/irtt=14/1448/9862) (ct=9.90 ms) on 2023-01-06
12:42:18 (PST)
[ ID] Interval Transfer Bandwidth BB
cnt=avg/min/max/stdev Rtry Cwnd/RTT RPS
[ 1] 0.00-1.00 sec 1.95 KBytes 16.0 Kbits/sec
10=12.195/9.298/16.457/2.679 ms 0 14K/11327 us 82 rps
[ 1] 1.00-2.00 sec 1.95 KBytes 16.0 Kbits/sec
10=12.613/9.271/15.489/2.788 ms 0 14K/12165 us 79 rps
[ 1] 2.00-3.00 sec 1.95 KBytes 16.0 Kbits/sec
10=13.390/9.376/15.986/2.520 ms 0 14K/13164 us 75 rps
[ 1] 0.00-3.03 sec 5.86 KBytes 15.8 Kbits/sec
30=12.733/9.271/16.457/2.620 ms 0 14K/15138 us 79 rps
[ 1] 0.00-3.03 sec OWD Delays (ms) Cnt=30 To=7.937/4.634/11.327/2.457
From=4.778/4.401/5.350/0.258 Asymmetry=3.166/0.097/6.311/2.318 79 rps
[ 1] 0.00-3.03 sec BB8(f)-PDF:
bin(w=100us):cnt(30)=93:2,94:3,95:2,97:1,100:1,102:1,105:1,114:2,142:1,143:1,144:2,145:3,146:1,147:1,148:1,151:1,152:1,154:1,155:1,156:1,160:1,165:1
(5.00/95.00/99.7%=93/160/165,Outliers=0,obl/obu=0/0)
[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
--trip-times -i 1 --bounceback -t 3 --bounceback-congest=up,4
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001 with pid 111069 (1
flows)
Write buffer size: 100 Byte
Bursting: 100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs &
tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 2] local *.*.*.85%enp4s0 port 38342 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=3) (qack)
(icwnd/mss/irtt=14/1448/10613) (ct=10.66 ms) on 2023-01-06 12:42:36
(PST)
[ 1] local *.*.*.85%enp4s0 port 38360 connected with *.*.*.123 port
5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times)
(sock=4) (icwnd/mss/irtt=14/1448/14901) (ct=14.96 ms) on 2023-01-06
12:42:36 (PST)
[ 3] local *.*.*.85%enp4s0 port 38386 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=7) (qack)
(icwnd/mss/irtt=14/1448/15295) (ct=15.31 ms) on 2023-01-06 12:42:36
(PST)
[ 4] local *.*.*.85%enp4s0 port 38348 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=5) (qack)
(icwnd/mss/irtt=14/1448/14901) (ct=14.95 ms) on 2023-01-06 12:42:36
(PST)
[ 5] local *.*.*.85%enp4s0 port 38372 connected with *.*.*.123 port
5001 (prefetch=16384) (trip-times) (sock=6) (qack)
(icwnd/mss/irtt=14/1448/15371) (ct=15.42 ms) on 2023-01-06 12:42:36
(PST)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
Cwnd/RTT(var) NetPwr
[ 3] 0.00-1.00 sec 1.29 MBytes 10.8 Mbits/sec 13502/0 115
28K/22594(904) us 59.76
[ 4] 0.00-1.00 sec 1.63 MBytes 13.6 Mbits/sec 17048/0 140
42K/22728(568) us 75.01
[ ID] Interval Transfer Bandwidth BB
cnt=avg/min/max/stdev Rtry Cwnd/RTT RPS
[ 1] 0.00-1.00 sec 1.95 KBytes 16.0 Kbits/sec
10=76.140/17.224/123.195/43.168 ms 0 14K/68136 us 13 rps
[ 5] 0.00-1.00 sec 1.04 MBytes 8.72 Mbits/sec 10893/0 82
25K/23400(644) us 46.55
[SUM] 0.00-1.00 sec 3.95 MBytes 33.2 Mbits/sec 41443/0 337
[ 2] 0.00-1.00 sec 1.10 MBytes 9.25 Mbits/sec 11566/0 77
22K/23557(432) us 49.10
[ 3] 1.00-2.00 sec 1.24 MBytes 10.4 Mbits/sec 13037/0 20
28K/14427(503) us 90.37
[ 4] 1.00-2.00 sec 1.43 MBytes 12.0 Mbits/sec 14954/0 31
12K/13348(407) us 112
[ 1] 1.00-2.00 sec 1.95 KBytes 16.0 Kbits/sec
10=14.581/10.801/20.356/3.599 ms 0 14K/27791 us 69 rps
[ 5] 1.00-2.00 sec 1.26 MBytes 10.6 Mbits/sec 13191/0 16
12K/14749(675) us 89.44
[SUM] 1.00-2.00 sec 3.93 MBytes 32.9 Mbits/sec 41182/0 67
[ 2] 1.00-2.00 sec 1000 KBytes 8.19 Mbits/sec 10237/0 13
19K/14467(1068) us 70.76
[ 3] 2.00-3.00 sec 1.33 MBytes 11.2 Mbits/sec 13994/0 4
24K/20749(495) us 67.44
[ 4] 2.00-3.00 sec 1.20 MBytes 10.1 Mbits/sec 12615/0 3
31K/20877(718) us 60.43
[ 1] 2.00-3.00 sec 1.95 KBytes 16.0 Kbits/sec
10=11.298/9.407/14.245/1.330 ms 0 14K/15474 us 89 rps
[ 5] 2.00-3.00 sec 1.08 MBytes 9.03 Mbits/sec 11284/0 3
28K/21031(430) us 53.65
[SUM] 2.00-3.00 sec 3.61 MBytes 30.3 Mbits/sec 37893/0 10
[ 2] 2.00-3.00 sec 1.29 MBytes 10.8 Mbits/sec 13492/0 3
29K/20409(688) us 66.11
[ 3] 0.00-3.03 sec 3.87 MBytes 10.7 Mbits/sec 40534/0 139
25K/20645(557) us 64.85
[ 5] 0.00-3.03 sec 3.37 MBytes 9.35 Mbits/sec 35369/0 101
29K/20489(668) us 57.02
[ 4] 0.00-3.03 sec 4.26 MBytes 11.8 Mbits/sec 44618/0 174
32K/21240(961) us 69.40
[ 2] 0.00-3.03 sec 3.37 MBytes 9.31 Mbits/sec 35296/0 94
19K/21504(948) us 54.13
[ 1] 0.00-3.14 sec 7.81 KBytes 20.4 Kbits/sec
40=28.332/5.611/123.195/34.940 ms 0 14K/14000 us 35 rps
[ 1] 0.00-3.14 sec OWD Delays (ms) Cnt=40
To=23.730/1.110/118.744/34.957 From=4.567/4.356/5.171/0.141
Asymmetry=19.332/0.189/114.294/34.869 35 rps
[ 1] 0.00-3.14 sec BB8(f)-PDF:
bin(w=100us):cnt(40)=57:1,94:2,95:2,96:2,98:1,101:1,106:1,109:2,111:1,112:2,113:1,115:1,118:1,119:2,143:2,145:1,146:1,152:1,158:1,173:1,176:1,194:1,195:1,204:1,205:1,274:1,554:1,790:1,925:1,1125:1,1126:1,1225:1,1232:1
(5.00/95.00/99.7%=94/1225/1232,Outliers=0,obl/obu=0/0)
[SUM] 0.00-3.11 sec 11.5 MBytes 31.0 Mbits/sec 120521/0 414
[ CT] final connect times (min/avg/max/stdev) =
10.661/14.261/15.423/2023.369 ms (tot/err) = 5/0
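(To read the bounceback means in rpm-list terms: rps is just the reciprocal
of the mean bounceback time, and "rounds per minute" is 60x that. A small
sketch against the numbers above:)

    # rps = 1 / mean bounceback time; responsiveness (RPM) = 60 * rps.
    def rps_and_rpm(mean_bb_ms):
        rps = 1000.0 / mean_bb_ms
        return rps, 60.0 * rps

    # idle run:   mean 12.195 ms -> ~82 rps (~4920 RPM), matching the output
    # loaded run: mean 28.332 ms -> ~35 rps (~2118 RPM)
    for mean_ms in (12.195, 28.332):
        rps, rpm = rps_and_rpm(mean_ms)
        print(f"{mean_ms:.3f} ms -> {rps:.0f} rps, {rpm:.0f} RPM")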
> Some thoughts: one is not to use UDP for testing here. Also, these speed
> tests give network engineers little to no information about what's
> going on. Iperf 2 may better assist network engineers, but then I'm
> biased ;)
>
> Running iperf 2 (https://sourceforge.net/projects/iperf2/) with
> --trip-times. Note that the sampling and central-limit-theorem averaging
> hide the real distributions (use --histograms to get those).
>
> Below are 4 parallel TCP streams from my home to one of my servers in
> the cloud. First where TCP is limited per CCA. Second is source-side
> write rate limiting. Things to note:
>
> o) connect times for both at 10-15 ms
> o) multiple TCP retries on a few writes - one case is 4 with 5 writes.
> Source-side pacing eliminates retries
> o) Fairness with CCA isn't great but quite good with source-side write
> pacing
> o) Queue depth with CCA is about 150 KBytes vs about 100 KBytes with
> source-side pacing
> o) min write-to-read is about 80 ms for both
> o) max is 220 ms vs 97 ms
> o) stdev for CCA write/read is 30 ms vs 3 ms
> o) TCP RTT is 20 ms w/CCA and 90 ms with ssp - seems odd here as
> TCP_QUICKACK and TCP_NODELAY are both enabled.
>
> [ CT] final connect times (min/avg/max/stdev) =
> 10.326/13.522/14.986/2150.329 ms (tot/err) = 4/0
> [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
> --trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N
> ------------------------------------------------------------
> Client connecting to (**hidden**), TCP port 5001 with pid 107678 (4
> flows)
> Write buffer size: 131072 Byte
> TOS set to 0x0 and nodelay (Nagle off)
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
> Event based writes (pending queue watermark at 16384 bytes)
> ------------------------------------------------------------
> [ 1] local *.*.*.85%enp4s0 port 42480 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=3) (qack)
> (icwnd/mss/irtt=14/1448/10534) (ct=10.63 ms) on 2023-01-06 12:17:56
> (PST)
> [ 4] local *.*.*.85%enp4s0 port 42488 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=5) (qack)
> (icwnd/mss/irtt=14/1448/14023) (ct=14.08 ms) on 2023-01-06 12:17:56
> (PST)
> [ 3] local *.*.*.85%enp4s0 port 42502 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=6) (qack)
> (icwnd/mss/irtt=14/1448/14642) (ct=14.70 ms) on 2023-01-06 12:17:56
> (PST)
> [ 2] local *.*.*.85%enp4s0 port 42484 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=4) (qack)
> (icwnd/mss/irtt=14/1448/14728) (ct=14.79 ms) on 2023-01-06 12:17:56
> (PST)
> [ ID] Interval Transfer Bandwidth Write/Err Rtry
> Cwnd/RTT(var) NetPwr
> ...
> [ 4] 4.00-5.00 sec 1.38 MBytes 11.5 Mbits/sec 11/0 3
> 29K/21088(1142) us 68.37
> [ 2] 4.00-5.00 sec 1.62 MBytes 13.6 Mbits/sec 13/0 2
> 31K/19284(612) us 88.36
> [ 1] 4.00-5.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
> 16K/18996(658) us 48.30
> [ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 5
> 18K/18133(208) us 57.83
> [SUM] 4.00-5.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 15
> [ 4] 5.00-6.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 4
> 29K/14717(489) us 89.06
> [ 1] 5.00-6.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 4
> 16K/15874(408) us 66.06
> [ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
> 16K/15826(382) us 74.54
> [ 2] 5.00-6.00 sec 1.50 MBytes 12.6 Mbits/sec 12/0 6
> 9K/14878(557) us 106
> [SUM] 5.00-6.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 18
> [ 4] 6.00-7.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
> 25K/15472(496) us 119
> [ 2] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 2
> 26K/16417(427) us 63.87
> [ 1] 6.00-7.00 sec 1.25 MBytes 10.5 Mbits/sec 10/0 5
> 16K/16268(679) us 80.57
> [ 3] 6.00-7.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 6
> 15K/16629(799) us 63.06
> [SUM] 6.00-7.00 sec 5.00 MBytes 41.9 Mbits/sec 40/0 17
> [ 4] 7.00-8.00 sec 1.75 MBytes 14.7 Mbits/sec 14/0 4
> 22K/13986(519) us 131
> [ 1] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 4
> 16K/12679(377) us 93.04
> [ 3] 7.00-8.00 sec 896 KBytes 7.34 Mbits/sec 7/0 5
> 14K/12971(367) us 70.74
> [ 2] 7.00-8.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 6
> 15K/14740(779) us 80.03
> [SUM] 7.00-8.00 sec 4.88 MBytes 40.9 Mbits/sec 39/0 19
>
> [root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
> ------------------------------------------------------------
> Server listening on TCP port 5001 with pid 233615
> Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
> ------------------------------------------------------------
> [ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42480 (trip-times) (sock=4) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11636) on 2023-01-06 12:17:56 (PST)
> [ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42502 (trip-times) (sock=5) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11898) on 2023-01-06 12:17:56 (PST)
> [ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42484 (trip-times) (sock=6) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11938) on 2023-01-06 12:17:56 (PST)
> [ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 42488 (trip-times) (sock=7) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11919) on 2023-01-06 12:17:56 (PST)
> [ ID] Interval Transfer Bandwidth Burst Latency
> avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
> ...
> [ 2] 4.00-5.00 sec 1.06 MBytes 8.86 Mbits/sec
> 129.819/90.391/186.075/31.346 ms (9/123080) 154 KByte 8.532803
> 467=461:6:0:0:0:0:0:0
> [ 3] 4.00-5.00 sec 1.52 MBytes 12.8 Mbits/sec
> 103.552/82.653/169.274/28.382 ms (12/132854) 149 KByte 15.40
> 646=643:1:2:0:0:0:0:0
> [ 4] 4.00-5.00 sec 1.39 MBytes 11.6 Mbits/sec
> 107.503/66.843/143.038/24.269 ms (11/132294) 149 KByte 13.54
> 619=617:1:1:0:0:0:0:0
> [ 1] 4.00-5.00 sec 988 KBytes 8.10 Mbits/sec
> 141.389/119.961/178.785/18.812 ms (7/144593) 170 KByte 7.158641
> 409=404:5:0:0:0:0:0:0
> [SUM] 4.00-5.00 sec 4.93 MBytes 41.4 Mbits/sec
> 2141=2125:13:3:0:0:0:0:0
> [ 4] 5.00-6.00 sec 1.29 MBytes 10.8 Mbits/sec
> 118.943/86.253/176.128/31.248 ms (10/135098) 164 KByte 11.36
> 511=506:2:3:0:0:0:0:0
> [ 2] 5.00-6.00 sec 1.09 MBytes 9.17 Mbits/sec
> 139.821/102.418/218.875/40.422 ms (9/127424) 148 KByte 8.202049
> 487=484:2:1:0:0:0:0:0
> [ 3] 5.00-6.00 sec 1.51 MBytes 12.6 Mbits/sec
> 102.146/77.085/140.893/18.441 ms (13/121520) 151 KByte 15.47
> 640=636:1:3:0:0:0:0:0
> [ 1] 5.00-6.00 sec 981 KBytes 8.04 Mbits/sec
> 161.901/105.582/219.931/36.260 ms (8/125614) 134 KByte 6.206944
> 415=413:2:0:0:0:0:0:0
> [SUM] 5.00-6.00 sec 4.85 MBytes 40.7 Mbits/sec
> 2053=2039:7:7:0:0:0:0:0
> [ 4] 6.00-7.00 sec 1.74 MBytes 14.6 Mbits/sec
> 88.846/74.297/101.859/7.118 ms (14/130526) 156 KByte 20.57
> 711=707:3:1:0:0:0:0:0
> [ 1] 6.00-7.00 sec 1.22 MBytes 10.2 Mbits/sec
> 120.639/100.257/157.567/21.770 ms (10/127568) 157 KByte 10.57
> 494=488:5:1:0:0:0:0:0
> [ 2] 6.00-7.00 sec 1015 KBytes 8.32 Mbits/sec
> 144.632/124.368/171.349/16.597 ms (8/129958) 143 KByte 7.188321
> 408=403:5:0:0:0:0:0:0
> [ 3] 6.00-7.00 sec 1.02 MBytes 8.60 Mbits/sec
> 143.516/102.322/173.001/24.089 ms (8/134302) 146 KByte 7.486359
> 484=480:4:0:0:0:0:0:0
> [SUM] 6.00-7.00 sec 4.98 MBytes 41.7 Mbits/sec
> 2097=2078:17:2:0:0:0:0:0
> [ 4] 7.00-8.00 sec 1.77 MBytes 14.9 Mbits/sec
> 85.406/65.797/103.418/12.609 ms (14/132595) 153 KByte 21.74
> 692=687:2:3:0:0:0:0:0
> [ 2] 7.00-8.00 sec 957 KBytes 7.84 Mbits/sec
> 153.936/131.452/191.464/19.361 ms (7/140042) 160 KByte 6.368199
> 429=425:4:0:0:0:0:0:0
> [ 1] 7.00-8.00 sec 1.13 MBytes 9.44 Mbits/sec
> 131.146/109.737/166.774/22.035 ms (9/131124) 146 KByte 8.998528
> 520=516:4:0:0:0:0:0:0
> [ 3] 7.00-8.00 sec 1.13 MBytes 9.51 Mbits/sec
> 126.512/88.404/220.175/42.237 ms (9/132089) 172 KByte 9.396784
> 527=524:1:2:0:0:0:0:0
> [SUM] 7.00-8.00 sec 4.96 MBytes 41.6 Mbits/sec
> 2168=2152:11:5:0:0:0:0:0
>
> With source-side rate limiting of 9 Mb/s per stream.
>
> [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e
> --trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N -b 9m
> ------------------------------------------------------------
> Client connecting to (**hidden**), TCP port 5001 with pid 108884 (4
> flows)
> Write buffer size: 131072 Byte
> TOS set to 0x0 and nodelay (Nagle off)
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
> Event based writes (pending queue watermark at 16384 bytes)
> ------------------------------------------------------------
> [ 1] local *.*.*.85%enp4s0 port 46448 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=3) (qack)
> (icwnd/mss/irtt=14/1448/10666) (ct=10.70 ms) on 2023-01-06 12:27:45
> (PST)
> [ 3] local *.*.*.85%enp4s0 port 46460 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=6) (qack)
> (icwnd/mss/irtt=14/1448/16499) (ct=16.54 ms) on 2023-01-06 12:27:45
> (PST)
> [ 2] local *.*.*.85%enp4s0 port 46454 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=4) (qack)
> (icwnd/mss/irtt=14/1448/16580) (ct=16.86 ms) on 2023-01-06 12:27:45
> (PST)
> [ 4] local *.*.*.85%enp4s0 port 46458 connected with *.*.*.123 port
> 5001 (prefetch=16384) (trip-times) (sock=5) (qack)
> (icwnd/mss/irtt=14/1448/16802) (ct=16.83 ms) on 2023-01-06 12:27:45
> (PST)
> [ ID] Interval Transfer Bandwidth Write/Err Rtry
> Cwnd/RTT(var) NetPwr
> ...
> [ 2] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 134K/88055(12329) us 11.91
> [ 4] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 132K/74867(11755) us 14.01
> [ 1] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 134K/89101(13134) us 11.77
> [ 3] 4.00-5.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 131K/91451(11938) us 11.47
> [SUM] 4.00-5.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
> [ 2] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 134K/85135(14580) us 13.86
> [ 4] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 132K/85124(15654) us 13.86
> [ 1] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 134K/91336(11335) us 12.92
> [ 3] 5.00-6.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 131K/89185(13499) us 13.23
> [SUM] 5.00-6.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
> [ 2] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 134K/85687(13489) us 13.77
> [ 4] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 132K/82803(13001) us 14.25
> [ 1] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 134K/86869(15186) us 13.58
> [ 3] 6.00-7.00 sec 1.12 MBytes 9.44 Mbits/sec 9/0 0
> 131K/91447(12515) us 12.90
> [SUM] 6.00-7.00 sec 4.50 MBytes 37.7 Mbits/sec 36/0 0
> [ 2] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 134K/81814(13168) us 12.82
> [ 4] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 132K/89008(13283) us 11.78
> [ 1] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 134K/89494(12151) us 11.72
> [ 3] 7.00-8.00 sec 1.00 MBytes 8.39 Mbits/sec 8/0 0
> 131K/91083(12797) us 11.51
> [SUM] 7.00-8.00 sec 4.00 MBytes 33.6 Mbits/sec 32/0 0
>
> [root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m
> ------------------------------------------------------------
> Server listening on TCP port 5001 with pid 233981
> Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
> TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
> ------------------------------------------------------------
> [ 1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46448 (trip-times) (sock=4) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11987) on 2023-01-06 12:27:45 (PST)
> [ 2] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46454 (trip-times) (sock=5) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11132) on 2023-01-06 12:27:45 (PST)
> [ 3] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46460 (trip-times) (sock=6) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/11097) on 2023-01-06 12:27:45 (PST)
> [ 4] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port
> 46458 (trip-times) (sock=7) (peer 2.1.9-master) (qack)
> (icwnd/mss/irtt=14/1448/17823) on 2023-01-06 12:27:45 (PST)
> [ ID] Interval Transfer Bandwidth Burst Latency
> avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
> [ 4] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
> 93.383/90.103/95.661/2.232 ms (8/143028) 128 KByte 12.25
> 451=451:0:0:0:0:0:0:0
> [ 3] 0.00-1.00 sec 1.08 MBytes 9.06 Mbits/sec
> 96.834/95.229/102.645/2.442 ms (8/141580) 131 KByte 11.70
> 472=472:0:0:0:0:0:0:0
> [ 1] 0.00-1.00 sec 1.10 MBytes 9.19 Mbits/sec
> 95.183/92.623/97.579/1.431 ms (8/143571) 131 KByte 12.07
> 495=495:0:0:0:0:0:0:0
> [ 2] 0.00-1.00 sec 1.09 MBytes 9.15 Mbits/sec
> 89.317/84.865/94.906/3.674 ms (8/143028) 122 KByte 12.81
> 489=489:0:0:0:0:0:0:0
> [ ID] Interval Transfer Bandwidth Reads=Dist
> [SUM] 0.00-1.00 sec 4.36 MBytes 36.6 Mbits/sec
> 1907=1907:0:0:0:0:0:0:0
> [ 4] 1.00-2.00 sec 1.07 MBytes 8.95 Mbits/sec
> 92.649/89.987/95.036/1.828 ms (9/124314) 96.5 KByte 12.08
> 492=492:0:0:0:0:0:0:0
> [ 3] 1.00-2.00 sec 1.06 MBytes 8.93 Mbits/sec
> 96.305/95.647/96.794/0.432 ms (9/123992) 100 KByte 11.59
> 480=480:0:0:0:0:0:0:0
> [ 1] 1.00-2.00 sec 1.06 MBytes 8.89 Mbits/sec
> 92.578/90.866/94.145/1.371 ms (9/123510) 95.8 KByte 12.01
> 513=513:0:0:0:0:0:0:0
> [ 2] 1.00-2.00 sec 1.07 MBytes 8.96 Mbits/sec
> 90.767/87.984/94.352/1.944 ms (9/124475) 94.7 KByte 12.34
> 489=489:0:0:0:0:0:0:0
> [SUM] 1.00-2.00 sec 4.26 MBytes 35.7 Mbits/sec
> 1974=1974:0:0:0:0:0:0:0
> [ 4] 2.00-3.00 sec 1.09 MBytes 9.13 Mbits/sec
> 93.977/91.795/96.561/1.693 ms (8/142656) 112 KByte 12.14
> 497=497:0:0:0:0:0:0:0
> [ 3] 2.00-3.00 sec 1.08 MBytes 9.04 Mbits/sec
> 96.544/95.815/97.798/0.693 ms (8/141208) 114 KByte 11.70
> 503=503:0:0:0:0:0:0:0
> [ 1] 2.00-3.00 sec 1.07 MBytes 9.01 Mbits/sec
> 93.970/91.193/96.325/1.796 ms (8/140846) 111 KByte 11.99
> 509=509:0:0:0:0:0:0:0
> [ 2] 2.00-3.00 sec 1.08 MBytes 9.10 Mbits/sec
> 92.843/90.216/96.355/2.040 ms (8/142113) 111 KByte 12.25
> 509=509:0:0:0:0:0:0:0
> [SUM] 2.00-3.00 sec 4.32 MBytes 36.3 Mbits/sec
> 2018=2018:0:0:0:0:0:0:0
> [ 4] 3.00-4.00 sec 1.06 MBytes 8.86 Mbits/sec
> 93.222/89.063/96.104/2.346 ms (9/123027) 96.1 KByte 11.88
> 487=487:0:0:0:0:0:0:0
> [ 3] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
> 96.277/95.051/97.230/0.767 ms (9/124636) 101 KByte 11.65
> 489=489:0:0:0:0:0:0:0
> [ 1] 3.00-4.00 sec 1.08 MBytes 9.02 Mbits/sec
> 93.899/88.732/96.972/2.737 ms (9/125280) 98.6 KByte 12.01
> 493=493:0:0:0:0:0:0:0
> [ 2] 3.00-4.00 sec 1.07 MBytes 8.97 Mbits/sec
> 92.490/89.862/95.265/1.796 ms (9/124636) 96.6 KByte 12.13
> 493=493:0:0:0:0:0:0:0
> [SUM] 3.00-4.00 sec 4.27 MBytes 35.8 Mbits/sec
> 1962=1962:0:0:0:0:0:0:0
> [ 4] 4.00-5.00 sec 1.07 MBytes 9.00 Mbits/sec
> 92.431/81.888/96.221/4.524 ms (9/124958) 96.8 KByte 12.17
> 498=498:0:0:0:0:0:0:0
> [ 1] 4.00-5.00 sec 1.07 MBytes 8.97 Mbits/sec
> 95.018/93.445/96.200/0.957 ms (9/124636) 99.3 KByte 11.81
> 490=490:0:0:0:0:0:0:0
> [ 2] 4.00-5.00 sec 1.06 MBytes 8.90 Mbits/sec
> 93.874/86.485/95.672/2.810 ms (9/123671) 97.3 KByte 11.86
> 481=481:0:0:0:0:0:0:0
> [ 3] 4.00-5.00 sec 1.08 MBytes 9.09 Mbits/sec
> 95.737/93.881/97.197/0.972 ms (9/126245) 101 KByte 11.87
> 484=484:0:0:0:0:0:0:0
> [SUM] 4.00-5.00 sec 4.29 MBytes 36.0 Mbits/sec
> 1953=1953:0:0:0:0:0:0:0
> [ 4] 5.00-6.00 sec 1.09 MBytes 9.13 Mbits/sec
> 92.908/86.844/95.994/3.012 ms (8/142656) 111 KByte 12.28
> 467=467:0:0:0:0:0:0:0
> [ 3] 5.00-6.00 sec 1.07 MBytes 8.94 Mbits/sec
> 96.593/95.343/97.660/0.876 ms (8/139760) 113 KByte 11.58
> 478=478:0:0:0:0:0:0:0
> [ 1] 5.00-6.00 sec 1.08 MBytes 9.03 Mbits/sec
> 95.021/91.421/97.167/1.893 ms (8/141027) 112 KByte 11.87
> 491=491:0:0:0:0:0:0:0
> [ 2] 5.00-6.00 sec 1.08 MBytes 9.06 Mbits/sec
> 92.162/82.720/97.692/5.060 ms (8/141570) 109 KByte 12.29
> 488=488:0:0:0:0:0:0:0
> [SUM] 5.00-6.00 sec 4.31 MBytes 36.2 Mbits/sec
> 1924=1924:0:0:0:0:0:0:0
> [ 4] 6.00-7.00 sec 1.04 MBytes 8.70 Mbits/sec
> 92.793/85.343/96.967/3.552 ms (9/120775) 93.9 KByte 11.71
> 485=485:0:0:0:0:0:0:0
> [ 2] 6.00-7.00 sec 1.05 MBytes 8.79 Mbits/sec
> 91.679/84.479/96.760/3.975 ms (9/122062) 93.8 KByte 11.98
> 472=472:0:0:0:0:0:0:0
> [ 3] 6.00-7.00 sec 1.06 MBytes 8.88 Mbits/sec
> 96.982/95.933/98.371/0.680 ms (9/123349) 100 KByte 11.45
> 477=477:0:0:0:0:0:0:0
> [ 1] 6.00-7.00 sec 1.05 MBytes 8.80 Mbits/sec
> 94.342/91.660/96.025/1.660 ms (9/122223) 96.7 KByte 11.66
> 494=494:0:0:0:0:0:0:0
> [SUM] 6.00-7.00 sec 4.19 MBytes 35.2 Mbits/sec
> 1928=1928:0:0:0:0:0:0:0
> [ 4] 7.00-8.00 sec 1.10 MBytes 9.25 Mbits/sec
> 92.515/88.182/96.351/2.538 ms (8/144466) 112 KByte 12.49
> 510=510:0:0:0:0:0:0:0
> [ 3] 7.00-8.00 sec 1.09 MBytes 9.13 Mbits/sec
> 96.580/95.737/98.977/1.098 ms (8/142656) 115 KByte 11.82
> 480=480:0:0:0:0:0:0:0
> [ 1] 7.00-8.00 sec 1.10 MBytes 9.21 Mbits/sec
> 95.269/91.719/97.514/2.126 ms (8/143923) 115 KByte 12.09
> 515=515:0:0:0:0:0:0:0
> [ 2] 7.00-8.00 sec 1.11 MBytes 9.29 Mbits/sec
> 90.073/84.700/96.176/4.324 ms (8/145190) 110 KByte 12.90
> 508=508:0:0:0:0:0:0:0
> [SUM] 7.00-8.00 sec 4.40 MBytes 36.9 Mbits/sec
> 2013=2013:0:0:0:0:0:0:0
>
> Bob
>
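The inP and NetPwr columns above hang together via Little's law (bytes in
flight = rate x time in system) and rate/delay respectively. A rough
back-of-envelope check in Python - my own sketch, with the unit scaling
inferred from Bob's numbers rather than taken from iperf2 source:

    def little_inp_kbytes(mbps, latency_ms):
        # Little's law: L = lambda * W, here bytes in flight = rate * latency
        bytes_per_sec = mbps * 1e6 / 8
        return bytes_per_sec * (latency_ms / 1e3) / 1024

    def netpwr(mbps, rtt_us):
        # network power: delivered rate divided by delay, scaled by 1e-6
        bytes_per_sec = mbps * 1e6 / 8
        return bytes_per_sec / (rtt_us / 1e6) / 1e6

    # Server stream [ 4], 0.00-1.00 sec of the 9 Mb/s run:
    print(little_inp_kbytes(9.15, 93.383))  # ~104 KByte vs 128 KByte reported
    # Client stream [ 2], 5.00-6.00 sec: 9.44 Mb/s over an 85135 us RTT:
    print(netpwr(9.44, 85135))              # ~13.9, matching the 13.86 column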
>>> -----Original Message-----
>>
>>> From: LibreQoS <libreqos-bounces@lists.bufferbloat.net> On Behalf Of
>> Dave Taht
>>
>>> via LibreQoS
>>
>>> Sent: Wednesday, January 4, 2023 12:26 PM
>>
>>> Subject: [LibreQoS] the grinch meets cloudflare's christmas present
>>
>>>
>>
>>> Please try the new, the shiny, the really wonderful test here:
>>
>>> https://speed.cloudflare.com/
>>
>>> I would really appreciate some independent verification of
>>
>>> measurements using this tool. In my brief experiments it appears -
>> as
>>
>>> all the commercial tools to date - to dramatically understate the
>>
>>> bufferbloat, on my LTE, (and my starlink terminal is out being
>>
>>> hacked^H^H^H^H^H^Hworked on, so I can't measure that)
>>
>> [acm]
>>
>> Hi Dave, I made some time to test "cloudflare's christmas present"
>> yesterday.
>>
>> I'm on DOCSIS 3.1 service with 1Gbps Down. The Upstream has a "turbo"
>> mode with 40-50Mbps for the first ~3 sec, then steady-state about
>> 23Mbps.
>>
>> When I saw the ~620Mbps Downstream measurement, I was ready to
>> complain that even the IP-Layer Capacity was grossly underestimated.
>> In addition, the Latency measurements seem very low (as you asserted),
>> although the cloud server was “nearby”.
>>
>> However, I found that Ookla and the ISP-provided measurement were also
>> reporting ~600Mbps! So the cloudflare Downstream capacity (or
>> throughput?) measurement was consistent with others. Our UDPST server
>> was unreachable, otherwise I would have added that measurement, too.
>>
>> The Upstream measurement graph seems to illustrate the “turbo”
>> mode, with the dip after attaining 44.5Mbps.
>>
>> UDPST saturates the uplink and we measure the full 250ms of the
>> Upstream buffer. Cloudflare’s latency measurements don’t even come
>> close.
>>
>> Al
>>
>>
>>
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm
^ permalink raw reply [flat|nested] 49+ messages in thread
* [Starlink] insanely great waveform result for starlink
[not found] ` <CAKJdXWDOFbzsam2C_24e9DLkc18ed4uhV51hOKVjDipk1Uhc2g@mail.gmail.com>
@ 2023-01-13 4:08 ` Dave Taht
2023-01-13 4:26 ` Jonathan Bennett
0 siblings, 1 reply; 49+ messages in thread
From: Dave Taht @ 2023-01-13 4:08 UTC (permalink / raw)
To: Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 2089 bytes --]
has anyone been testing theirs this week?
---------- Forwarded message ---------
From: Luis A. Cornejo <luis.a.cornejo@gmail.com>
Date: Thu, Jan 12, 2023 at 7:30 PM
Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
cloudflare's christmas present
To: MORTON JR., AL <acmorton@att.com>
Cc: Dave Taht <dave.taht@gmail.com>, Jay Moran <jay@tp.org>, Cake List
<cake@lists.bufferbloat.net>, IETF IPPM WG <ippm@ietf.org>, Rpm
<rpm@lists.bufferbloat.net>, bloat <bloat@lists.bufferbloat.net>,
dickroy@alum.mit.edu <dickroy@alum.mit.edu>, libreqos
<libreqos@lists.bufferbloat.net>
Well Reddit has many posts talking about noticeable performance
increases for Starlink. Here is a primetime run:
waveform:
https://www.waveform.com/tools/bufferbloat?test-id=333f97c7-7cbd-406c-8d9a-9f850cb5de7d
cloudflare attached
On Thu, Jan 12, 2023 at 11:43 AM MORTON JR., AL <acmorton@att.com> wrote:
>
> Dave and Luis,
>
> Do you know if any of these tools are using ~random payloads, to defeat compression?
>
> UDPST has a CLI option:
> (m) -X Randomize datagram payload (else zeroes)
>
> When I used this option testing shipboard satellite access, download was about 115kbps.
>
> Al
>
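(A quick standalone illustration of why the payload content matters - plain
Python, not UDPST itself: a zeroed buffer compresses to almost nothing,
while random bytes are incompressible, so any compression along the path
can make the two report wildly different "capacity".)

    import os, zlib

    size = 1200                       # typical test datagram payload
    zeros = bytes(size)               # zeroed payload (UDPST default)
    rand = os.urandom(size)           # randomized payload (UDPST -X)

    print(len(zlib.compress(zeros)))  # collapses to a few dozen bytes
    print(len(zlib.compress(rand)))   # stays ~1200 bytes, incompressible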
> > -----Original Message-----
> > From: Dave Taht <dave.taht@gmail.com>
> > Sent: Thursday, January 12, 2023 11:12 AM
> > To: Luis A. Cornejo <luis.a.cornejo@gmail.com>
> > Cc: Jay Moran <jay@tp.org>; Cake List <cake@lists.bufferbloat.net>; IETF IPPM
> > WG <ippm@ietf.org>; MORTON JR., AL <acmorton@att.com>; Rpm
> > <rpm@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>;
> > dickroy@alum.mit.edu; libreqos <libreqos@lists.bufferbloat.net>
> > Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
> > cloudflare's christmas present
> >
> > Either starlink has vastly improved, or the test is way off in this case.
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #2: Web capture_12-1-2023_212657_speed.cloudflare.com.jpeg --]
[-- Type: image/jpeg, Size: 181599 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 4:08 ` [Starlink] insanely great waveform result for starlink Dave Taht
@ 2023-01-13 4:26 ` Jonathan Bennett
2023-01-13 5:13 ` Ulrich Speidel
0 siblings, 1 reply; 49+ messages in thread
From: Jonathan Bennett @ 2023-01-13 4:26 UTC (permalink / raw)
To: Dave Taht; +Cc: Dave Taht via Starlink
[-- Attachment #1.1: Type: text/plain, Size: 2766 bytes --]
Just ran a Waveform test, and ping times are still terrible while maxing
out download speeds. Upload saturation doesn't seem to affect latency,
though. Also interesting, it's giving me a valid ipv6 block again.
[image: image.png]
[image: image.png]
Jonathan Bennett
Hackaday.com
On Thu, Jan 12, 2023 at 10:08 PM Dave Taht via Starlink <
starlink@lists.bufferbloat.net> wrote:
> has anyone been testing theirs this week?
>
> ---------- Forwarded message ---------
> From: Luis A. Cornejo <luis.a.cornejo@gmail.com>
> Date: Thu, Jan 12, 2023 at 7:30 PM
> Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
> cloudflare's christmas present
> To: MORTON JR., AL <acmorton@att.com>
> Cc: Dave Taht <dave.taht@gmail.com>, Jay Moran <jay@tp.org>, Cake List
> <cake@lists.bufferbloat.net>, IETF IPPM WG <ippm@ietf.org>, Rpm
> <rpm@lists.bufferbloat.net>, bloat <bloat@lists.bufferbloat.net>,
> dickroy@alum.mit.edu <dickroy@alum.mit.edu>, libreqos
> <libreqos@lists.bufferbloat.net>
>
>
> Well Reddit has many posts talking about noticeable performance
> increases for Starlink. Here is a primetime run:
>
> waveform:
>
> https://www.waveform.com/tools/bufferbloat?test-id=333f97c7-7cbd-406c-8d9a-9f850cb5de7d
>
> cloudflare attached
>
>
>
> On Thu, Jan 12, 2023 at 11:43 AM MORTON JR., AL <acmorton@att.com> wrote:
> >
> > Dave and Luis,
> >
> > Do you know if any of these tools are using ~random payloads, to defeat
> compression?
> >
> > UDPST has a CLI option:
> > (m) -X Randomize datagram payload (else zeroes)
> >
> > When I used this option testing shipboard satellite access, download was
> about 115kbps.
> >
> > Al
> >
> > > -----Original Message-----
> > > From: Dave Taht <dave.taht@gmail.com>
> > > Sent: Thursday, January 12, 2023 11:12 AM
> > > To: Luis A. Cornejo <luis.a.cornejo@gmail.com>
> > > Cc: Jay Moran <jay@tp.org>; Cake List <cake@lists.bufferbloat.net>;
> IETF IPPM
> > > WG <ippm@ietf.org>; MORTON JR., AL <acmorton@att.com>; Rpm
> > > <rpm@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>;
> > > dickroy@alum.mit.edu; libreqos <libreqos@lists.bufferbloat.net>
> > > Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
> > > cloudflare's christmas present
> > >
> > > Either starlink has vastly improved, or the test is way off in this
> case.
>
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
>
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
[-- Attachment #1.2: Type: text/html, Size: 5499 bytes --]
[-- Attachment #2: image.png --]
[-- Type: image/png, Size: 235280 bytes --]
[-- Attachment #3: image.png --]
[-- Type: image/png, Size: 298232 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 4:26 ` Jonathan Bennett
@ 2023-01-13 5:13 ` Ulrich Speidel
2023-01-13 5:25 ` Dave Taht
2023-01-13 12:27 ` Ulrich Speidel
0 siblings, 2 replies; 49+ messages in thread
From: Ulrich Speidel @ 2023-01-13 5:13 UTC (permalink / raw)
To: starlink
[-- Attachment #1: Type: text/plain, Size: 3984 bytes --]
From Auckland, New Zealand, using a roaming subscription, it puts me in
touch with a server 2000 km away. OK then:
IP address: nix six.
My thoughts shall follow later.
On 13/01/2023 5:26 pm, Jonathan Bennett via Starlink wrote:
> Just ran a Waveform test, and ping times are still terrible
> while maxing out download speeds. Upload saturation doesn't seem to
> affect latency, though. Also interesting, it's giving me a valid ipv6
> block again.
>
> [image: image.png]
> [image: image.png]
> Jonathan Bennett
> Hackaday.com
>
>
> On Thu, Jan 12, 2023 at 10:08 PM Dave Taht via Starlink
> <starlink@lists.bufferbloat.net> wrote:
>
> has anyone been testing theirs this week?
>
> ---------- Forwarded message ---------
> From: Luis A. Cornejo <luis.a.cornejo@gmail.com>
> Date: Thu, Jan 12, 2023 at 7:30 PM
> Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
> cloudflare's christmas present
> To: MORTON JR., AL <acmorton@att.com>
> Cc: Dave Taht <dave.taht@gmail.com>, Jay Moran <jay@tp.org>, Cake List
> <cake@lists.bufferbloat.net>, IETF IPPM WG <ippm@ietf.org>, Rpm
> <rpm@lists.bufferbloat.net>, bloat <bloat@lists.bufferbloat.net>,
> dickroy@alum.mit.edu <dickroy@alum.mit.edu>, libreqos
> <libreqos@lists.bufferbloat.net>
>
>
> Well Reddit has many posts talking about noticeable performance
> increases for Starlink. Here is a primetime run:
>
> waveform:
> https://www.waveform.com/tools/bufferbloat?test-id=333f97c7-7cbd-406c-8d9a-9f850cb5de7d
> <https://www.waveform.com/tools/bufferbloat?test-id=333f97c7-7cbd-406c-8d9a-9f850cb5de7d>
>
> cloudflare attached
>
>
>
> On Thu, Jan 12, 2023 at 11:43 AM MORTON JR., AL <acmorton@att.com>
> wrote:
> >
> > Dave and Luis,
> >
> > Do you know if any of these tools are using ~random payloads, to
> defeat compression?
> >
> > UDPST has a CLI option:
> > (m) -X Randomize datagram payload (else zeroes)
> >
> > When I used this option testing shipboard satellite access,
> download was about 115kbps.
> >
> > Al
> >
> > > -----Original Message-----
> > > From: Dave Taht <dave.taht@gmail.com>
> > > Sent: Thursday, January 12, 2023 11:12 AM
> > > To: Luis A. Cornejo <luis.a.cornejo@gmail.com>
> > > Cc: Jay Moran <jay@tp.org>; Cake List
> <cake@lists.bufferbloat.net>; IETF IPPM
> > > WG <ippm@ietf.org>; MORTON JR., AL <acmorton@att.com>; Rpm
> > > <rpm@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>;
> > > dickroy@alum.mit.edu; libreqos <libreqos@lists.bufferbloat.net>
> > > Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
> > > cloudflare's christmas present
> > >
> > > Either starlink has vastly improved, or the test is way off in
> this case.
>
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> <https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz>
> Dave Täht CEO, TekLibre, LLC
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
> <https://lists.bufferbloat.net/listinfo/starlink>
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
--
****************************************************************
Dr. Ulrich Speidel
School of Computer Science
Room 303S.594 (City Campus)
The University of Auckland
u.speidel@auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/
****************************************************************
[-- Attachment #2.1: Type: text/html, Size: 10059 bytes --]
[-- Attachment #2.2: sPzzvwIXE4go16tN.png --]
[-- Type: image/png, Size: 109764 bytes --]
[-- Attachment #2.3: OMUOBp06rXY0I8Ws.png --]
[-- Type: image/png, Size: 92859 bytes --]
[-- Attachment #2.4: image.png --]
[-- Type: image/png, Size: 235280 bytes --]
[-- Attachment #2.5: image.png --]
[-- Type: image/png, Size: 298232 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 5:13 ` Ulrich Speidel
@ 2023-01-13 5:25 ` Dave Taht
2023-01-13 12:27 ` Ulrich Speidel
1 sibling, 0 replies; 49+ messages in thread
From: Dave Taht @ 2023-01-13 5:25 UTC (permalink / raw)
To: Ulrich Speidel; +Cc: starlink
[-- Attachment #1.1: Type: text/plain, Size: 4136 bytes --]
both of you are showing insanely great upload numbers.
On Thu, Jan 12, 2023 at 9:14 PM Ulrich Speidel via Starlink <
starlink@lists.bufferbloat.net> wrote:
> From Auckland, New Zealand, using a roaming subscription, it puts me in
> touch with a server 2000 km away. OK then:
>
>
> IP address: nix six.
>
> My thoughts shall follow later.
>
> On 13/01/2023 5:26 pm, Jonathan Bennett via Starlink wrote:
>
> Just ran a Waveform test, and ping times are still terrible while maxing
> out download speeds. Upload saturation doesn't seem to affect latency,
> though. Also interesting, it's giving me a valid ipv6 block again.
>
> [image: image.png]
> [image: image.png]
> Jonathan Bennett
> Hackaday.com
>
>
> On Thu, Jan 12, 2023 at 10:08 PM Dave Taht via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
>> has anyone been testing theirs this week?
>>
>> ---------- Forwarded message ---------
>> From: Luis A. Cornejo <luis.a.cornejo@gmail.com>
>> Date: Thu, Jan 12, 2023 at 7:30 PM
>> Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
>> cloudflare's christmas present
>> To: MORTON JR., AL <acmorton@att.com>
>> Cc: Dave Taht <dave.taht@gmail.com>, Jay Moran <jay@tp.org>, Cake List
>> <cake@lists.bufferbloat.net>, IETF IPPM WG <ippm@ietf.org>, Rpm
>> <rpm@lists.bufferbloat.net>, bloat <bloat@lists.bufferbloat.net>,
>> dickroy@alum.mit.edu <dickroy@alum.mit.edu>, libreqos
>> <libreqos@lists.bufferbloat.net>
>>
>>
>> Well Reddit has many posts talking about noticeable performance
>> increases for Starlink. Here is a primetime run:
>>
>> waveform:
>>
>> https://www.waveform.com/tools/bufferbloat?test-id=333f97c7-7cbd-406c-8d9a-9f850cb5de7d
>>
>> cloudflare attached
>>
>>
>>
>> On Thu, Jan 12, 2023 at 11:43 AM MORTON JR., AL <acmorton@att.com> wrote:
>> >
>> > Dave and Luis,
>> >
>> > Do you know if any of these tools are using ~random payloads, to defeat
>> compression?
>> >
>> > UDPST has a CLI option:
>> > (m) -X Randomize datagram payload (else zeroes)
>> >
>> > When I used this option testing shipboard satellite access, download
>> was about 115kbps.
>> >
>> > Al
>> >
>> > > -----Original Message-----
>> > > From: Dave Taht <dave.taht@gmail.com>
>> > > Sent: Thursday, January 12, 2023 11:12 AM
>> > > To: Luis A. Cornejo <luis.a.cornejo@gmail.com>
>> > > Cc: Jay Moran <jay@tp.org>; Cake List <cake@lists.bufferbloat.net>;
>> IETF IPPM
>> > > WG <ippm@ietf.org>; MORTON JR., AL <acmorton@att.com>; Rpm
>> > > <rpm@lists.bufferbloat.net>; bloat <bloat@lists.bufferbloat.net>;
>> > > dickroy@alum.mit.edu; libreqos <libreqos@lists.bufferbloat.net>
>> > > Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
>> > > cloudflare's christmas present
>> > >
>> > > Either starlink has vastly improved, or the test is way off in this
>> case.
>>
>>
>>
>> --
>> This song goes out to all the folk that thought Stadia would work:
>>
>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>> Dave Täht CEO, TekLibre, LLC
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
> --
> ****************************************************************
> Dr. Ulrich Speidel
>
> School of Computer Science
>
> Room 303S.594 (City Campus)
>
> The University of Auckland
> u.speidel@auckland.ac.nz
> http://www.cs.auckland.ac.nz/~ulrich/
> ****************************************************************
>
>
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #1.2: Type: text/html, Size: 9289 bytes --]
[-- Attachment #2: sPzzvwIXE4go16tN.png --]
[-- Type: image/png, Size: 109764 bytes --]
[-- Attachment #3: OMUOBp06rXY0I8Ws.png --]
[-- Type: image/png, Size: 92859 bytes --]
[-- Attachment #4: image.png --]
[-- Type: image/png, Size: 235280 bytes --]
[-- Attachment #5: image.png --]
[-- Type: image/png, Size: 298232 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 5:13 ` Ulrich Speidel
2023-01-13 5:25 ` Dave Taht
@ 2023-01-13 12:27 ` Ulrich Speidel
2023-01-13 17:02 ` Jonathan Bennett
1 sibling, 1 reply; 49+ messages in thread
From: Ulrich Speidel @ 2023-01-13 12:27 UTC (permalink / raw)
To: starlink
On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
>
> From Auckland, New Zealand, using a roaming subscription, it puts me
> in touch with a server 2000 km away. OK then:
>
>
> IP address: nix six.
>
> My thoughts shall follow later.
OK, so here we go.
I'm always a bit skeptical when it comes to speed tests - they're really
laden with so many caveats that it's not funny. I took our new work
Starlink kit home in December to give it a try and the other day finally
got around to setting it up. It's on a roaming subscription because our
badly built-up campus really isn't ideal in terms of a clear view of the
sky. Oh - and did I mention that I used the Starlink Ethernet adapter,
not the WiFi?
Caveat 1: Location, location. I live in a place where the best Starlink
promises is about 1/3 of the data rate you can actually get from fibre to
the home, at under half of Starlink's price. Read: There are few
Starlink users around. I might be the only one in my suburb.
Caveat 2: Auckland has three Starlink gateways close by: Clevedon (which
is at a stretch daytrip cycling distance from here), Te Hana and Puwera,
the most distant of the three and about 130 km away from me as the crow
flies. Read: My dishy can use any satellite that any of these three can
see, and then depending on where I put it and how much of the southern
sky it can see, maybe also the one in Hinds, 840 km away, although that
is obviously stretching it a bit. Either way, that's plenty of options
for my bits to travel without needing a lot of handovers. Why? Easy: If
your nearest teleport is close by, then the set of satellites that the
teleport can see and the set that you can see is almost the same, so you
can essentially stick with the same satellite while it's in view for you
because it'll also be in view for the teleport. Pretty much any bird
above you will do.
And because I don't get a lot of competition from other users in my area
vying for one of the few available satellites that can see both us and
the teleport, this is about as good as it gets at 37S latitude. If I
wanted it any better, I'd have to move a lot further south.
It'd be interesting to hear from Jonathan what the availability of home
broadband is like in the Dallas area. I note that it's at a lower
latitude (33N) than Auckland, but the difference isn't huge. I notice
two teleports each about 160 km away, which is also not too bad. I also
note Starlink availability in the area is restricted at the moment -
oversubscribed? But if Jonathan gets good data rates, then that means
that competition for bird capacity can't be too bad - for whatever reason.
Caveat 3: Backhaul. There isn't just one queue between me and whatever I
talk to in terms of my communications. Traceroute shows about 10 hops
between me and the University of Auckland via Starlink. That's 10
queues, not one. Many of them will have cross traffic. So it's a bit
hard to tell where our packets really get to wait or where they get
dropped. The insidious bit here is that a lot of them will be between 1
Gb/s and 10 Gb/s links, and with a bit of cross traffic, they can all
turn into bottlenecks. This isn't like a narrowband GEO link of a few
Mb/s where it's obvious where the dominant long latency bottleneck in
your TCP connection's path is. Read: It's pretty hard to tell whether a
drop in "speed" is due to a performance issue in the Starlink system or
somewhere between Starlink's systems and the target system.
I see RTTs here between 20 ms and 250 ms, where the physical latency
should be under 15 ms. So there's clearly a bit of buffer here along the
chain that occasionally fills up.
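A quick back-of-envelope for how much buffering that swing implies, using
my average download rate from the Ookla tests below (my arithmetic, so
treat the buffer size as a rough estimate):

    queueing_delay_s = 0.250 - 0.015  # worst-case RTT minus physical latency
    rate_bps = 75.5e6                 # average download rate measured below
    print(queueing_delay_s * rate_bps / 8)  # ~2.2 MB queued somewhere en route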
Caveat 4: Handovers. Handover between birds and teleports is inevitably
associated with a change in RTT and in most cases also available
bandwidth. Plus your packets now arrive at a new queue on a new
satellite while your TCP is still trying to respond to whatever it
thought the queue on the previous bird was doing. Read: Whatever your
cwnd is immediately after a handover, it's probably not what it should be.
I ran a somewhat hamstrung (sky view restricted) set of four Ookla
speedtest.net tests each to five local servers. Average upload rate was
13 Mb/s, average down 75.5 Mb/s. Upload to the server of the ISP that
Starlink seems to be buying its local connectivity from (Vocus Group)
varied between 3.04 and 14.38 Mb/s, download between 23.33 and 52.22
Mb/s, with RTTs between 37 and 56 ms not correlating well to rates
observed. In fact, they were the ISP with consistently the worst rates.
Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s up and
between 106.5 and 183.8 Mb/s down, again with RTTs badly correlating
with rates. Average RTT was the same as for Vocus.
Note the variation though: More or less a factor of two between highest
and lowest rates for each ISP. Did MyRepublic just get lucky in my
tests? Or is there something systematic behind this? Way too few tests
to tell.
What these tests do is establish a ballpark.
I'm currently repeating tests with dish placed on a trestle closer to
the heavens. This seems to have translated into fewer outages / ping
losses (around 1/4 of what I had yesterday with dishy on the ground on
my deck). Still good enough for a lengthy video Skype call with my folks
in Germany, although they did comment about reduced video quality. But
maybe that was the lighting or the different background as I wasn't in
my usual spot with my laptop when I called them.
--
****************************************************************
Dr. Ulrich Speidel
School of Computer Science
Room 303S.594 (City Campus)
The University of Auckland
u.speidel@auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/
****************************************************************
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 12:27 ` Ulrich Speidel
@ 2023-01-13 17:02 ` Jonathan Bennett
2023-01-13 17:26 ` Dave Taht
2023-01-13 22:51 ` Ulrich Speidel
0 siblings, 2 replies; 49+ messages in thread
From: Jonathan Bennett @ 2023-01-13 17:02 UTC (permalink / raw)
To: Ulrich Speidel; +Cc: starlink
[-- Attachment #1: Type: text/plain, Size: 6764 bytes --]
On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink <
starlink@lists.bufferbloat.net> wrote:
> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
> >
> > From Auckland, New Zealand, using a roaming subscription, it puts me
> > in touch with a server 2000 km away. OK then:
> >
> >
> > IP address: nix six.
> >
> > My thoughts shall follow later.
>
> OK, so here we go.
>
> I'm always a bit skeptical when it comes to speed tests - they're really
> laden with so many caveats that it's not funny. I took our new work
> Starlink kit home in December to give it a try and the other day finally
> got around to set it up. It's on a roaming subscription because our
> badly built-up campus really isn't ideal in terms of a clear view of the
> sky. Oh - and did I mention that I used the Starlink Ethernet adapter,
> not the WiFi?
>
> Caveat 1: Location, location. I live in a place where the best Starlink
> promises is about 1/3 in terms of data rate you can actually get from
> fibre to the home at under half of Starlink's price. Read: There are few
> Starlink users around. I might be the only one in my suburb.
>
> Caveat 2: Auckland has three Starlink gateways close by: Clevedon (which
> is at a stretch daytrip cycling distance from here), Te Hana and Puwera,
> the most distant of the three and about 130 km away from me as the crow
> flies. Read: My dishy can use any satellite that any of these three can
> see, and then depending on where I put it and how much of the southern
> sky it can see, maybe also the one in Hinds, 840 km away, although that
> is obviously stretching it a bit. Either way, that's plenty of options
> for my bits to travel without needing a lot of handovers. Why? Easy: If
> your nearest teleport is close by, then the set of satellites that the
> teleport can see and the set that you can see is almost the same, so you
> can essentially stick with the same satellite while it's in view for you
> because it'll also be in view for the teleport. Pretty much any bird
> above you will do.
>
> And because I don't get a lot of competition from other users in my area
> vying for one of the few available satellites that can see both us and
> the teleport, this is about as good as it gets at 37S latitude. If I'd
> want it any better, I'd have to move a lot further south.
>
> It'd be interesting to hear from Jonathan what the availability of home
> broadband is like in the Dallas area. I note that it's at a lower
> latitude (33N) than Auckland, but the difference isn't huge. I notice
> two teleports each about 160 km away, which is also not too bad. I also
> note Starlink availability in the area is restricted at the moment -
> oversubscribed? But if Jonathan gets good data rates, then that means
> that competition for bird capacity can't be too bad - for whatever reason.
>
I'm in Southwest Oklahoma, but Dallas is the nearby Starlink gateway. In
cities, like Dallas, and Lawton where I live, there are good broadband
options. But there are also many people that live outside cities, and the
options are much worse. The low density userbase in rural Oklahoma and
Texas is probably ideal conditions for Starlink.
>
> Caveat 3: Backhaul. There isn't just one queue between me and whatever I
> talk to in terms of my communications. Traceroute shows about 10 hops
> between me and the University of Auckland via Starlink. That's 10
> queues, not one. Many of them will have cross traffic. So it's a bit
> hard to tell where our packets really get to wait or where they get
> dropped. The insidious bit here is that a lot of them will be between 1
> Gb/s and 10 Gb/s links, and with a bit of cross traffic, they can all
> turn into bottlenecks. This isn't like a narrowband GEO link of a few
> Mb/s where it's obvious where the dominant long latency bottleneck in
> your TCP connection's path is. Read: It's pretty hard to tell whether a
> drop in "speed" is due to a performance issue in the Starlink system or
> somewhere between Starlink's systems and the target system.
>
> I see RTTs here between 20 ms and 250 ms, where the physical latency
> should be under 15 ms. So there's clearly a bit of buffer here along the
> chain that occasionally fills up.
>
> Caveat 4: Handovers. Handover between birds and teleports is inevitably
> associated with a change in RTT and in most cases also available
> bandwidth. Plus your packets now arrive at a new queue on a new
> satellite while your TCP is still trying to respond to whatever it
> thought the queue on the previous bird was doing. Read: Whatever your
> cwnd is immediately after a handover, it's probably not what it should be.
>
> I ran a somewhat hamstrung (sky view restricted) set of four Ookla
> speedtest.net tests each to five local servers. Average upload rate was
> 13 Mb/s, average down 75.5 Mb/s. Upload to the server of the ISP that
> Starlink seems to be buying its local connectivity from (Vocus Group)
> varied between 3.04 and 14.38 Mb/s, download between 23.33 and 52.22
> Mb/s, with RTTs between 37 and 56 ms not correlating well to rates
> observed. In fact, they were the ISP with consistently the worst rates.
>
> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s up and
> between 106.5 and 183.8 Mb/s down, again with RTTs badly correlating
> with rates. Average RTT was the same as for Vocus.
>
> Note the variation though: More or less a factor of two between highest
> and lowest rates for each ISP. Did MyRepublic just get lucky in my
> tests? Or is there something systematic behind this? Way too few tests
> to tell.
>
> What these tests do is establish a ballpark.
>
> I'm currently repeating tests with dish placed on a trestle closer to
> the heavens. This seems to have translated into fewer outages / ping
> losses (around 1/4 of what I had yesterday with dishy on the ground on
> my deck). Still good enough for a lengthy video Skype call with my folks
> in Germany, although they did comment about reduced video quality. But
> maybe that was the lighting or the different background as I wasn't in
> my usual spot with my laptop when I called them.
>
Clear view of the sky is king for Starlink reliability. I've got my dishy
mounted on the back fence, looking up over an empty field, so it's pretty
much best-case scenario here.
>
> --
>
> ****************************************************************
> Dr. Ulrich Speidel
>
> School of Computer Science
>
> Room 303S.594 (City Campus)
>
> The University of Auckland
> u.speidel@auckland.ac.nz
> http://www.cs.auckland.ac.nz/~ulrich/
> ****************************************************************
>
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
[-- Attachment #2: Type: text/html, Size: 8429 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 17:02 ` Jonathan Bennett
@ 2023-01-13 17:26 ` Dave Taht
2023-01-13 17:41 ` Jonathan Bennett
2023-01-13 20:25 ` Pat Jensen
2023-01-13 22:51 ` Ulrich Speidel
1 sibling, 2 replies; 49+ messages in thread
From: Dave Taht @ 2023-01-13 17:26 UTC (permalink / raw)
To: Jonathan Bennett; +Cc: Ulrich Speidel, starlink
packet caps would be nice... all this is very exciting news.
I'd so love for one or more of y'all reporting such great uplink
results nowadays to duplicate and re-plot the original irtt tests we
did:
irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net -o whatever.json
They MUST have changed their scheduling to get such amazing uplink
results, in addition to better queue management.
(for the record, my servers are de, london, fremont, sydney, dallas,
newark, atlanta, singapore, mumbai)
There's an R and gnuplot script for plotting that output around here
somewhere (I have largely personally put down the starlink project,
loaning out mine) - that went by on this list... I should have written
a blog entry so I can find that stuff again.
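If it helps, here's a minimal stand-in for re-plotting a run (Python; this
assumes irtt's JSON layout of a top-level "round_trips" array with
per-packet delay.rtt values in nanoseconds - adjust the keys if your irtt
version emits something different):

    import json
    import matplotlib.pyplot as plt

    with open("whatever.json") as f:  # the -o output from the command above
        run = json.load(f)

    # keep only packets that actually came back (lost ones carry no rtt)
    rtts_ms = [rt["delay"]["rtt"] / 1e6
               for rt in run["round_trips"]
               if "rtt" in rt.get("delay", {})]

    plt.plot(rtts_ms)
    plt.xlabel("packet #")
    plt.ylabel("RTT (ms)")
    plt.savefig("irtt_rtt.png")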
On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink
<starlink@lists.bufferbloat.net> wrote:
>
>
> On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
>> >
>> > From Auckland, New Zealand, using a roaming subscription, it puts me
>> > in touch with a server 2000 km away. OK then:
>> >
>> >
>> > IP address: nix six.
>> >
>> > My thoughts shall follow later.
>>
>> OK, so here we go.
>>
>> I'm always a bit skeptical when it comes to speed tests - they're really
>> laden with so many caveats that it's not funny. I took our new work
>> Starlink kit home in December to give it a try and the other day finally
>> got around to set it up. It's on a roaming subscription because our
>> badly built-up campus really isn't ideal in terms of a clear view of the
>> sky. Oh - and did I mention that I used the Starlink Ethernet adapter,
>> not the WiFi?
>>
>> Caveat 1: Location, location. I live in a place where the best Starlink
>> promises is about 1/3 in terms of data rate you can actually get from
>> fibre to the home at under half of Starlink's price. Read: There are few
>> Starlink users around. I might be the only one in my suburb.
>>
>> Caveat 2: Auckland has three Starlink gateways close by: Clevedon (which
>> is at a stretch daytrip cycling distance from here), Te Hana and Puwera,
>> the most distant of the three and about 130 km away from me as the crow
>> flies. Read: My dishy can use any satellite that any of these three can
>> see, and then depending on where I put it and how much of the southern
>> sky it can see, maybe also the one in Hinds, 840 km away, although that
>> is obviously stretching it a bit. Either way, that's plenty of options
>> for my bits to travel without needing a lot of handovers. Why? Easy: If
>> your nearest teleport is close by, then the set of satellites that the
>> teleport can see and the set that you can see is almost the same, so you
>> can essentially stick with the same satellite while it's in view for you
>> because it'll also be in view for the teleport. Pretty much any bird
>> above you will do.
>>
>> And because I don't get a lot of competition from other users in my area
>> vying for one of the few available satellites that can see both us and
>> the teleport, this is about as good as it gets at 37S latitude. If I'd
>> want it any better, I'd have to move a lot further south.
>>
>> It'd be interesting to hear from Jonathan what the availability of home
>> broadband is like in the Dallas area. I note that it's at a lower
>> latitude (33N) than Auckland, but the difference isn't huge. I notice
>> two teleports each about 160 km away, which is also not too bad. I also
>> note Starlink availability in the area is restricted at the moment -
>> oversubscribed? But if Jonathan gets good data rates, then that means
>> that competition for bird capacity can't be too bad - for whatever reason.
>
> I'm in Southwest Oklahoma, but Dallas is the nearby Starlink gateway. In cities, like Dallas, and Lawton where I live, there are good broadband options. But there are also many people that live outside cities, and the options are much worse. The low density userbase in rural Oklahoma and Texas is probably ideal conditions for Starlink.
>>
>>
>> Caveat 3: Backhaul. There isn't just one queue between me and whatever I
>> talk to in terms of my communications. Traceroute shows about 10 hops
>> between me and the University of Auckland via Starlink. That's 10
>> queues, not one. Many of them will have cross traffic. So it's a bit
>> hard to tell where our packets really get to wait or where they get
>> dropped. The insidious bit here is that a lot of them will be between 1
>> Gb/s and 10 Gb/s links, and with a bit of cross traffic, they can all
>> turn into bottlenecks. This isn't like a narrowband GEO link of a few
>> Mb/s where it's obvious where the dominant long latency bottleneck in
>> your TCP connection's path is. Read: It's pretty hard to tell whether a
>> drop in "speed" is due to a performance issue in the Starlink system or
>> somewhere between Starlink's systems and the target system.
>>
>> I see RTTs here between 20 ms and 250 ms, where the physical latency
>> should be under 15 ms. So there's clearly a bit of buffer here along the
>> chain that occasionally fills up.
>>
>> Caveat 4: Handovers. Handover between birds and teleports is inevitably
>> associated with a change in RTT and in most cases also available
>> bandwidth. Plus your packets now arrive at a new queue on a new
>> satellite while your TCP is still trying to respond to whatever it
>> thought the queue on the previous bird was doing. Read: Whatever your
>> cwnd is immediately after a handover, it's probably not what it should be.
>>
>> I ran a somewhat hamstrung (sky view restricted) set of four Ookla
>> speedtest.net tests each to five local servers. Average upload rate was
>> 13 Mb/s, average down 75.5 Mb/s. Upload to the server of the ISP that
>> Starlink seems to be buying its local connectivity from (Vocus Group)
>> varied between 3.04 and 14.38 Mb/s, download between 23.33 and 52.22
>> Mb/s, with RTTs between 37 and 56 ms not correlating well to rates
>> observed. In fact, they were the ISP with consistently the worst rates.
>>
>> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s up and
>> between 106.5 and 183.8 Mb/s down, again with RTTs badly correlating
>> with rates. Average RTT was the same as for Vocus.
>>
>> Note the variation though: More or less a factor of two between highest
>> and lowest rates for each ISP. Did MyRepublic just get lucky in my
>> tests? Or is there something systematic behind this? Way too few tests
>> to tell.
>>
>> What these tests do is establish a ballpark.
>>
>> I'm currently repeating tests with dish placed on a trestle closer to
>> the heavens. This seems to have translated into fewer outages / ping
>> losses (around 1/4 of what I had yesterday with dishy on the ground on
>> my deck). Still good enough for a lengthy video Skype call with my folks
>> in Germany, although they did comment about reduced video quality. But
>> maybe that was the lighting or the different background as I wasn't in
>> my usual spot with my laptop when I called them.
>
> Clear view of the sky is king for Starlink reliability. I've got my dishy mounted on the back fence, looking up over an empty field, so it's pretty much best-case scenario here.
>>
>>
>> --
>>
>> ****************************************************************
>> Dr. Ulrich Speidel
>>
>> School of Computer Science
>>
>> Room 303S.594 (City Campus)
>>
>> The University of Auckland
>> u.speidel@auckland.ac.nz
>> http://www.cs.auckland.ac.nz/~ulrich/
>> ****************************************************************
>>
>>
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 17:26 ` Dave Taht
@ 2023-01-13 17:41 ` Jonathan Bennett
2023-01-13 18:09 ` Nathan Owens
2023-01-13 20:25 ` Pat Jensen
1 sibling, 1 reply; 49+ messages in thread
From: Jonathan Bennett @ 2023-01-13 17:41 UTC (permalink / raw)
To: Dave Taht; +Cc: Ulrich Speidel, starlink
[-- Attachment #1: Type: text/plain, Size: 8889 bytes --]
The irtt command, run with normal, light usage:
https://drive.google.com/file/d/1SiVCiUYnx7nDTxIVOY5w-z20S2O059rA/view?usp=share_link
Jonathan Bennett
Hackaday.com
On Fri, Jan 13, 2023 at 11:26 AM Dave Taht <dave.taht@gmail.com> wrote:
> packet caps would be nice... all this is very exciting news.
>
> I'd so love for one or more of y'all reporting such great uplink
> results nowadays to duplicate and re-plot the original irtt tests we
> did:
>
> irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net -o
> whatever.json
>
> They MUST have changed their scheduling to get such amazing uplink
> results, in addition to better queue management.
>
> (for the record, my servers are de, london, fremont, sydney, dallas,
> newark, atlanta, singapore, mumbai)
>
> There's an R and gnuplot script for plotting that output around here
> somewhere (I have largely personally put down the starlink project,
> loaning out mine) - that went by on this list... I should have written
> a blog entry so I can find that stuff again.
>
> On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink
> <starlink@lists.bufferbloat.net> wrote:
> >
> >
> > On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink <
> starlink@lists.bufferbloat.net> wrote:
> >>
> >> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
> >> >
> >> > From Auckland, New Zealand, using a roaming subscription, it puts me
> >> > in touch with a server 2000 km away. OK then:
> >> >
> >> >
> >> > IP address: nix six.
> >> >
> >> > My thoughts shall follow later.
> >>
> >> OK, so here we go.
> >>
> >> I'm always a bit skeptical when it comes to speed tests - they're really
> >> laden with so many caveats that it's not funny. I took our new work
> >> Starlink kit home in December to give it a try and the other day finally
> >> got around to set it up. It's on a roaming subscription because our
> >> badly built-up campus really isn't ideal in terms of a clear view of the
> >> sky. Oh - and did I mention that I used the Starlink Ethernet adapter,
> >> not the WiFi?
> >>
> >> Caveat 1: Location, location. I live in a place where the best Starlink
> >> promises is about 1/3 in terms of data rate you can actually get from
> >> fibre to the home at under half of Starlink's price. Read: There are few
> >> Starlink users around. I might be the only one in my suburb.
> >>
> >> Caveat 2: Auckland has three Starlink gateways close by: Clevedon (which
> >> is at a stretch daytrip cycling distance from here), Te Hana and Puwera,
> >> the most distant of the three and about 130 km away from me as the crow
> >> flies. Read: My dishy can use any satellite that any of these three can
> >> see, and then depending on where I put it and how much of the southern
> >> sky it can see, maybe also the one in Hinds, 840 km away, although that
> >> is obviously stretching it a bit. Either way, that's plenty of options
> >> for my bits to travel without needing a lot of handovers. Why? Easy: If
> >> your nearest teleport is close by, then the set of satellites that the
> >> teleport can see and the set that you can see is almost the same, so you
> >> can essentially stick with the same satellite while it's in view for you
> >> because it'll also be in view for the teleport. Pretty much any bird
> >> above you will do.
> >>
> >> And because I don't get a lot of competition from other users in my area
> >> vying for one of the few available satellites that can see both us and
> >> the teleport, this is about as good as it gets at 37S latitude. If I'd
> >> want it any better, I'd have to move a lot further south.
> >>
> >> It'd be interesting to hear from Jonathan what the availability of home
> >> broadband is like in the Dallas area. I note that it's at a lower
> >> latitude (33N) than Auckland, but the difference isn't huge. I notice
> >> two teleports each about 160 km away, which is also not too bad. I also
> >> note Starlink availability in the area is restricted at the moment -
> >> oversubscribed? But if Jonathan gets good data rates, then that means
> >> that competition for bird capacity can't be too bad - for whatever
> reason.
> >
> > I'm in Southwest Oklahoma, but Dallas is the nearby Starlink gateway. In
> cities, like Dallas, and Lawton where I live, there are good broadband
> options. But there are also many people that live outside cities, and the
> options are much worse. The low density userbase in rural Oklahoma and
> Texas is probably ideal conditions for Starlink.
> >>
> >>
> >> Caveat 3: Backhaul. There isn't just one queue between me and whatever I
> >> talk to in terms of my communications. Traceroute shows about 10 hops
> >> between me and the University of Auckland via Starlink. That's 10
> >> queues, not one. Many of them will have cross traffic. So it's a bit
> >> hard to tell where our packets really get to wait or where they get
> >> dropped. The insidious bit here is that a lot of them will be between 1
> >> Gb/s and 10 Gb/s links, and with a bit of cross traffic, they can all
> >> turn into bottlenecks. This isn't like a narrowband GEO link of a few
> >> Mb/s where it's obvious where the dominant long latency bottleneck in
> >> your TCP connection's path is. Read: It's pretty hard to tell whether a
> >> drop in "speed" is due to a performance issue in the Starlink system or
> >> somewhere between Starlink's systems and the target system.
> >>
> >> I see RTTs here between 20 ms and 250 ms, where the physical latency
> >> should be under 15 ms. So there's clearly a bit of buffer here along the
> >> chain that occasionally fills up.
> >>
> >> Caveat 4: Handovers. Handover between birds and teleports is inevitably
> >> associated with a change in RTT and in most cases also available
> >> bandwidth. Plus your packets now arrive at a new queue on a new
> >> satellite while your TCP is still trying to respond to whatever it
> >> thought the queue on the previous bird was doing. Read: Whatever your
> >> cwnd is immediately after a handover, it's probably not what it should
> >> be.
> >>
> >> I ran a somewhat hamstrung (sky view restricted) set of four Ookla
> >> speedtest.net tests each to five local servers. Average upload rate was
> >> 13 Mb/s, average down 75.5 Mb/s. Upload to the server of the ISP that
> >> Starlink seems to be buying its local connectivity from (Vocus Group)
> >> varied between 3.04 and 14.38 Mb/s, download between 23.33 and 52.22
> >> Mb/s, with RTTs between 37 and 56 ms not correlating well to rates
> >> observed. In fact, they were the ISP with consistently the worst rates.
> >>
> >> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s up and
> >> between 106.5 and 183.8 Mb/s down, again with RTTs badly correlating
> >> with rates. Average RTT was the same as for Vocus.
> >>
> >> Note the variation though: More or less a factor of two between highest
> >> and lowest rates for each ISP. Did MyRepublic just get lucky in my
> >> tests? Or is there something systematic behind this? Way too few tests
> >> to tell.
> >>
> >> What these tests do is establish a ballpark.
> >>
> >> I'm currently repeating tests with dish placed on a trestle closer to
> >> the heavens. This seems to have translated into fewer outages / ping
> >> losses (around 1/4 of what I had yesterday with dishy on the ground on
> >> my deck). Still good enough for a lengthy video Skype call with my folks
> >> in Germany, although they did comment about reduced video quality. But
> >> maybe that was the lighting or the different background as I wasn't in
> >> my usual spot with my laptop when I called them.
> >
> > Clear view of the sky is king for Starlink reliability. I've got my
> dishy mounted on the back fence, looking up over an empty field, so it's
> pretty much best-case scenario here.
> >>
> >>
> >> --
> >>
> >> ****************************************************************
> >> Dr. Ulrich Speidel
> >>
> >> School of Computer Science
> >>
> >> Room 303S.594 (City Campus)
> >>
> >> The University of Auckland
> >> u.speidel@auckland.ac.nz
> >> http://www.cs.auckland.ac.nz/~ulrich/
> >> ****************************************************************
> >>
> >>
> >>
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/starlink
> >
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
>
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
>
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
>
[-- Attachment #2: Type: text/html, Size: 11664 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 17:41 ` Jonathan Bennett
@ 2023-01-13 18:09 ` Nathan Owens
2023-01-13 20:30 ` Nathan Owens
0 siblings, 1 reply; 49+ messages in thread
From: Nathan Owens @ 2023-01-13 18:09 UTC (permalink / raw)
To: Jonathan Bennett; +Cc: Dave Taht, starlink
[-- Attachment #1: Type: text/plain, Size: 9488 bytes --]
I’ll run my visualization code on this result this afternoon and report
back!
On Fri, Jan 13, 2023 at 9:41 AM Jonathan Bennett via Starlink <
starlink@lists.bufferbloat.net> wrote:
> The irtt command, run with normal, light usage:
> https://drive.google.com/file/d/1SiVCiUYnx7nDTxIVOY5w-z20S2O059rA/view?usp=share_link
>
> Jonathan Bennett
> Hackaday.com
>
>
> On Fri, Jan 13, 2023 at 11:26 AM Dave Taht <dave.taht@gmail.com> wrote:
>
>> packet caps would be nice... all this is very exciting news.
>>
>> I'd so love for one or more of y'all reporting such great uplink
>> results nowadays to duplicate and re-plot the original irtt tests we
>> did:
>>
>> irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net -o
>> whatever.json
>>
>> They MUST have changed their scheduling to get such amazing uplink
>> results, in addition to better queue management.
>>
>> (for the record, my servers are de, london, fremont, sydney, dallas,
>> newark, atlanta, singapore, mumbai)
>>
>> There's an R and gnuplot script for plotting that output around here
>> somewhere (I have largely personally put down the starlink project,
>> loaning out mine) - that went by on this list... I should have written
>> a blog entry so I can find that stuff again.
>>
>> [quotes of the earlier thread (Jonathan's and Ulrich's posts), plus
>> repeated signatures and list footers, trimmed; see earlier in the thread]
[-- Attachment #2: Type: text/html, Size: 12576 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 17:26 ` Dave Taht
2023-01-13 17:41 ` Jonathan Bennett
@ 2023-01-13 20:25 ` Pat Jensen
2023-01-13 20:40 ` Dave Taht
1 sibling, 1 reply; 49+ messages in thread
From: Pat Jensen @ 2023-01-13 20:25 UTC (permalink / raw)
To: Dave Taht; +Cc: Jonathan Bennett, starlink
Dave,
From Central California (using Starlink Los Angeles POP w/ live IPv6)
via fremont.starlink.taht.net at 12:20PM PST non-peak
Wired via 1000baseT with no other users on the network
                      Min       Mean     Median        Max     Stddev
                      ---       ----     ------        ---     ------
RTT                24.5ms    51.22ms    48.75ms    262.5ms    14.92ms
send delay        76.44ms    94.79ms    92.69ms    288.7ms    12.06ms
receive delay    -54.57ms   -43.58ms   -44.87ms     7.67ms     6.21ms

IPDV (jitter)      92.5µs     4.39ms     2.98ms    80.71ms     4.26ms
send IPDV          2.06µs     3.85ms     2.99ms    80.76ms     3.25ms
receive IPDV           0s      1.5ms     36.5µs    49.72ms     3.18ms

send call time     6.46µs     34.9µs               1.32ms     15.9µs
timer error            0s     72.1µs               4.97ms     69.2µs
server proc. time   620ns     4.19µs                647µs     6.34µs
duration: 5m1s (wait 787.6ms)
packets sent/received: 99879/83786 (16.11% loss)
server packets received: 83897/99879 (16.00%/0.13% loss up/down)
late (out-of-order) pkts: 1 (0.00%)
bytes sent/received: 5992740/5027160
send/receive rate: 159.8 Kbps / 134.1 Kbps
packet length: 60 bytes
timer stats: 121/100000 (0.12%) missed, 2.40% error
patj@air-2 ~ % curl ipinfo.io
{
"ip": "98.97.140.145",
"hostname": "customer.lsancax1.pop.starlinkisp.net",
"city": "Los Angeles",
"region": "California",
"country": "US",
"loc": "34.0522,-118.2437",
"org": "AS14593 Space Exploration Technologies Corporation",
"postal": "90009",
"timezone": "America/Los_Angeles",
"readme": "https://ipinfo.io/missingauth"
}%
Pat
On 2023-01-13 09:26, Dave Taht via Starlink wrote:
> [Dave's irtt request and the earlier thread quotes trimmed; see earlier
> in the thread]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 18:09 ` Nathan Owens
@ 2023-01-13 20:30 ` Nathan Owens
2023-01-13 20:37 ` Dave Taht
2023-01-13 20:49 ` Dave Taht
0 siblings, 2 replies; 49+ messages in thread
From: Nathan Owens @ 2023-01-13 20:30 UTC (permalink / raw)
To: Jonathan Bennett; +Cc: Dave Taht, starlink
[-- Attachment #1.1.1: Type: text/plain, Size: 10110 bytes --]
Here's the data visualization for Jonathan's data:
[image: Screenshot 2023-01-13 at 12.29.15 PM.png]
You can see the path change at :12, :27, :42, and :57 after the minute.
Some paths are clearly busier than others, with increased loss, latency,
and jitter.
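
If you want to eyeball that 15-second cycle without the full
visualization code, here is a minimal shell sketch. It assumes irtt's
JSON layout of a top-level round_trips array with per-packet seqno and
delay.rtt fields in nanoseconds, plus the -i3ms spacing used for these
runs; field names may differ between irtt versions, so check your
output first.

# average RTT by position within the 15 s reconfiguration cycle;
# seqno * 3 ms recovers elapsed time for a -i3ms run
jq -r '.round_trips[] | select(.delay.rtt != null) | "\(.seqno) \(.delay.rtt)"' whatever.json |
awk '{ s = int(($1 * 3) % 15000 / 1000); rtt[s] += $2 / 1e6; n[s]++ }
     END { for (s = 0; s < 15; s++) if (n[s])
           printf "%2d s %8.2f ms avg RTT (%d pkts)\n", s, rtt[s]/n[s], n[s] }'

A jump in average RTT, or a dip in packet count, in one or two of the
fifteen buckets should line up with the path changes in the plot.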
On Fri, Jan 13, 2023 at 10:09 AM Nathan Owens <nathan@nathan.io> wrote:
>
> [earlier messages in the thread, quoted in full, trimmed; see earlier in
> the thread]
[-- Attachment #1.1.2: Type: text/html, Size: 13177 bytes --]
[-- Attachment #1.2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #2: jp_starlink_irtt.pdf --]
[-- Type: application/pdf, Size: 697557 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 20:30 ` Nathan Owens
@ 2023-01-13 20:37 ` Dave Taht
2023-01-13 21:24 ` Nathan Owens
2023-01-13 20:49 ` Dave Taht
1 sibling, 1 reply; 49+ messages in thread
From: Dave Taht @ 2023-01-13 20:37 UTC (permalink / raw)
To: Nathan Owens; +Cc: Jonathan Bennett, starlink
[-- Attachment #1.1: Type: text/plain, Size: 11256 bytes --]
That is amazingly better than what I'd seen before.
(can you repost your script? What's the difference between the red and blue
dots?)
Jonathan (or all? Ulrich?), could you fire up the irtt for that long
interval, fire off a waveform test, pause, then a cloudflare test, and
then, if you have it, run this:
flent -H oneofmyclosestservers.starlink.taht.net -t starlink_vs_irtt
--step-size=.05 --socket-stats --test-parameter=upload_streams=4 tcp_nup
Going the full monty - enabling ECN and taking a packet cap of that last
run - would additionally make my day...
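
A minimal sketch of that full monty on a Linux client, run as root; the
interface name and the choice of flent server are placeholders:

sysctl -w net.ipv4.tcp_ecn=1        # request ECN on outgoing TCP connections
tcpdump -i eth0 -s 128 -w tcp_nup_ecn.pcap &   # 128-byte snaplen: headers are enough
flent -H fremont.starlink.taht.net -t starlink_vs_irtt \
    --step-size=.05 --socket-stats --test-parameter=upload_streams=4 tcp_nup
kill %1                             # stop the capture once flent finishes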
On Fri, Jan 13, 2023 at 12:30 PM Nathan Owens <nathan@nathan.io> wrote:
> Here's the data visualization for Jonathan's data:
>
> [image: Screenshot 2023-01-13 at 12.29.15 PM.png]
>
> You can see the path change at :12, :27, :42, :57 after the minute. Some
> paths are clearly busier than others with increased loss, latency, and
> jitter.
>
> [earlier messages in the thread, quoted in full, trimmed; see earlier in
> the thread]
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #1.2: Type: text/html, Size: 14708 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 20:25 ` Pat Jensen
@ 2023-01-13 20:40 ` Dave Taht
0 siblings, 0 replies; 49+ messages in thread
From: Dave Taht @ 2023-01-13 20:40 UTC (permalink / raw)
To: Pat Jensen; +Cc: Jonathan Bennett, starlink
On Fri, Jan 13, 2023 at 12:26 PM Pat Jensen <patj@jensencloud.net> wrote:
>
> Dave,
>
> From Central California (using Starlink Los Angeles POP w/ live IPv6)
> via fremont.starlink.taht.net at 12:20PM PST non-peak
> Wired via 1000baseT with no other users on the network
>
>                       Min       Mean     Median        Max     Stddev
>                       ---       ----     ------        ---     ------
> RTT                24.5ms    51.22ms    48.75ms    262.5ms    14.92ms
> send delay        76.44ms    94.79ms    92.69ms    288.7ms    12.06ms
> receive delay    -54.57ms   -43.58ms   -44.87ms     7.67ms     6.21ms
>
> IPDV (jitter)      92.5µs     4.39ms     2.98ms    80.71ms     4.26ms
> send IPDV          2.06µs     3.85ms     2.99ms    80.76ms     3.25ms
> receive IPDV           0s      1.5ms     36.5µs    49.72ms     3.18ms
>
> send call time     6.46µs     34.9µs               1.32ms     15.9µs
> timer error            0s     72.1µs               4.97ms     69.2µs
> server proc. time   620ns     4.19µs                647µs     6.34µs
Thank you, Pat. In general, I am almost never interested in summary
statistics like these, but in plotting the long-term behaviors, as
Nathan just did: starting with an idle baseline, and then adding
various loads. Could you slide him the full output of this test, since
he's got the magic plotting script (my json-fu is non-existent)?
And then the same test against the waveform, cloudflare, and flent loads?
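
In the meantime, a hedged jq + gnuplot sketch that pulls a plottable RTT
series out of the JSON - it assumes irtt's round_trips[].delay.rtt layout
with nanosecond units and the -i3ms spacing, so treat it as a starting
point, not the missing script:

# column 1: ms since the start of the run (seqno * 3 at -i3ms)
# column 2: RTT in ms; lost packets (no delay.rtt) are skipped
jq -r '.round_trips[] | select(.delay.rtt != null)
       | "\(.seqno * 3) \(.delay.rtt / 1e6)"' whatever.json > rtt.dat
gnuplot -e "set terminal png; set output 'rtt.png';
            set xlabel 'time (ms)'; set ylabel 'RTT (ms)';
            plot 'rtt.dat' with dots title 'irtt rtt'"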
> duration: 5m1s (wait 787.6ms)
> packets sent/received: 99879/83786 (16.11% loss)
> server packets received: 83897/99879 (16.00%/0.13% loss up/down)
> late (out-of-order) pkts: 1 (0.00%)
> bytes sent/received: 5992740/5027160
> send/receive rate: 159.8 Kbps / 134.1 Kbps
> packet length: 60 bytes
> timer stats: 121/100000 (0.12%) missed, 2.40% error
>
> patj@air-2 ~ % curl ipinfo.io
> {
> "ip": "98.97.140.145",
> "hostname": "customer.lsancax1.pop.starlinkisp.net",
> "city": "Los Angeles",
> "region": "California",
> "country": "US",
> "loc": "34.0522,-118.2437",
> "org": "AS14593 Space Exploration Technologies Corporation",
> "postal": "90009",
> "timezone": "America/Los_Angeles",
> "readme": "https://ipinfo.io/missingauth"
> }%
>
> Pat
>
> [earlier thread quotes trimmed; see earlier in the thread]
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 20:30 ` Nathan Owens
2023-01-13 20:37 ` Dave Taht
@ 2023-01-13 20:49 ` Dave Taht
2023-01-13 21:25 ` Luis A. Cornejo
1 sibling, 1 reply; 49+ messages in thread
From: Dave Taht @ 2023-01-13 20:49 UTC (permalink / raw)
To: Nathan Owens; +Cc: Jonathan Bennett, starlink
[-- Attachment #1.1: Type: text/plain, Size: 11254 bytes --]
On Fri, Jan 13, 2023 at 12:30 PM Nathan Owens <nathan@nathan.io> wrote:
> Here's the data visualization for Jonathan's data
>
> [image: Screenshot 2023-01-13 at 12.29.15 PM.png]
>
> You can see the path change at :12, :27, :42, :57 after the minute. Some
> paths are clearly busier than others with increased loss, latency, and
> jitter.
>
I am so glad to see loss and bounded delay here. A bit more rigor about
what traffic was active locally vs on the path would be nice, although
the pattern seems to line up with the known 15s starlink switchover thing
(we need a name for this); doing a few speedtests while that irtt is
running would show the impact(s) of whatever else they are up to.
It would also be my hope that the loss distribution in the middle portion
of this data is good, not bursty, but we don't have a tool to take that
apart. (I am so hopeless at json)
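A first cut at such a tool, in shell, assuming irtt's -o JSON layout (a
top-level "round_trips" array whose entries carry "seqno" and a "lost"
field that is the string "false" when the probe made it both ways) and
jq on the path:

  # one line per probe, in sequence order: "ok" or "lost"
  jq -r '.round_trips | sort_by(.seqno) | .[]
         | if .lost == "false" then "ok" else "lost" end' whatever.json \
    | uniq -c

uniq -c collapses consecutive identical lines, so every "lost" row is one
loss burst and its count is the burst length: a column of 1s against
"lost" means the drops are well spread out, bigger counts mean bursts.
Appending | awk '$2 == "lost"' | sort -n | uniq -c turns that into a
burst-length histogram for the whole run.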
>
>
> On Fri, Jan 13, 2023 at 10:09 AM Nathan Owens <nathan@nathan.io> wrote:
>
>> I’ll run my visualization code on this result this afternoon and report
>> back!
>>
>> On Fri, Jan 13, 2023 at 9:41 AM Jonathan Bennett via Starlink <
>> starlink@lists.bufferbloat.net> wrote:
>>
>>> The irtt command, run with normal, light usage:
>>> https://drive.google.com/file/d/1SiVCiUYnx7nDTxIVOY5w-z20S2O059rA/view?usp=share_link
>>>
>>> Jonathan Bennett
>>> Hackaday.com
>>>
>>>
>>> On Fri, Jan 13, 2023 at 11:26 AM Dave Taht <dave.taht@gmail.com> wrote:
>>>
>>>> packet caps would be nice... all this is very exciting news.
>>>>
>>>> I'd so love for one or more of y'all reporting such great uplink
>>>> results nowadays to duplicate and re-plot the original irtt tests we
>>>> did:
>>>>
>>>> irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net -o
>>>> whatever.json
>>>>
>>>> They MUST have changed their scheduling to get such amazing uplink
>>>> results, in addition to better queue management.
>>>>
>>>> (for the record, my servers are de, london, fremont, sydney, dallas,
>>>> newark, atlanta, singapore, mumbai)
>>>>
>>>> There's an R and gnuplot script for plotting that output around here
>>>> somewhere (I have largely personally put down the starlink project,
>>>> loaning out mine) - that went by on this list... I should have written
>>>> a blog entry so I can find that stuff again.
>>>>
>>>> On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink
>>>> <starlink@lists.bufferbloat.net> wrote:
>>>> >
>>>> >
>>>> > On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink <
>>>> starlink@lists.bufferbloat.net> wrote:
>>>> >>
>>>> >> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
>>>> >> >
>>>> >> > From Auckland, New Zealand, using a roaming subscription, it puts
>>>> me
>>>> >> > in touch with a server 2000 km away. OK then:
>>>> >> >
>>>> >> >
>>>> >> > IP address: nix six.
>>>> >> >
>>>> >> > My thoughts shall follow later.
>>>> >>
>>>> >> OK, so here we go.
>>>> >>
>>>> >> I'm always a bit skeptical when it comes to speed tests - they're
>>>> really
>>>> >> laden with so many caveats that it's not funny. I took our new work
>>>> >> Starlink kit home in December to give it a try and the other day
>>>> finally
>>>> >> got around to set it up. It's on a roaming subscription because our
>>>> >> badly built-up campus really isn't ideal in terms of a clear view of
>>>> the
>>>> >> sky. Oh - and did I mention that I used the Starlink Ethernet
>>>> adapter,
>>>> >> not the WiFi?
>>>> >>
>>>> >> Caveat 1: Location, location. I live in a place where the best
>>>> Starlink
>>>> >> promises is about 1/3 in terms of data rate you can actually get from
>>>> >> fibre to the home at under half of Starlink's price. Read: There are
>>>> few
>>>> >> Starlink users around. I might be the only one in my suburb.
>>>> >>
>>>> >> Caveat 2: Auckland has three Starlink gateways close by: Clevedon
>>>> (which
>>>> >> is at a stretch daytrip cycling distance from here), Te Hana and
>>>> Puwera,
>>>> >> the most distant of the three and about 130 km away from me as the
>>>> crow
>>>> >> flies. Read: My dishy can use any satellite that any of these three
>>>> can
>>>> >> see, and then depending on where I put it and how much of the
>>>> southern
>>>> >> sky it can see, maybe also the one in Hinds, 840 km away, although
>>>> that
>>>> >> is obviously stretching it a bit. Either way, that's plenty of
>>>> options
>>>> >> for my bits to travel without needing a lot of handovers. Why? Easy:
>>>> If
>>>> >> your nearest teleport is close by, then the set of satellites that
>>>> the
>>>> >> teleport can see and the set that you can see is almost the same, so
>>>> you
>>>> >> can essentially stick with the same satellite while it's in view for
>>>> you
>>>> >> because it'll also be in view for the teleport. Pretty much any bird
>>>> >> above you will do.
>>>> >>
>>>> >> And because I don't get a lot of competition from other users in my
>>>> area
>>>> >> vying for one of the few available satellites that can see both us
>>>> and
>>>> >> the teleport, this is about as good as it gets at 37S latitude. If
>>>> I'd
>>>> >> want it any better, I'd have to move a lot further south.
>>>> >>
>>>> >> It'd be interesting to hear from Jonathan what the availability of
>>>> home
>>>> >> broadband is like in the Dallas area. I note that it's at a lower
>>>> >> latitude (33N) than Auckland, but the difference isn't huge. I notice
>>>> >> two teleports each about 160 km away, which is also not too bad. I
>>>> also
>>>> >> note Starlink availability in the area is restricted at the moment -
>>>> >> oversubscribed? But if Jonathan gets good data rates, then that means
>>>> >> that competition for bird capacity can't be too bad - for whatever
>>>> reason.
>>>> >
>>>> > I'm in Southwest Oklahoma, but Dallas is the nearby Starlink gateway.
>>>> In cities like Dallas, and in Lawton where I live, there are good broadband
>>>> options. But there are also many people that live outside cities, and the
>>>> options are much worse. The low-density userbase in rural Oklahoma and
>>>> Texas probably makes for ideal Starlink conditions.
>>>> >>
>>>> >>
>>>> >> Caveat 3: Backhaul. There isn't just one queue between me and
>>>> whatever I
>>>> >> talk to in terms of my communications. Traceroute shows about 10 hops
>>>> >> between me and the University of Auckland via Starlink. That's 10
>>>> >> queues, not one. Many of them will have cross traffic. So it's a bit
>>>> >> hard to tell where our packets really get to wait or where they get
>>>> >> dropped. The insidious bit here is that a lot of them will be
>>>> between 1
>>>> >> Gb/s and 10 Gb/s links, and with a bit of cross traffic, they can all
>>>> >> turn into bottlenecks. This isn't like a narrowband GEO link of a few
>>>> >> Mb/s where it's obvious where the dominant long latency bottleneck in
>>>> >> your TCP connection's path is. Read: It's pretty hard to tell
>>>> whether a
>>>> >> drop in "speed" is due to a performance issue in the Starlink system
>>>> or
>>>> >> somewhere between Starlink's systems and the target system.
>>>> >>
>>>> >> I see RTTs here between 20 ms and 250 ms, where the physical latency
>>>> >> should be under 15 ms. So there's clearly a bit of buffer here along
>>>> the
>>>> >> chain that occasionally fills up.
>>>> >>
>>>> >> Caveat 4: Handovers. Handover between birds and teleports is
>>>> inevitably
>>>> >> associated with a change in RTT and in most cases also available
>>>> >> bandwidth. Plus your packets now arrive at a new queue on a new
>>>> >> satellite while your TCP is still trying to respond to whatever it
>>>> >> thought the queue on the previous bird was doing. Read: Whatever your
>>>> >> cwnd is immediately after a handover, it's probably not what it
>>>> should be.
>>>> >>
>>>> >> I ran a somewhat hamstrung (sky view restricted) set of four Ookla
>>>> >> speedtest.net tests each to five local servers. Average upload rate
>>>> was
>>>> >> 13 Mb/s, average down 75.5 Mb/s. Upload to the server of the ISP that
>>>> >> Starlink seems to be buying its local connectivity from (Vocus Group)
>>>> >> varied between 3.04 and 14.38 Mb/s, download between 23.33 and 52.22
>>>> >> Mb/s, with RTTs between 37 and 56 ms not correlating well to rates
>>>> >> observed. In fact, they were the ISP with consistently the worst
>>>> rates.
>>>> >>
>>>> >> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s up and
>>>> >> between 106.5 and 183.8 Mb/s down, again with RTTs badly correlating
>>>> >> with rates. Average RTT was the same as for Vocus.
>>>> >>
>>>> >> Note the variation though: More or less a factor of two between
>>>> highest
>>>> >> and lowest rates for each ISP. Did MyRepublic just get lucky in my
>>>> >> tests? Or is there something systematic behind this? Way too few
>>>> tests
>>>> >> to tell.
>>>> >>
>>>> >> What these tests do is establish a ballpark.
>>>> >>
>>>> >> I'm currently repeating tests with the dish placed on a trestle closer to
>>>> >> the heavens. This seems to have translated into fewer outages / ping
>>>> >> losses (around 1/4 of what I had yesterday with dishy on the ground
>>>> on
>>>> >> my deck). Still good enough for a lengthy video Skype call with my
>>>> folks
>>>> >> in Germany, although they did comment about reduced video quality.
>>>> But
>>>> >> maybe that was the lighting or the different background as I wasn't
>>>> in
>>>> >> my usual spot with my laptop when I called them.
>>>> >
>>>> > Clear view of the sky is king for Starlink reliability. I've got my
>>>> dishy mounted on the back fence, looking up over an empty field, so it's
>>>> pretty much best-case scenario here.
>>>> >>
>>>> >>
>>>> >> --
>>>> >>
>>>> >> ****************************************************************
>>>> >> Dr. Ulrich Speidel
>>>> >>
>>>> >> School of Computer Science
>>>> >>
>>>> >> Room 303S.594 (City Campus)
>>>> >>
>>>> >> The University of Auckland
>>>> >> u.speidel@auckland.ac.nz
>>>> >> http://www.cs.auckland.ac.nz/~ulrich/
>>>> >> ****************************************************************
>>>> >>
>>>> >>
>>>> >>
>>>> >> _______________________________________________
>>>> >> Starlink mailing list
>>>> >> Starlink@lists.bufferbloat.net
>>>> >> https://lists.bufferbloat.net/listinfo/starlink
>>>> >
>>>> > _______________________________________________
>>>> > Starlink mailing list
>>>> > Starlink@lists.bufferbloat.net
>>>> > https://lists.bufferbloat.net/listinfo/starlink
>>>>
>>>>
>>>>
>>>> --
>>>> This song goes out to all the folk that thought Stadia would work:
>>>>
>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>> Dave Täht CEO, TekLibre, LLC
>>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #1.2: Type: text/html, Size: 14812 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 20:37 ` Dave Taht
@ 2023-01-13 21:24 ` Nathan Owens
0 siblings, 0 replies; 49+ messages in thread
From: Nathan Owens @ 2023-01-13 21:24 UTC (permalink / raw)
To: Dave Taht; +Cc: Jonathan Bennett, starlink
[-- Attachment #1.1: Type: text/plain, Size: 11756 bytes --]
They are just alternating colors to make it look pretty.
On Fri, Jan 13, 2023 at 12:37 PM Dave Taht <dave.taht@gmail.com> wrote:
> That is amazingly better than what I'd seen before.
>
> (can you repost your script? What's the difference between the red and
> blue dots?)
>
> Jonathan (or all? ulrich?), could you fire up the irtt for that long
> interval, and fire off a waveform test, pause, then a cloudflare test,
> then, if you have it?
>
> flent -H oneofmyclosestservers.starlink.taht.net -t starlink_vs_irtt
> --step-size=.05 --socket-stats --test-parameter=upload_streams=4 tcp_nup
>
> Going the full monty and enabling ecn and taking a packet cap of this last
> would additionally make my day...
>
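That "full monty" (ECN plus a packet cap on the flent run) comes out, on
a Linux client, roughly as the sketch below; the interface and file name
are placeholders, and the sysctl is the stock Linux ECN knob:

  sysctl -w net.ipv4.tcp_ecn=1   # negotiate ECN on outgoing connections too
  tcpdump -i eth0 -s 128 -w flent_ecn.pcap &   # headers are enough here
  CAP=$!
  flent -H oneofmyclosestservers.starlink.taht.net -t starlink_vs_irtt \
        --step-size=.05 --socket-stats --test-parameter=upload_streams=4 tcp_nup
  kill "$CAP"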
> [... rest of the quoted thread trimmed; Nathan's message and the earlier chain are quoted in full in Dave's reply above ...]
[-- Attachment #1.2: Type: text/html, Size: 15131 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 20:49 ` Dave Taht
@ 2023-01-13 21:25 ` Luis A. Cornejo
2023-01-13 21:30 ` Nathan Owens
0 siblings, 1 reply; 49+ messages in thread
From: Luis A. Cornejo @ 2023-01-13 21:25 UTC (permalink / raw)
To: Dave Taht; +Cc: Nathan Owens, starlink
[-- Attachment #1.1: Type: text/plain, Size: 12332 bytes --]
Dave,
Here is a run the way I think you wanted it.
irtt running for 5 min to your dallas server, followed by a waveform
test, then a few seconds of inactivity, a cloudflare test, a few more
seconds of nothing, and a flent test to dallas. The packet capture is
currently uploading (will be done in 20 min or so); the irtt JSON is
also in there (.zip file):
https://drive.google.com/drive/folders/1FLWqrzNcM8aK-ZXQywNkZGFR81Fnzn-F?usp=share_link
-Luis
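A sketch of that sequence as a script, so later runs stay comparable.
Assumptions: the hostname follows the <city>.starlink.taht.net pattern
Dave listed, the interface and file names are placeholders, and the
browser-driven waveform and cloudflare tests are started by hand at the
prompts:

  #!/bin/sh
  # run as root so tcpdump can capture
  SRV=dallas.starlink.taht.net
  IFACE=eth0

  tcpdump -i "$IFACE" -s 128 -w starlink_run.pcap &  # capture the session
  CAP=$!

  irtt client -i3ms -d300s "$SRV" -o irtt_run.json &  # 5 min of 3 ms probes
  IRTT=$!

  sleep 30
  echo "start the waveform test in the browser now";   sleep 90
  echo "idle gap";                                     sleep 15
  echo "start the cloudflare test in the browser now"; sleep 90

  wait "$IRTT"   # let irtt finish first, as in Jonathan's run
  flent -H "$SRV" -t starlink_vs_irtt --step-size=.05 \
        --socket-stats --test-parameter=upload_streams=4 tcp_nup
  kill "$CAP"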
On Fri, Jan 13, 2023 at 2:50 PM Dave Taht via Starlink <
starlink@lists.bufferbloat.net> wrote:
> [... quoted message trimmed; Dave's reply and the chain it quotes appear in full above ...]
[-- Attachment #1.2: Type: text/html, Size: 16168 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 21:25 ` Luis A. Cornejo
@ 2023-01-13 21:30 ` Nathan Owens
2023-01-13 22:09 ` Jonathan Bennett
0 siblings, 1 reply; 49+ messages in thread
From: Nathan Owens @ 2023-01-13 21:30 UTC (permalink / raw)
To: Luis A. Cornejo; +Cc: Dave Taht, starlink
[-- Attachment #1.1.1: Type: text/plain, Size: 12927 bytes --]
Here's Luis's run -- the top line below the edge of the graph is 200ms
[image: Screenshot 2023-01-13 at 1.30.03 PM.png]
On Fri, Jan 13, 2023 at 1:25 PM Luis A. Cornejo <luis.a.cornejo@gmail.com>
wrote:
> [... quoted message trimmed; Luis's message appears in full above ...]
[-- Attachment #1.1.2: Type: text/html, Size: 16759 bytes --]
[-- Attachment #1.2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #1.3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #2: lc_starlink_irtt.pdf --]
[-- Type: application/pdf, Size: 697557 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 21:30 ` Nathan Owens
@ 2023-01-13 22:09 ` Jonathan Bennett
2023-01-13 22:30 ` Luis A. Cornejo
` (2 more replies)
0 siblings, 3 replies; 49+ messages in thread
From: Jonathan Bennett @ 2023-01-13 22:09 UTC (permalink / raw)
To: Nathan Owens; +Cc: Luis A. Cornejo, starlink
[-- Attachment #1.1: Type: text/plain, Size: 13994 bytes --]
The irtt run finished a few seconds before the flent run, but here are the
results:
https://drive.google.com/file/d/1FKve13ssUMW1LLWOXLM2931Yx6uMHw8K/view?usp=share_link
https://drive.google.com/file/d/1ZXd64A0pfUedLr3FyhDNTHA7vxv8S2Gk/view?usp=share_link
https://drive.google.com/file/d/1rx64UPQHHz3IMNiJtb1oFqtqw2DvhvEE/view?usp=share_link
[image: image.png]
[image: image.png]
Jonathan Bennett
Hackaday.com
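Until the R/gnuplot script Dave mentions resurfaces, a quick way to
eyeball the RTT series in one of these irtt JSON files (field names and
nanosecond units assumed from irtt's -o layout; lost probes come out as
NaN, so gnuplot breaks the line there):

  jq -r '.round_trips | sort_by(.seqno) | .[] | .delay.rtt // "NaN"' \
      irtt_run.json \
    | gnuplot -p -e 'set ylabel "RTT (ns)"; plot "-" with lines title "irtt"'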
On Fri, Jan 13, 2023 at 3:30 PM Nathan Owens via Starlink <
starlink@lists.bufferbloat.net> wrote:
> Here's Luis's run -- the top line below the edge of the graph is 200ms
> [image: Screenshot 2023-01-13 at 1.30.03 PM.png]
>
>
> On Fri, Jan 13, 2023 at 1:25 PM Luis A. Cornejo <luis.a.cornejo@gmail.com>
> wrote:
>
>> Dave,
>>
>> Here is a run the way I think you wanted it.
>>
>> irtt running for 5 min to your dallas server, followed by a waveform
>> test, then a few seconds of inactivity, cloudflare test, a few more secs of
>> nothing, flent test to dallas. Packet capture is currently uploading (will
>> be done in 20 min or so), irtt JSON also in there (.zip file):
>>
>>
>> https://drive.google.com/drive/folders/1FLWqrzNcM8aK-ZXQywNkZGFR81Fnzn-F?usp=share_link
>>
>> -Luis
>>
>> On Fri, Jan 13, 2023 at 2:50 PM Dave Taht via Starlink <
>> starlink@lists.bufferbloat.net> wrote:
>>
>>>
>>>
>>> On Fri, Jan 13, 2023 at 12:30 PM Nathan Owens <nathan@nathan.io> wrote:
>>>
>>>> Here's the data visualization for Johnathan's Data
>>>>
>>>> [image: Screenshot 2023-01-13 at 12.29.15 PM.png]
>>>>
>>>> You can see the path change at :12, :27, :42, :57 after the minute.
>>>> Some paths are clearly busier than others with increased loss, latency, and
>>>> jitter.
>>>>
>>>
>>> I am so glad to see loss and bounded delay here. Also a bit of rigor
>>> regarding what traffic was active locally vs on the path would be nice,
>>> although it seems to line up with the known 15s starlink switchover thing
>>> (need a name for this), in this case, doing a few speedtests
>>> while that irtt is running would show the impact(s) of whatever else
>>> they are up to.
>>>
>>> It would also be my hope that the loss distribution in the middle
>>> portion of this data is good, not bursty, but we don't have a tool to take
>>> apart that. (I am so hopeless at json)
>>>
>>>
>>>
>>>>
>>>>
>>>> On Fri, Jan 13, 2023 at 10:09 AM Nathan Owens <nathan@nathan.io> wrote:
>>>>
>>>>> I’ll run my visualization code on this result this afternoon and
>>>>> report back!
>>>>>
>>>>> On Fri, Jan 13, 2023 at 9:41 AM Jonathan Bennett via Starlink <
>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>
>>>>>> The irtt command, run with normal, light usage:
>>>>>> https://drive.google.com/file/d/1SiVCiUYnx7nDTxIVOY5w-z20S2O059rA/view?usp=share_link
>>>>>>
>>>>>> Jonathan Bennett
>>>>>> Hackaday.com
>>>>>>
>>>>>>
>>>>>> On Fri, Jan 13, 2023 at 11:26 AM Dave Taht <dave.taht@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> packet caps would be nice... all this is very exciting news.
>>>>>>>
>>>>>>> I'd so love for one or more of y'all reporting such great uplink
>>>>>>> results nowadays to duplicate and re-plot the original irtt tests we
>>>>>>> did:
>>>>>>>
>>>>>>> irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net -o
>>>>>>> whatever.json
>>>>>>>
>>>>>>> They MUST have changed their scheduling to get such amazing uplink
>>>>>>> results, in addition to better queue management.
>>>>>>>
>>>>>>> (for the record, my servers are de, london, fremont, sydney, dallas,
>>>>>>> newark, atlanta, singapore, mumbai)
>>>>>>>
>>>>>>> There's an R and gnuplot script for plotting that output around here
>>>>>>> somewhere (I have largely personally put down the starlink project,
>>>>>>> loaning out mine) - that went by on this list... I should have
>>>>>>> written
>>>>>>> a blog entry so I can find that stuff again.
>>>>>>>
>>>>>>> On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink
>>>>>>> <starlink@lists.bufferbloat.net> wrote:
>>>>>>> >
>>>>>>> >
>>>>>>> > On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink <
>>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>> >>
>>>>>>> >> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
>>>>>>> >> >
>>>>>>> >> > From Auckland, New Zealand, using a roaming subscription, it
>>>>>>> puts me
>>>>>>> >> > in touch with a server 2000 km away. OK then:
>>>>>>> >> >
>>>>>>> >> >
>>>>>>> >> > IP address: nix six.
>>>>>>> >> >
>>>>>>> >> > My thoughts shall follow later.
>>>>>>> >>
>>>>>>> >> OK, so here we go.
>>>>>>> >>
>>>>>>> >> I'm always a bit skeptical when it comes to speed tests - they're
>>>>>>> really
>>>>>>> >> laden with so many caveats that it's not funny. I took our new
>>>>>>> work
>>>>>>> >> Starlink kit home in December to give it a try and the other day
>>>>>>> finally
>>>>>>> >> got around to set it up. It's on a roaming subscription because
>>>>>>> our
>>>>>>> >> badly built-up campus really isn't ideal in terms of a clear view
>>>>>>> of the
>>>>>>> >> sky. Oh - and did I mention that I used the Starlink Ethernet
>>>>>>> adapter,
>>>>>>> >> not the WiFi?
>>>>>>> >>
>>>>>>> >> Caveat 1: Location, location. I live in a place where the best
>>>>>>> Starlink
>>>>>>> >> promises is about 1/3 in terms of data rate you can actually get
>>>>>>> from
>>>>>>> >> fibre to the home at under half of Starlink's price. Read: There
>>>>>>> are few
>>>>>>> >> Starlink users around. I might be the only one in my suburb.
>>>>>>> >>
>>>>>>> >> Caveat 2: Auckland has three Starlink gateways close by: Clevedon
>>>>>>> (which
>>>>>>> >> is at a stretch daytrip cycling distance from here), Te Hana and
>>>>>>> Puwera,
>>>>>>> >> the most distant of the three and about 130 km away from me as
>>>>>>> the crow
>>>>>>> >> flies. Read: My dishy can use any satellite that any of these
>>>>>>> three can
>>>>>>> >> see, and then depending on where I put it and how much of the
>>>>>>> southern
>>>>>>> >> sky it can see, maybe also the one in Hinds, 840 km away,
>>>>>>> although that
>>>>>>> >> is obviously stretching it a bit. Either way, that's plenty of
>>>>>>> options
>>>>>>> >> for my bits to travel without needing a lot of handovers. Why?
>>>>>>> Easy: If
>>>>>>> >> your nearest teleport is close by, then the set of satellites
>>>>>>> that the
>>>>>>> >> teleport can see and the set that you can see is almost the same,
>>>>>>> so you
>>>>>>> >> can essentially stick with the same satellite while it's in view
>>>>>>> for you
>>>>>>> >> because it'll also be in view for the teleport. Pretty much any
>>>>>>> bird
>>>>>>> >> above you will do.
>>>>>>> >>
>>>>>>> >> And because I don't get a lot of competition from other users in
>>>>>>> my area
>>>>>>> >> vying for one of the few available satellites that can see both
>>>>>>> us and
>>>>>>> >> the teleport, this is about as good as it gets at 37S latitude.
>>>>>>> If I'd
>>>>>>> >> want it any better, I'd have to move a lot further south.
>>>>>>> >>
>>>>>>> >> It'd be interesting to hear from Jonathan what the availability
>>>>>>> of home
>>>>>>> >> broadband is like in the Dallas area. I note that it's at a lower
>>>>>>> >> latitude (33N) than Auckland, but the difference isn't huge. I
>>>>>>> notice
>>>>>>> >> two teleports each about 160 km away, which is also not too bad.
>>>>>>> I also
>>>>>>> >> note Starlink availability in the area is restricted at the
>>>>>>> moment -
>>>>>>> >> oversubscribed? But if Jonathan gets good data rates, then that
>>>>>>> means
>>>>>>> >> that competition for bird capacity can't be too bad - for
>>>>>>> whatever reason.
>>>>>>> >
>>>>>>> > I'm in Southwest Oklahoma, but Dallas is the nearby Starlink
>>>>>>> gateway. In cities like Dallas, and Lawton where I live, there are good
>>>>>>> broadband options. But there are also many people who live outside cities,
>>>>>>> and the options there are much worse. The low-density userbase in rural Oklahoma
>>>>>>> and Texas probably makes for ideal conditions for Starlink.
>>>>>>> >>
>>>>>>> >>
>>>>>>> >> Caveat 3: Backhaul. There isn't just one queue between me and
>>>>>>> whatever I
>>>>>>> >> talk to in terms of my communications. Traceroute shows about 10
>>>>>>> hops
>>>>>>> >> between me and the University of Auckland via Starlink. That's 10
>>>>>>> >> queues, not one. Many of them will have cross traffic. So it's a
>>>>>>> bit
>>>>>>> >> hard to tell where our packets really get to wait or where they
>>>>>>> get
>>>>>>> >> dropped. The insidious bit here is that a lot of them will be
>>>>>>> between 1
>>>>>>> >> Gb/s and 10 Gb/s links, and with a bit of cross traffic, they can
>>>>>>> all
>>>>>>> >> turn into bottlenecks. This isn't like a narrowband GEO link of a
>>>>>>> few
>>>>>>> >> Mb/s where it's obvious where the dominant long latency
>>>>>>> bottleneck in
>>>>>>> >> your TCP connection's path is. Read: It's pretty hard to tell
>>>>>>> whether a
>>>>>>> >> drop in "speed" is due to a performance issue in the Starlink
>>>>>>> system or
>>>>>>> >> somewhere between Starlink's systems and the target system.
>>>>>>> >>
>>>>>>> >> I see RTTs here between 20 ms and 250 ms, where the physical
>>>>>>> latency
>>>>>>> >> should be under 15 ms. So there's clearly a bit of buffer here
>>>>>>> along the
>>>>>>> >> chain that occasionally fills up.
>>>>>>> >>
>>>>>>> >> Caveat 4: Handovers. Handover between birds and teleports is
>>>>>>> inevitably
>>>>>>> >> associated with a change in RTT and in most cases also available
>>>>>>> >> bandwidth. Plus your packets now arrive at a new queue on a new
>>>>>>> >> satellite while your TCP is still trying to respond to whatever it
>>>>>>> >> thought the queue on the previous bird was doing. Read: Whatever
>>>>>>> your
>>>>>>> >> cwnd is immediately after a handover, it's probably not what it
>>>>>>> should be.
>>>>>>> >>
>>>>>>> >> I ran a somewhat hamstrung (sky view restricted) set of four Ookla
>>>>>>> >> speedtest.net tests each to five local servers. Average upload
>>>>>>> rate was
>>>>>>> >> 13 Mb/s, average down 75.5 Mb/s. Upload to the server of the ISP
>>>>>>> that
>>>>>>> >> Starlink seems to be buying its local connectivity from (Vocus
>>>>>>> Group)
>>>>>>> >> varied between 3.04 and 14.38 Mb/s, download between 23.33 and
>>>>>>> 52.22
>>>>>>> >> Mb/s, with RTTs between 37 and 56 ms not correlating well to rates
>>>>>>> >> observed. In fact, they were the ISP with consistently the worst
>>>>>>> rates.
>>>>>>> >>
>>>>>>> >> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s up
>>>>>>> and
>>>>>>> >> between 106.5 and 183.8 Mb/s down, again with RTTs badly
>>>>>>> correlating
>>>>>>> >> with rates. Average RTT was the same as for Vocus.
>>>>>>> >>
>>>>>>> >> Note the variation though: More or less a factor of two between
>>>>>>> highest
>>>>>>> >> and lowest rates for each ISP. Did MyRepublic just get lucky in my
>>>>>>> >> tests? Or is there something systematic behind this? Way too few
>>>>>>> tests
>>>>>>> >> to tell.
>>>>>>> >>
>>>>>>> >> What these tests do is establish a ballpark.
>>>>>>> >>
>>>>>>> >> I'm currently repeating tests with the dish placed on a trestle
>>>>>>> closer to
>>>>>>> >> the heavens. This seems to have translated into fewer outages /
>>>>>>> ping
>>>>>>> >> losses (around 1/4 of what I had yesterday with dishy on the
>>>>>>> ground on
>>>>>>> >> my deck). Still good enough for a lengthy video Skype call with
>>>>>>> my folks
>>>>>>> >> in Germany, although they did comment about reduced video
>>>>>>> quality. But
>>>>>>> >> maybe that was the lighting or the different background as I
>>>>>>> wasn't in
>>>>>>> >> my usual spot with my laptop when I called them.
>>>>>>> >
>>>>>>> > Clear view of the sky is king for Starlink reliability. I've got
>>>>>>> my dishy mounted on the back fence, looking up over an empty field, so it's
>>>>>>> pretty much best-case scenario here.
>>>>>>> >>
>>>>>>> >>
>>>>>>> >> --
>>>>>>> >>
>>>>>>> >> ****************************************************************
>>>>>>> >> Dr. Ulrich Speidel
>>>>>>> >>
>>>>>>> >> School of Computer Science
>>>>>>> >>
>>>>>>> >> Room 303S.594 (City Campus)
>>>>>>> >>
>>>>>>> >> The University of Auckland
>>>>>>> >> u.speidel@auckland.ac.nz
>>>>>>> >> http://www.cs.auckland.ac.nz/~ulrich/
>>>>>>> >> ****************************************************************
>>>>>>> >>
>>>>>>> >>
>>>>>>> >>
>>>>>>> >> _______________________________________________
>>>>>>> >> Starlink mailing list
>>>>>>> >> Starlink@lists.bufferbloat.net
>>>>>>> >> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>> >
>>>>>>> > _______________________________________________
>>>>>>> > Starlink mailing list
>>>>>>> > Starlink@lists.bufferbloat.net
>>>>>>> > https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> This song goes out to all the folk that thought Stadia would work:
>>>>>>>
>>>>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>
>>>>>> _______________________________________________
>>>>>> Starlink mailing list
>>>>>> Starlink@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>
>>>>>
>>>
>>> --
>>> This song goes out to all the folk that thought Stadia would work:
>>>
>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>> Dave Täht CEO, TekLibre, LLC
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
[-- Attachment #1.2: Type: text/html, Size: 18652 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #4: image.png --]
[-- Type: image/png, Size: 296989 bytes --]
[-- Attachment #5: image.png --]
[-- Type: image/png, Size: 222655 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 22:09 ` Jonathan Bennett
@ 2023-01-13 22:30 ` Luis A. Cornejo
2023-01-13 22:32 ` Dave Taht
2023-01-13 23:44 ` Dave Taht
2 siblings, 0 replies; 49+ messages in thread
From: Luis A. Cornejo @ 2023-01-13 22:30 UTC (permalink / raw)
To: Jonathan Bennett; +Cc: Nathan Owens, starlink
[-- Attachment #1.1: Type: text/plain, Size: 14590 bytes --]
Jonathan,
Is your IPv6 address routable and are you accepting pings?
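If it is, a quick check from any outside Linux host would be something like
the following, with 2001:db8::1 standing in for the real address (Starlink's
IPv4 side sits behind CGNAT, so IPv6 is usually the only way in from outside):

ping -6 -c 10 2001:db8::1       # basic ICMPv6 reachability
mtr -6 -rwc 100 2001:db8::1     # per-hop loss along the path, if mtr is installed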
-Luis
On Fri, Jan 13, 2023 at 4:09 PM Jonathan Bennett <
jonathanbennett@hackaday.com> wrote:
> The irtt run finished a few seconds before the flent run, but here are the
> results:
>
>
> https://drive.google.com/file/d/1FKve13ssUMW1LLWOXLM2931Yx6uMHw8K/view?usp=share_link
>
> https://drive.google.com/file/d/1ZXd64A0pfUedLr3FyhDNTHA7vxv8S2Gk/view?usp=share_link
>
> https://drive.google.com/file/d/1rx64UPQHHz3IMNiJtb1oFqtqw2DvhvEE/view?usp=share_link
>
>
> [image: image.png]
> [image: image.png]
>
>
> Jonathan Bennett
> Hackaday.com
>
>
> On Fri, Jan 13, 2023 at 3:30 PM Nathan Owens via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
>> Here's Luis's run -- the top line below the edge of the graph is 200ms
>> [image: Screenshot 2023-01-13 at 1.30.03 PM.png]
>>
>>
>> On Fri, Jan 13, 2023 at 1:25 PM Luis A. Cornejo <luis.a.cornejo@gmail.com>
>> wrote:
>>
>>> Dave,
>>>
>>> Here is a run the way I think you wanted it.
>>>
>>> irtt running for 5 min to your dallas server, followed by a waveform
>>> test, then a few seconds of inactivity, cloudflare test, a few more secs of
>>> nothing, flent test to dallas. Packet capture is currently uploading (will
>>> be done in 20 min or so), irtt JSON also in there (.zip file):
>>>
>>>
>>> https://drive.google.com/drive/folders/1FLWqrzNcM8aK-ZXQywNkZGFR81Fnzn-F?usp=share_link
>>>
>>> -Luis
>>>
>>> On Fri, Jan 13, 2023 at 2:50 PM Dave Taht via Starlink <
>>> starlink@lists.bufferbloat.net> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Jan 13, 2023 at 12:30 PM Nathan Owens <nathan@nathan.io> wrote:
>>>>
>>>>> Here's the data visualization for Jonathan's data
>>>>>
>>>>> [image: Screenshot 2023-01-13 at 12.29.15 PM.png]
>>>>>
>>>>> You can see the path change at :12, :27, :42, :57 after the minute.
>>>>> Some paths are clearly busier than others with increased loss, latency, and
>>>>> jitter.
>>>>>
>>>>
>>>> I am so glad to see loss and bounded delay here. Also a bit of rigor
>>>> regarding what traffic was active locally vs on the path would be nice,
>>>> although it seems to line up with the known 15s starlink switchover thing
>>>> (need a name for this). In this case, doing a few speedtests
>>>> while that irtt is running would show the impact(s) of whatever else
>>>> they are up to.
>>>>
>>>> It would also be my hope that the loss distribution in the middle
>>>> portion of this data is good, not bursty, but we don't have a tool to take
>>>> that apart. (I am so hopeless at json)
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>> On Fri, Jan 13, 2023 at 10:09 AM Nathan Owens <nathan@nathan.io>
>>>>> wrote:
>>>>>
>>>>>> I’ll run my visualization code on this result this afternoon and
>>>>>> report back!
>>>>>>
>>>>>> On Fri, Jan 13, 2023 at 9:41 AM Jonathan Bennett via Starlink <
>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>
>>>>>>> The irtt command, run with normal, light usage:
>>>>>>> https://drive.google.com/file/d/1SiVCiUYnx7nDTxIVOY5w-z20S2O059rA/view?usp=share_link
>>>>>>>
>>>>>>> Jonathan Bennett
>>>>>>> Hackaday.com
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jan 13, 2023 at 11:26 AM Dave Taht <dave.taht@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> packet caps would be nice... all this is very exciting news.
>>>>>>>>
>>>>>>>> I'd so love for one or more of y'all reporting such great uplink
>>>>>>>> results nowadays to duplicate and re-plot the original irtt tests we
>>>>>>>> did:
>>>>>>>>
>>>>>>>> irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net -o
>>>>>>>> whatever.json
>>>>>>>>
>>>>>>>> They MUST have changed their scheduling to get such amazing uplink
>>>>>>>> results, in addition to better queue management.
>>>>>>>>
>>>>>>>> (for the record, my servers are de, london, fremont, sydney, dallas,
>>>>>>>> newark, atlanta, singapore, mumbai)
>>>>>>>>
>>>>>>>> There's an R and gnuplot script for plotting that output around here
>>>>>>>> somewhere (I have largely personally put down the starlink project,
>>>>>>>> loaning out mine) - that went by on this list... I should have
>>>>>>>> written
>>>>>>>> a blog entry so I can find that stuff again.
>>>>>>>>
>>>>>>>> On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink
>>>>>>>> <starlink@lists.bufferbloat.net> wrote:
>>>>>>>> > [Ulrich Speidel's post and the replies to it trimmed here; quoted in full in the first message above]
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> This song goes out to all the folk that thought Stadia would work:
>>>>>>>>
>>>>>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Starlink mailing list
>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>
>>>>>>
>>>>
>>>> --
>>>> This song goes out to all the folk that thought Stadia would work:
>>>>
>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>> Dave Täht CEO, TekLibre, LLC
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>
>>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>
[-- Attachment #1.2: Type: text/html, Size: 19177 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #4: image.png --]
[-- Type: image/png, Size: 296989 bytes --]
[-- Attachment #5: image.png --]
[-- Type: image/png, Size: 222655 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 22:09 ` Jonathan Bennett
2023-01-13 22:30 ` Luis A. Cornejo
@ 2023-01-13 22:32 ` Dave Taht
2023-01-13 22:36 ` Luis A. Cornejo
2023-01-13 22:42 ` Jonathan Bennett
2023-01-13 23:44 ` Dave Taht
2 siblings, 2 replies; 49+ messages in thread
From: Dave Taht @ 2023-01-13 22:32 UTC (permalink / raw)
To: Jonathan Bennett; +Cc: Nathan Owens, starlink
[-- Attachment #1.1.1: Type: text/plain, Size: 15803 bytes --]
Thank you all. In both of the flent cases there appear to be no tcp_rtt
statistics; did you run with --socket-stats?
(That seems to be a new bug, both in how the samples are taken and in where
it lives: either newer Linux kernels or flent itself. I hate to ask, but
could you install the git version of flent?)
Thank you for the packet capture!!!! I'm still downloading.
Anyway, the "celabon" flent plot clearly shows the inverse relationship
between latency and throughput still in this starlink terminal, so there is
no AQM in play there, darn it. (in my fq_codeled world the latency stays
flat, only the throughput changes)
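For reference, the "fq_codeled world" above is just a router egress
configured along these lines. A minimal sketch, assuming Linux tc, eth0 as
the WAN-facing interface, and a made-up 6 Mbit uplink rate:

# shape to just under the link rate so the queue forms where cake can manage it
tc qdisc replace dev eth0 root cake bandwidth 6mbit
# or, unshaped, plain flow queueing plus CoDel:
tc qdisc replace dev eth0 root fq_codel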
So I am incredibly puzzled now by the ostensibly awesome waveform test
result (and need to look at that capture, and/or get tcp_rtt stats).
The other plot (Luis's) shows incredibly consistent latency and bounded
throughput at about 6 Mbit/s.
Patiently awaiting that download to complete.
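In the meantime, for anyone else staring at the irtt JSON, a couple of jq
one-liners go a long way. A sketch, assuming irtt's usual output layout (a
top-level round_trips array whose entries carry seqno, lost, and delay.rtt
in nanoseconds; check the field names against your irtt version):

# one row per packet: seqno, rtt in ms (0 when lost), lost flag
jq -r '.round_trips[] | [.seqno, (.delay.rtt? // 0) / 1e6, .lost] | @tsv' whatever.json

# longest run of consecutive losses, as a crude burstiness check
jq '[.round_trips[].lost != "false"] |
    reduce .[] as $l ({cur: 0, max: 0};
      if $l then {cur: (.cur + 1), max: ([.max, .cur + 1] | max)}
      else {cur: 0, max: .max} end) | .max' whatever.json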
On Fri, Jan 13, 2023 at 2:10 PM Jonathan Bennett via Starlink <
starlink@lists.bufferbloat.net> wrote:
> [quoted thread trimmed; identical to Jonathan's message and the thread quoted above]
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #1.1.2: Type: text/html, Size: 20910 bytes --]
[-- Attachment #1.2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #1.3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #1.4: image.png --]
[-- Type: image/png, Size: 296989 bytes --]
[-- Attachment #1.5: image.png --]
[-- Type: image/png, Size: 222655 bytes --]
[-- Attachment #2: tcp_nup_-_starlink_vs_irtt.png --]
[-- Type: image/png, Size: 205282 bytes --]
[-- Attachment #3: tcp_nup_-_2starlink_vs_irtt.png --]
[-- Type: image/png, Size: 271672 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 22:32 ` Dave Taht
@ 2023-01-13 22:36 ` Luis A. Cornejo
2023-01-13 22:42 ` Jonathan Bennett
1 sibling, 0 replies; 49+ messages in thread
From: Luis A. Cornejo @ 2023-01-13 22:36 UTC (permalink / raw)
To: Dave Taht; +Cc: Jonathan Bennett, starlink
[-- Attachment #1.1: Type: text/plain, Size: 16696 bytes --]
I ran it with:
flent -H dallas.starlink.taht.net -t starlink_vs_irtt --step-size=.05
--socket-stats --test-parameter=upload_streams=4 tcp_nup
On Fri, Jan 13, 2023 at 4:33 PM Dave Taht via Starlink <
starlink@lists.bufferbloat.net> wrote:
> [quoted thread trimmed; see Dave's message just above]
[-- Attachment #1.2: Type: text/html, Size: 21845 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #4: image.png --]
[-- Type: image/png, Size: 296989 bytes --]
[-- Attachment #5: image.png --]
[-- Type: image/png, Size: 222655 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 22:32 ` Dave Taht
2023-01-13 22:36 ` Luis A. Cornejo
@ 2023-01-13 22:42 ` Jonathan Bennett
2023-01-13 22:49 ` Dave Taht
1 sibling, 1 reply; 49+ messages in thread
From: Jonathan Bennett @ 2023-01-13 22:42 UTC (permalink / raw)
To: Dave Taht; +Cc: Nathan Owens, starlink
[-- Attachment #1.1.1: Type: text/plain, Size: 16562 bytes --]
Flent run using the git version: ./run-flent -H atlanta.starlink.taht.net -t
starlink_vs_irtt --step-size=.05 --socket-stats
--test-parameter=upload_streams=4 tcp_nup
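If the tcp_rtt series still comes up empty with this build, listing what
actually landed in the file is a quick sanity check (treat the flag as an
assumption to verify against your flent version):

./run-flent -i the-result.flent.gz --list-plots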
Jonathan Bennett
Hackaday.com
On Fri, Jan 13, 2023 at 4:33 PM Dave Taht <dave.taht@gmail.com> wrote:
> [quoted thread trimmed; see Dave's message and the thread quoted above]
>>>>>>>> On Fri, Jan 13, 2023 at 11:26 AM Dave Taht <dave.taht@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> packet caps would be nice... all this is very exciting news.
>>>>>>>>>
>>>>>>>>> I'd so love for one or more of y'all reporting such great uplink
>>>>>>>>> results nowadays to duplicate and re-plot the original irtt tests
>>>>>>>>> we
>>>>>>>>> did:
>>>>>>>>>
>>>>>>>>> irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net
>>>>>>>>> -o whatever.json
>>>>>>>>>
>>>>>>>>> They MUST have changed their scheduling to get such amazing uplink
>>>>>>>>> results, in addition to better queue management.
>>>>>>>>>
>>>>>>>>> (for the record, my servers are de, london, fremont, sydney,
>>>>>>>>> dallas,
>>>>>>>>> newark, atlanta, singapore, mumbai)
>>>>>>>>>
>>>>>>>>> There's an R and gnuplot script for plotting that output around
>>>>>>>>> here
>>>>>>>>> somewhere (I have largely personally put down the starlink project,
>>>>>>>>> loaning out mine) - that went by on this list... I should have
>>>>>>>>> written
>>>>>>>>> a blog entry so I can find that stuff again.
>>>>>>>>>
>>>>>>>>> On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink
>>>>>>>>> <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>> >
>>>>>>>>> >
>>>>>>>>> > On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink <
>>>>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>>>> >>
>>>>>>>>> >> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
>>>>>>>>> >> >
>>>>>>>>> >> > From Auckland, New Zealand, using a roaming subscription, it
>>>>>>>>> puts me
>>>>>>>>> >> > in touch with a server 2000 km away. OK then:
>>>>>>>>> >> >
>>>>>>>>> >> >
>>>>>>>>> >> > IP address: nix six.
>>>>>>>>> >> >
>>>>>>>>> >> > My thoughts shall follow later.
>>>>>>>>> >>
>>>>>>>>> >> OK, so here we go.
>>>>>>>>> >>
>>>>>>>>> >> I'm always a bit skeptical when it comes to speed tests -
>>>>>>>>> they're really
>>>>>>>>> >> laden with so many caveats that it's not funny. I took our new
>>>>>>>>> work
>>>>>>>>> >> Starlink kit home in December to give it a try and the other
>>>>>>>>> day finally
>>>>>>>>> >> got around to set it up. It's on a roaming subscription because
>>>>>>>>> our
>>>>>>>>> >> badly built-up campus really isn't ideal in terms of a clear
>>>>>>>>> view of the
>>>>>>>>> >> sky. Oh - and did I mention that I used the Starlink Ethernet
>>>>>>>>> adapter,
>>>>>>>>> >> not the WiFi?
>>>>>>>>> >>
>>>>>>>>> >> Caveat 1: Location, location. I live in a place where the best
>>>>>>>>> Starlink
>>>>>>>>> >> promises is about 1/3 in terms of data rate you can actually
>>>>>>>>> get from
>>>>>>>>> >> fibre to the home at under half of Starlink's price. Read:
>>>>>>>>> There are few
>>>>>>>>> >> Starlink users around. I might be the only one in my suburb.
>>>>>>>>> >>
>>>>>>>>> >> Caveat 2: Auckland has three Starlink gateways close by:
>>>>>>>>> Clevedon (which
>>>>>>>>> >> is at a stretch daytrip cycling distance from here), Te Hana
>>>>>>>>> and Puwera,
>>>>>>>>> >> the most distant of the three and about 130 km away from me as
>>>>>>>>> the crow
>>>>>>>>> >> flies. Read: My dishy can use any satellite that any of these
>>>>>>>>> three can
>>>>>>>>> >> see, and then depending on where I put it and how much of the
>>>>>>>>> southern
>>>>>>>>> >> sky it can see, maybe also the one in Hinds, 840 km away,
>>>>>>>>> although that
>>>>>>>>> >> is obviously stretching it a bit. Either way, that's plenty of
>>>>>>>>> options
>>>>>>>>> >> for my bits to travel without needing a lot of handovers. Why?
>>>>>>>>> Easy: If
>>>>>>>>> >> your nearest teleport is close by, then the set of satellites
>>>>>>>>> that the
>>>>>>>>> >> teleport can see and the set that you can see is almost the
>>>>>>>>> same, so you
>>>>>>>>> >> can essentially stick with the same satellite while it's in
>>>>>>>>> view for you
>>>>>>>>> >> because it'll also be in view for the teleport. Pretty much any
>>>>>>>>> bird
>>>>>>>>> >> above you will do.
>>>>>>>>> >>
>>>>>>>>> >> And because I don't get a lot of competition from other users
>>>>>>>>> in my area
>>>>>>>>> >> vying for one of the few available satellites that can see both
>>>>>>>>> us and
>>>>>>>>> >> the teleport, this is about as good as it gets at 37S latitude.
>>>>>>>>> If I'd
>>>>>>>>> >> want it any better, I'd have to move a lot further south.
>>>>>>>>> >>
>>>>>>>>> >> It'd be interesting to hear from Jonathan what the availability
>>>>>>>>> of home
>>>>>>>>> >> broadband is like in the Dallas area. I note that it's at a
>>>>>>>>> lower
>>>>>>>>> >> latitude (33N) than Auckland, but the difference isn't huge. I
>>>>>>>>> notice
>>>>>>>>> >> two teleports each about 160 km away, which is also not too
>>>>>>>>> bad. I also
>>>>>>>>> >> note Starlink availability in the area is restricted at the
>>>>>>>>> moment -
>>>>>>>>> >> oversubscribed? But if Jonathan gets good data rates, then that
>>>>>>>>> means
>>>>>>>>> >> that competition for bird capacity can't be too bad - for
>>>>>>>>> whatever reason.
>>>>>>>>> >
>>>>>>>>> > I'm in Southwest Oklahoma, but Dallas is the nearby Starlink
>>>>>>>>> gateway. In cities, like Dallas, and Lawton where I live, there are good
>>>>>>>>> broadband options. But there are also many people that live outside cities,
>>>>>>>>> and the options are much worse. The low density userbase in rural Oklahoma
>>>>>>>>> and Texas is probably ideal conditions for Starlink.
>>>>>>>>> >>
>>>>>>>>> >>
>>>>>>>>> >> Caveat 3: Backhaul. There isn't just one queue between me and
>>>>>>>>> whatever I
>>>>>>>>> >> talk to in terms of my communications. Traceroute shows about
>>>>>>>>> 10 hops
>>>>>>>>> >> between me and the University of Auckland via Starlink. That's
>>>>>>>>> 10
>>>>>>>>> >> queues, not one. Many of them will have cross traffic. So it's
>>>>>>>>> a bit
>>>>>>>>> >> hard to tell where our packets really get to wait or where they
>>>>>>>>> get
>>>>>>>>> >> dropped. The insidious bit here is that a lot of them will be
>>>>>>>>> between 1
>>>>>>>>> >> Gb/s and 10 Gb/s links, and with a bit of cross traffic, they
>>>>>>>>> can all
>>>>>>>>> >> turn into bottlenecks. This isn't like a narrowband GEO link of
>>>>>>>>> a few
>>>>>>>>> >> Mb/s where it's obvious where the dominant long latency
>>>>>>>>> bottleneck in
>>>>>>>>> >> your TCP connection's path is. Read: It's pretty hard to tell
>>>>>>>>> whether a
>>>>>>>>> >> drop in "speed" is due to a performance issue in the Starlink
>>>>>>>>> system or
>>>>>>>>> >> somewhere between Starlink's systems and the target system.
>>>>>>>>> >>
>>>>>>>>> >> I see RTTs here between 20 ms and 250 ms, where the physical
>>>>>>>>> latency
>>>>>>>>> >> should be under 15 ms. So there's clearly a bit of buffer here
>>>>>>>>> along the
>>>>>>>>> >> chain that occasionally fills up.
>>>>>>>>> >>
>>>>>>>>> >> Caveat 4: Handovers. Handover between birds and teleports is
>>>>>>>>> inevitably
>>>>>>>>> >> associated with a change in RTT and in most cases also available
>>>>>>>>> >> bandwidth. Plus your packets now arrive at a new queue on a new
>>>>>>>>> >> satellite while your TCP is still trying to respond to whatever
>>>>>>>>> it
>>>>>>>>> >> thought the queue on the previous bird was doing. Read:
>>>>>>>>> Whatever your
>>>>>>>>> >> cwnd is immediately after a handover, it's probably not what it
>>>>>>>>> should be.
>>>>>>>>> >>
>>>>>>>>> >> I ran a somewhat hamstrung (sky view restricted) set of four
>>>>>>>>> Ookla
>>>>>>>>> >> speedtest.net tests each to five local servers. Average upload
>>>>>>>>> rate was
>>>>>>>>> >> 13 Mb/s, average down 75.5 Mb/s. Upload to the server of the
>>>>>>>>> ISP that
>>>>>>>>> >> Starlink seems to be buying its local connectivity from (Vocus
>>>>>>>>> Group)
>>>>>>>>> >> varied between 3.04 and 14.38 Mb/s, download between 23.33 and
>>>>>>>>> 52.22
>>>>>>>>> >> Mb/s, with RTTs between 37 and 56 ms not correlating well to
>>>>>>>>> rates
>>>>>>>>> >> observed. In fact, they were the ISP with consistently the
>>>>>>>>> worst rates.
>>>>>>>>> >>
>>>>>>>>> >> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s up
>>>>>>>>> and
>>>>>>>>> >> between 106.5 and 183.8 Mb/s down, again with RTTs badly
>>>>>>>>> correlating
>>>>>>>>> >> with rates. Average RTT was the same as for Vocus.
>>>>>>>>> >>
>>>>>>>>> >> Note the variation though: More or less a factor of two between
>>>>>>>>> highest
>>>>>>>>> >> and lowest rates for each ISP. Did MyRepublic just get lucky in
>>>>>>>>> my
>>>>>>>>> >> tests? Or is there something systematic behind this? Way too
>>>>>>>>> few tests
>>>>>>>>> >> to tell.
>>>>>>>>> >>
>>>>>>>>> >> What these tests do is establish a ballpark.
>>>>>>>>> >>
>>>>>>>>> >> I'm currently repeating tests with dish placed on a trestle
>>>>>>>>> closer to
>>>>>>>>> >> the heavens. This seems to have translated into fewer outages /
>>>>>>>>> ping
>>>>>>>>> >> losses (around 1/4 of what I had yesterday with dishy on the
>>>>>>>>> ground on
>>>>>>>>> >> my deck). Still good enough for a lengthy video Skype call with
>>>>>>>>> my folks
>>>>>>>>> >> in Germany, although they did comment about reduced video
>>>>>>>>> quality. But
>>>>>>>>> >> maybe that was the lighting or the different background as I
>>>>>>>>> wasn't in
>>>>>>>>> >> my usual spot with my laptop when I called them.
>>>>>>>>> >
>>>>>>>>> > Clear view of the sky is king for Starlink reliability. I've got
>>>>>>>>> my dishy mounted on the back fence, looking up over an empty field, so it's
>>>>>>>>> pretty much best-case scenario here.
>>>>>>>>> >>
>>>>>>>>> >>
>>>>>>>>> >> --
>>>>>>>>> >>
>>>>>>>>> >> ****************************************************************
>>>>>>>>> >> Dr. Ulrich Speidel
>>>>>>>>> >>
>>>>>>>>> >> School of Computer Science
>>>>>>>>> >>
>>>>>>>>> >> Room 303S.594 (City Campus)
>>>>>>>>> >>
>>>>>>>>> >> The University of Auckland
>>>>>>>>> >> u.speidel@auckland.ac.nz
>>>>>>>>> >> http://www.cs.auckland.ac.nz/~ulrich/
>>>>>>>>> >> ****************************************************************
>>>>>>>>> >>
>>>>>>>>> >>
>>>>>>>>> >>
[-- Attachment #1.1.2: Type: text/html, Size: 21806 bytes --]
[-- Attachment #1.2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #1.3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #1.4: image.png --]
[-- Type: image/png, Size: 296989 bytes --]
[-- Attachment #1.5: image.png --]
[-- Type: image/png, Size: 222655 bytes --]
[-- Attachment #2: tcp_nup-2023-01-13T163813.572871.starlink_vs_irtt.flent.gz --]
[-- Type: application/gzip, Size: 336010 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 22:42 ` Jonathan Bennett
@ 2023-01-13 22:49 ` Dave Taht
0 siblings, 0 replies; 49+ messages in thread
From: Dave Taht @ 2023-01-13 22:49 UTC (permalink / raw)
To: Jonathan Bennett; +Cc: Nathan Owens, starlink
[-- Attachment #1.1.1: Type: text/plain, Size: 17821 bytes --]
Selfishly, if you could name the plot more after yourself, or your location,
it would help.
OK, in Jonathan's case, I got an RTT sample out of flent. Sorry guys, this
is the same lousy TCP behavior as before, with a fixed-size buffer, and
bandwidth that varies stepwise every 15 seconds, admittedly with much less
variance than we've seen before.
Waveform's test is broken, is being optimized for, or there's something
crazy going on. I so wanted to believe...
Download of that cap is almost done.
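For taking apart the irtt JSON by hand, per the "hopeless at json" lament
quoted below, here is a minimal sketch. It assumes irtt's JSON output has a
"round_trips" array whose entries carry a "lost" field ("false", "true",
"true_down", "true_up"), and reports the loss rate plus how bursty the
losses were:

# Loss rate and loss-run lengths from an irtt JSON file.
import json, sys
from itertools import groupby

with open(sys.argv[1]) as f:              # e.g. whatever.json from irtt
    trips = json.load(f)["round_trips"]

lost = [t.get("lost", "false") != "false" for t in trips]
runs = [len(list(g)) for flag, g in groupby(lost) if flag]
print(f"{sum(lost)}/{len(lost)} lost ({100 * sum(lost) / len(lost):.2f}%)")
if runs:
    print(f"{len(runs)} loss runs, longest burst: {max(runs)} packets")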
On Fri, Jan 13, 2023 at 2:42 PM Jonathan Bennett <
jonathanbennett@hackaday.com> wrote:
> Flent run using the git version:
> ./run-flent -H atlanta.starlink.taht.net -t starlink_vs_irtt
> --step-size=.05 --socket-stats --test-parameter=upload_streams=4 tcp_nup
>
>
> Jonathan Bennett
> Hackaday.com
>
>
> On Fri, Jan 13, 2023 at 4:33 PM Dave Taht <dave.taht@gmail.com> wrote:
>
>> thank you all. in both the flent cases there appears to be no tcp_rtt
>> statistics, did you run with --socket-stats?
>>
>> (That seems to be a new bug, both with sampling correctly, and it's
>> either in newer linuxes or in flent itself. I hate to ask ya but could you
>> install the git version of flent?)
>>
>> Thank you for the packet capture!!!! I'm still downloading.
>>
>> Anyway, the "celabon" flent plot clearly shows the inverse relationship
>> between latency and throughput still in this starlink terminal, so there is
>> no AQM in play there, darn it. (in my fq_codeled world the latency stays
>> flat, only the throughput changes)
>>
>> So I am incredibly puzzled now at the ostensibly awesome waveform test
>> result (and need to look at that capture, and/or get tcp rtt stats)
>>
>> The other plot (luis's) shows incredibly consistent latency and bounded
>> throughput at about 6mbit.
>>
>> Patiently awaiting that download to complete.
>>
>>
>> On Fri, Jan 13, 2023 at 2:10 PM Jonathan Bennett via Starlink <
>> starlink@lists.bufferbloat.net> wrote:
>>
>>> The irtt run finished a few seconds before the flent run, but here are
>>> the results:
>>>
>>>
>>> https://drive.google.com/file/d/1FKve13ssUMW1LLWOXLM2931Yx6uMHw8K/view?usp=share_link
>>>
>>> https://drive.google.com/file/d/1ZXd64A0pfUedLr3FyhDNTHA7vxv8S2Gk/view?usp=share_link
>>>
>>> https://drive.google.com/file/d/1rx64UPQHHz3IMNiJtb1oFqtqw2DvhvEE/view?usp=share_link
>>>
>>>
>>> [image: image.png]
>>> [image: image.png]
>>>
>>>
>>> Jonathan Bennett
>>> Hackaday.com
>>>
>>>
>>> On Fri, Jan 13, 2023 at 3:30 PM Nathan Owens via Starlink <
>>> starlink@lists.bufferbloat.net> wrote:
>>>
>>>> Here's Luis's run -- the top line below the edge of the graph is 200ms
>>>> [image: Screenshot 2023-01-13 at 1.30.03 PM.png]
>>>>
>>>>
>>>> On Fri, Jan 13, 2023 at 1:25 PM Luis A. Cornejo <
>>>> luis.a.cornejo@gmail.com> wrote:
>>>>
>>>>> Dave,
>>>>>
>>>>> Here is a run the way I think you wanted it.
>>>>>
>>>>> irtt running for 5 min to your dallas server, followed by a waveform
>>>>> test, then a few seconds of inactivity, cloudflare test, a few more secs of
>>>>> nothing, flent test to dallas. Packet capture is currently uploading (will
>>>>> be done in 20 min or so), irtt JSON also in there (.zip file):
>>>>>
>>>>>
>>>>> https://drive.google.com/drive/folders/1FLWqrzNcM8aK-ZXQywNkZGFR81Fnzn-F?usp=share_link
>>>>>
>>>>> -Luis
>>>>>
>>>>> On Fri, Jan 13, 2023 at 2:50 PM Dave Taht via Starlink <
>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Jan 13, 2023 at 12:30 PM Nathan Owens <nathan@nathan.io>
>>>>>> wrote:
>>>>>>
>>>>>>> Here's the data visualization for Johnathan's Data
>>>>>>>
>>>>>>> [image: Screenshot 2023-01-13 at 12.29.15 PM.png]
>>>>>>>
>>>>>>> You can see the path change at :12, :27, :42, :57 after the minute.
>>>>>>> Some paths are clearly busier than others with increased loss, latency, and
>>>>>>> jitter.
>>>>>>>
>>>>>>
>>>>>> I am so glad to see loss and bounded delay here. Also a bit of rigor
>>>>>> regarding what traffic was active locally vs on the path would be nice,
>>>>>> although it seems to line up with the known 15s starlink switchover thing
>>>>>> (need a name for this), in this case, doing a few speedtests
>>>>>> while that irtt is running would show the impact(s) of whatever else
>>>>>> they are up to.
>>>>>>
>>>>>> It would also be my hope that the loss distribution in the middle
>>>>>> portion of this data is good, not bursty, but we don't have a tool to take
>>>>>> apart that. (I am so hopeless at json)
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jan 13, 2023 at 10:09 AM Nathan Owens <nathan@nathan.io>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I’ll run my visualization code on this result this afternoon and
>>>>>>>> report back!
>>>>>>>>
>>>>>>>> On Fri, Jan 13, 2023 at 9:41 AM Jonathan Bennett via Starlink <
>>>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>>>
>>>>>>>>> The irtt command, run with normal, light usage:
>>>>>>>>> https://drive.google.com/file/d/1SiVCiUYnx7nDTxIVOY5w-z20S2O059rA/view?usp=share_link
>>>>>>>>>
>>>>>>>>> Jonathan Bennett
>>>>>>>>> Hackaday.com
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Jan 13, 2023 at 11:26 AM Dave Taht <dave.taht@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> packet caps would be nice... all this is very exciting news.
>>>>>>>>>>
>>>>>>>>>> I'd so love for one or more of y'all reporting such great uplink
>>>>>>>>>> results nowadays to duplicate and re-plot the original irtt tests
>>>>>>>>>> we
>>>>>>>>>> did:
>>>>>>>>>>
>>>>>>>>>> irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net
>>>>>>>>>> -o whatever.json
>>>>>>>>>>
>>>>>>>>>> They MUST have changed their scheduling to get such amazing uplink
>>>>>>>>>> results, in addition to better queue management.
>>>>>>>>>>
>>>>>>>>>> (for the record, my servers are de, london, fremont, sydney,
>>>>>>>>>> dallas,
>>>>>>>>>> newark, atlanta, singapore, mumbai)
>>>>>>>>>>
>>>>>>>>>> There's an R and gnuplot script for plotting that output around
>>>>>>>>>> here
>>>>>>>>>> somewhere (I have largely personally put down the starlink
>>>>>>>>>> project,
>>>>>>>>>> loaning out mine) - that went by on this list... I should have
>>>>>>>>>> written
>>>>>>>>>> a blog entry so I can find that stuff again.
>>>>>>>>>>
>>>>>>>>>> On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink
>>>>>>>>>> <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>> >
>>>>>>>>>> >
>>>>>>>>>> > On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink <
>>>>>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>> >>
>>>>>>>>>> >> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
>>>>>>>>>> >> >
>>>>>>>>>> >> > From Auckland, New Zealand, using a roaming subscription, it
>>>>>>>>>> puts me
>>>>>>>>>> >> > in touch with a server 2000 km away. OK then:
>>>>>>>>>> >> >
>>>>>>>>>> >> >
>>>>>>>>>> >> > IP address: nix six.
>>>>>>>>>> >> >
>>>>>>>>>> >> > My thoughts shall follow later.
>>>>>>>>>> >>
>>>>>>>>>> >> OK, so here we go.
>>>>>>>>>> >>
>>>>>>>>>> >> I'm always a bit skeptical when it comes to speed tests -
>>>>>>>>>> they're really
>>>>>>>>>> >> laden with so many caveats that it's not funny. I took our new
>>>>>>>>>> work
>>>>>>>>>> >> Starlink kit home in December to give it a try and the other
>>>>>>>>>> day finally
>>>>>>>>>> >> got around to set it up. It's on a roaming subscription
>>>>>>>>>> because our
>>>>>>>>>> >> badly built-up campus really isn't ideal in terms of a clear
>>>>>>>>>> view of the
>>>>>>>>>> >> sky. Oh - and did I mention that I used the Starlink Ethernet
>>>>>>>>>> adapter,
>>>>>>>>>> >> not the WiFi?
>>>>>>>>>> >>
>>>>>>>>>> >> Caveat 1: Location, location. I live in a place where the best
>>>>>>>>>> Starlink
>>>>>>>>>> >> promises is about 1/3 in terms of data rate you can actually
>>>>>>>>>> get from
>>>>>>>>>> >> fibre to the home at under half of Starlink's price. Read:
>>>>>>>>>> There are few
>>>>>>>>>> >> Starlink users around. I might be the only one in my suburb.
>>>>>>>>>> >>
>>>>>>>>>> >> Caveat 2: Auckland has three Starlink gateways close by:
>>>>>>>>>> Clevedon (which
>>>>>>>>>> >> is at a stretch daytrip cycling distance from here), Te Hana
>>>>>>>>>> and Puwera,
>>>>>>>>>> >> the most distant of the three and about 130 km away from me as
>>>>>>>>>> the crow
>>>>>>>>>> >> flies. Read: My dishy can use any satellite that any of these
>>>>>>>>>> three can
>>>>>>>>>> >> see, and then depending on where I put it and how much of the
>>>>>>>>>> southern
>>>>>>>>>> >> sky it can see, maybe also the one in Hinds, 840 km away,
>>>>>>>>>> although that
>>>>>>>>>> >> is obviously stretching it a bit. Either way, that's plenty of
>>>>>>>>>> options
>>>>>>>>>> >> for my bits to travel without needing a lot of handovers. Why?
>>>>>>>>>> Easy: If
>>>>>>>>>> >> your nearest teleport is close by, then the set of satellites
>>>>>>>>>> that the
>>>>>>>>>> >> teleport can see and the set that you can see is almost the
>>>>>>>>>> same, so you
>>>>>>>>>> >> can essentially stick with the same satellite while it's in
>>>>>>>>>> view for you
>>>>>>>>>> >> because it'll also be in view for the teleport. Pretty much
>>>>>>>>>> any bird
>>>>>>>>>> >> above you will do.
>>>>>>>>>> >>
>>>>>>>>>> >> And because I don't get a lot of competition from other users
>>>>>>>>>> in my area
>>>>>>>>>> >> vying for one of the few available satellites that can see
>>>>>>>>>> both us and
>>>>>>>>>> >> the teleport, this is about as good as it gets at 37S
>>>>>>>>>> latitude. If I'd
>>>>>>>>>> >> want it any better, I'd have to move a lot further south.
>>>>>>>>>> >>
>>>>>>>>>> >> It'd be interesting to hear from Jonathan what the
>>>>>>>>>> availability of home
>>>>>>>>>> >> broadband is like in the Dallas area. I note that it's at a
>>>>>>>>>> lower
>>>>>>>>>> >> latitude (33N) than Auckland, but the difference isn't huge. I
>>>>>>>>>> notice
>>>>>>>>>> >> two teleports each about 160 km away, which is also not too
>>>>>>>>>> bad. I also
>>>>>>>>>> >> note Starlink availability in the area is restricted at the
>>>>>>>>>> moment -
>>>>>>>>>> >> oversubscribed? But if Jonathan gets good data rates, then
>>>>>>>>>> that means
>>>>>>>>>> >> that competition for bird capacity can't be too bad - for
>>>>>>>>>> whatever reason.
>>>>>>>>>> >
>>>>>>>>>> > I'm in Southwest Oklahoma, but Dallas is the nearby Starlink
>>>>>>>>>> gateway. In cities, like Dallas, and Lawton where I live, there are good
>>>>>>>>>> broadband options. But there are also many people that live outside cities,
>>>>>>>>>> and the options are much worse. The low density userbase in rural Oklahoma
>>>>>>>>>> and Texas is probably ideal conditions for Starlink.
>>>>>>>>>> >>
>>>>>>>>>> >>
>>>>>>>>>> >> Caveat 3: Backhaul. There isn't just one queue between me and
>>>>>>>>>> whatever I
>>>>>>>>>> >> talk to in terms of my communications. Traceroute shows about
>>>>>>>>>> 10 hops
>>>>>>>>>> >> between me and the University of Auckland via Starlink. That's
>>>>>>>>>> 10
>>>>>>>>>> >> queues, not one. Many of them will have cross traffic. So it's
>>>>>>>>>> a bit
>>>>>>>>>> >> hard to tell where our packets really get to wait or where
>>>>>>>>>> they get
>>>>>>>>>> >> dropped. The insidious bit here is that a lot of them will be
>>>>>>>>>> between 1
>>>>>>>>>> >> Gb/s and 10 Gb/s links, and with a bit of cross traffic, they
>>>>>>>>>> can all
>>>>>>>>>> >> turn into bottlenecks. This isn't like a narrowband GEO link
>>>>>>>>>> of a few
>>>>>>>>>> >> Mb/s where it's obvious where the dominant long latency
>>>>>>>>>> bottleneck in
>>>>>>>>>> >> your TCP connection's path is. Read: It's pretty hard to tell
>>>>>>>>>> whether a
>>>>>>>>>> >> drop in "speed" is due to a performance issue in the Starlink
>>>>>>>>>> system or
>>>>>>>>>> >> somewhere between Starlink's systems and the target system.
>>>>>>>>>> >>
>>>>>>>>>> >> I see RTTs here between 20 ms and 250 ms, where the physical
>>>>>>>>>> latency
>>>>>>>>>> >> should be under 15 ms. So there's clearly a bit of buffer here
>>>>>>>>>> along the
>>>>>>>>>> >> chain that occasionally fills up.
>>>>>>>>>> >>
>>>>>>>>>> >> Caveat 4: Handovers. Handover between birds and teleports is
>>>>>>>>>> inevitably
>>>>>>>>>> >> associated with a change in RTT and in most cases also
>>>>>>>>>> available
>>>>>>>>>> >> bandwidth. Plus your packets now arrive at a new queue on a new
>>>>>>>>>> >> satellite while your TCP is still trying to respond to
>>>>>>>>>> whatever it
>>>>>>>>>> >> thought the queue on the previous bird was doing. Read:
>>>>>>>>>> Whatever your
>>>>>>>>>> >> cwnd is immediately after a handover, it's probably not what
>>>>>>>>>> it should be.
>>>>>>>>>> >>
>>>>>>>>>> >> I ran a somewhat hamstrung (sky view restricted) set of four
>>>>>>>>>> Ookla
>>>>>>>>>> >> speedtest.net tests each to five local servers. Average
>>>>>>>>>> upload rate was
>>>>>>>>>> >> 13 Mb/s, average down 75.5 Mb/s. Upload to the server of the
>>>>>>>>>> ISP that
>>>>>>>>>> >> Starlink seems to be buying its local connectivity from (Vocus
>>>>>>>>>> Group)
>>>>>>>>>> >> varied between 3.04 and 14.38 Mb/s, download between 23.33 and
>>>>>>>>>> 52.22
>>>>>>>>>> >> Mb/s, with RTTs between 37 and 56 ms not correlating well to
>>>>>>>>>> rates
>>>>>>>>>> >> observed. In fact, they were the ISP with consistently the
>>>>>>>>>> worst rates.
>>>>>>>>>> >>
>>>>>>>>>> >> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s
>>>>>>>>>> up and
>>>>>>>>>> >> between 106.5 and 183.8 Mb/s down, again with RTTs badly
>>>>>>>>>> correlating
>>>>>>>>>> >> with rates. Average RTT was the same as for Vocus.
>>>>>>>>>> >>
>>>>>>>>>> >> Note the variation though: More or less a factor of two
>>>>>>>>>> between highest
>>>>>>>>>> >> and lowest rates for each ISP. Did MyRepublic just get lucky
>>>>>>>>>> in my
>>>>>>>>>> >> tests? Or is there something systematic behind this? Way too
>>>>>>>>>> few tests
>>>>>>>>>> >> to tell.
>>>>>>>>>> >>
>>>>>>>>>> >> What these tests do is establish a ballpark.
>>>>>>>>>> >>
>>>>>>>>>> >> I'm currently repeating tests with dish placed on a trestle
>>>>>>>>>> closer to
>>>>>>>>>> >> the heavens. This seems to have translated into fewer outages
>>>>>>>>>> / ping
>>>>>>>>>> >> losses (around 1/4 of what I had yesterday with dishy on the
>>>>>>>>>> ground on
>>>>>>>>>> >> my deck). Still good enough for a lengthy video Skype call
>>>>>>>>>> with my folks
>>>>>>>>>> >> in Germany, although they did comment about reduced video
>>>>>>>>>> quality. But
>>>>>>>>>> >> maybe that was the lighting or the different background as I
>>>>>>>>>> wasn't in
>>>>>>>>>> >> my usual spot with my laptop when I called them.
>>>>>>>>>> >
>>>>>>>>>> > Clear view of the sky is king for Starlink reliability. I've
>>>>>>>>>> got my dishy mounted on the back fence, looking up over an empty field, so
>>>>>>>>>> it's pretty much best-case scenario here.
>>>>>>>>>> >>
>>>>>>>>>> >>
>>>>>>>>>> >> --
>>>>>>>>>> >>
>>>>>>>>>> >>
>>>>>>>>>> ****************************************************************
>>>>>>>>>> >> Dr. Ulrich Speidel
>>>>>>>>>> >>
>>>>>>>>>> >> School of Computer Science
>>>>>>>>>> >>
>>>>>>>>>> >> Room 303S.594 (City Campus)
>>>>>>>>>> >>
>>>>>>>>>> >> The University of Auckland
>>>>>>>>>> >> u.speidel@auckland.ac.nz
>>>>>>>>>> >> http://www.cs.auckland.ac.nz/~ulrich/
>>>>>>>>>> >>
>>>>>>>>>> ****************************************************************
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #1.1.2: Type: text/html, Size: 23183 bytes --]
[-- Attachment #1.2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #1.3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #1.4: image.png --]
[-- Type: image/png, Size: 296989 bytes --]
[-- Attachment #1.5: image.png --]
[-- Type: image/png, Size: 222655 bytes --]
[-- Attachment #2: bennet_tcp_rtt.png --]
[-- Type: image/png, Size: 202069 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 17:02 ` Jonathan Bennett
2023-01-13 17:26 ` Dave Taht
@ 2023-01-13 22:51 ` Ulrich Speidel
1 sibling, 0 replies; 49+ messages in thread
From: Ulrich Speidel @ 2023-01-13 22:51 UTC (permalink / raw)
To: Jonathan Bennett; +Cc: starlink
[-- Attachment #1.1: Type: text/plain, Size: 2754 bytes --]
On 14/01/2023 6:02 am, Jonathan Bennett wrote:
>
> It'd be interesting to hear from Jonathan what the availability of
> home
> broadband is like in the Dallas area. I note that it's at a lower
> latitude (33N) than Auckland, but the difference isn't huge. I notice
> two teleports each about 160 km away, which is also not too bad. I
> also
> note Starlink availability in the area is restricted at the moment -
> oversubscribed? But if Jonathan gets good data rates, then that means
> that competition for bird capacity can't be too bad - for whatever
> reason.
>
> I'm in Southwest Oklahoma, but Dallas is the nearby Starlink gateway.
> In cities, like Dallas, and Lawton where I live, there are good
> broadband options. But there are also many people that live outside
> cities, and the options are much worse. The low density userbase in
> rural Oklahoma and Texas is probably ideal conditions for Starlink.
Ah. So what actually happens here is that you're relatively close to the
Springer teleport (~120 km), and the IP address that Starlink assigns to
you gets geolocated to Dallas. Given the lack of current Starlink
availability in the region, which doesn't correlate with population
density the way it does in other parts of the US, we're probably talking
lots of satellite capacity for very few users here. Plus the extra fibre
latency down to Dallas is peanuts, which explains your data. You're at
34-something degrees north, which is not that different from Auckland.
> Clear view of the sky is king for Starlink reliability. I've got my
> dishy mounted on the back fence, looking up over an empty field, so
> it's pretty much best-case scenario here.
So-so. Your dishy probably orients itself north-facing, but you'll still
be talking via your closest teleport or teleports, so Springer or Dumas.
If Starlink can talk to satellites that are straight overhead, it will -
I've attached a photo of my current setup at home, dishy south-facing
onto my roof. I've pinged for almost a day now in this configuration and
have < 1% loss. Practically all loss events are one-offs, i.e., outages
are under 2 s.
Before that, I had the dishy on the ground in the same position, and had
around 2% ping loss. So being able to see more of the sky makes a bit of
a difference obviously, but it's not huge. Internet was "usable" in both
configurations.
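A minimal sketch of how loss and outage numbers like these can be pulled
out of a saved ping log. It assumes Linux-style ping output with one
"icmp_seq=N" per reply and a 1-second send interval, so a run of N missing
sequence numbers is roughly an N-second outage:

# Loss rate and outage durations from a ping log.
import re, sys

seqs = {int(m.group(1)) for m in
        re.finditer(r"icmp_seq=(\d+)", open(sys.argv[1]).read())}
expected = range(min(seqs), max(seqs) + 1)
missing = [s for s in expected if s not in seqs]
print(f"loss: {len(missing)}/{len(expected)} "
      f"({100 * len(missing) / len(expected):.2f}%)")

# Consecutive missing seqs = one outage; run length ~ seconds at 1 s interval.
outages, run = [], 0
for s in expected:
    if s in seqs:
        if run:
            outages.append(run)
        run = 0
    else:
        run += 1
if run:
    outages.append(run)
print("longest outages (s):", sorted(outages, reverse=True)[:10])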
--
****************************************************************
Dr. Ulrich Speidel
School of Computer Science
Room 303S.594 (City Campus)
The University of Auckland
u.speidel@auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/
****************************************************************
[-- Attachment #1.2: Type: text/html, Size: 4137 bytes --]
[-- Attachment #2: 20230114_074033.jpg --]
[-- Type: image/*, Size: 4480431 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-13 22:09 ` Jonathan Bennett
2023-01-13 22:30 ` Luis A. Cornejo
2023-01-13 22:32 ` Dave Taht
@ 2023-01-13 23:44 ` Dave Taht
[not found] ` <CALjsLJv5cbfHfkxqHnbjxoVHczspYvxc_jrshzs1CpHLEDWyew@mail.gmail.com>
2 siblings, 1 reply; 49+ messages in thread
From: Dave Taht @ 2023-01-13 23:44 UTC (permalink / raw)
To: Jonathan Bennett; +Cc: Nathan Owens, starlink, Sina Khanifar
[-- Attachment #1.1.1: Type: text/plain, Size: 15719 bytes --]
I am forced to conclude that Waveform's upload test is broken in some cases
and has been for some time. All the joy many have been feeling about their
uplink speeds has to be cast away. Felt good, though, didn't it?
There is a *slight* possibility that there is some FQ in the Starlink
network on TCP port 443. The first plot shows normal RTT growth and loss
for CUBIC; the second, a low-rate flow that is holding the line... but I
didn't check to see whether these were sequential or parallel.
The last is a cwnd plot, clearly showing the CUBIC sawtooth on the upload.
It's weirdly nice to be able to follow a port 443 stream, see the TLS
handshake pass the website name en clair, watch the rest go dark, and
still trace the behaviors.
We're so going to lose this analytical ability with QUIC. I'm enjoying it
while we still can.
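The sawtooth itself follows from CUBIC's window function (RFC 8312),
W(t) = C*(t - K)^3 + Wmax with K = cbrt(Wmax*(1 - beta)/C), C = 0.4 and
beta = 0.7. A toy model of that shape, purely illustrative and not
anyone's actual stack:

# Toy CUBIC sawtooth: the window grows along the cubic, collapses on "loss".
C_CUBIC, BETA = 0.4, 0.7

def cubic_window(t, wmax):
    # RFC 8312 window curve; W(0) works out to BETA * wmax by construction.
    k = (wmax * (1 - BETA) / C_CUBIC) ** (1 / 3)
    return C_CUBIC * (t - k) ** 3 + wmax

wmax, t = 100.0, 0.0      # made-up starting point: segments at the last loss
for _ in range(40):
    w = cubic_window(t, wmax)
    print(f"t={t:4.1f}s  cwnd={w:6.1f}")
    if w >= 1.2 * wmax:   # pretend the queue overflows ~20% past Wmax
        wmax, t = w, 0.0  # record the new Wmax; window restarts at BETA * Wmax
    else:
        t += 0.5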
On Fri, Jan 13, 2023 at 2:10 PM Jonathan Bennett via Starlink <
starlink@lists.bufferbloat.net> wrote:
> The irtt run finished a few seconds before the flent run, but here are the
> results:
>
>
> https://drive.google.com/file/d/1FKve13ssUMW1LLWOXLM2931Yx6uMHw8K/view?usp=share_link
>
> https://drive.google.com/file/d/1ZXd64A0pfUedLr3FyhDNTHA7vxv8S2Gk/view?usp=share_link
>
> https://drive.google.com/file/d/1rx64UPQHHz3IMNiJtb1oFqtqw2DvhvEE/view?usp=share_link
>
>
> [image: image.png]
> [image: image.png]
>
>
> Jonathan Bennett
> Hackaday.com
>
>
> On Fri, Jan 13, 2023 at 3:30 PM Nathan Owens via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
>> Here's Luis's run -- the top line below the edge of the graph is 200ms
>> [image: Screenshot 2023-01-13 at 1.30.03 PM.png]
>>
>>
>> On Fri, Jan 13, 2023 at 1:25 PM Luis A. Cornejo <luis.a.cornejo@gmail.com>
>> wrote:
>>
>>> Dave,
>>>
>>> Here is a run the way I think you wanted it.
>>>
>>> irtt running for 5 min to your dallas server, followed by a waveform
>>> test, then a few seconds of inactivity, cloudflare test, a few more secs of
>>> nothing, flent test to dallas. Packet capture is currently uploading (will
>>> be done in 20 min or so), irtt JSON also in there (.zip file):
>>>
>>>
>>> https://drive.google.com/drive/folders/1FLWqrzNcM8aK-ZXQywNkZGFR81Fnzn-F?usp=share_link
>>>
>>> -Luis
>>>
>>> On Fri, Jan 13, 2023 at 2:50 PM Dave Taht via Starlink <
>>> starlink@lists.bufferbloat.net> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Jan 13, 2023 at 12:30 PM Nathan Owens <nathan@nathan.io> wrote:
>>>>
>>>>> Here's the data visualization for Johnathan's Data
>>>>>
>>>>> [image: Screenshot 2023-01-13 at 12.29.15 PM.png]
>>>>>
>>>>> You can see the path change at :12, :27, :42, :57 after the minute.
>>>>> Some paths are clearly busier than others with increased loss, latency, and
>>>>> jitter.
>>>>>
>>>>
>>>> I am so glad to see loss and bounded delay here. Also a bit of rigor
>>>> regarding what traffic was active locally vs on the path would be nice,
>>>> although it seems to line up with the known 15s starlink switchover thing
>>>> (need a name for this), in this case, doing a few speedtests
>>>> while that irtt is running would show the impact(s) of whatever else
>>>> they are up to.
>>>>
>>>> It would also be my hope that the loss distribution in the middle
>>>> portion of this data is good, not bursty, but we don't have a tool to take
>>>> apart that. (I am so hopeless at json)
>>>>
>>>>
>>>>
>>>>>
>>>>>
>>>>> On Fri, Jan 13, 2023 at 10:09 AM Nathan Owens <nathan@nathan.io>
>>>>> wrote:
>>>>>
>>>>>> I’ll run my visualization code on this result this afternoon and
>>>>>> report back!
>>>>>>
>>>>>> On Fri, Jan 13, 2023 at 9:41 AM Jonathan Bennett via Starlink <
>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>
>>>>>>> The irtt command, run with normal, light usage:
>>>>>>> https://drive.google.com/file/d/1SiVCiUYnx7nDTxIVOY5w-z20S2O059rA/view?usp=share_link
>>>>>>>
>>>>>>> Jonathan Bennett
>>>>>>> Hackaday.com
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jan 13, 2023 at 11:26 AM Dave Taht <dave.taht@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> packet caps would be nice... all this is very exciting news.
>>>>>>>>
>>>>>>>> I'd so love for one or more of y'all reporting such great uplink
>>>>>>>> results nowadays to duplicate and re-plot the original irtt tests we
>>>>>>>> did:
>>>>>>>>
>>>>>>>> irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net -o
>>>>>>>> whatever.json
>>>>>>>>
>>>>>>>> They MUST have changed their scheduling to get such amazing uplink
>>>>>>>> results, in addition to better queue management.
>>>>>>>>
>>>>>>>> (for the record, my servers are de, london, fremont, sydney, dallas,
>>>>>>>> newark, atlanta, singapore, mumbai)
>>>>>>>>
>>>>>>>> There's an R and gnuplot script for plotting that output around here
>>>>>>>> somewhere (I have largely personally put down the starlink project,
>>>>>>>> loaning out mine) - that went by on this list... I should have
>>>>>>>> written
>>>>>>>> a blog entry so I can find that stuff again.
>>>>>>>>
>>>>>>>> On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink
>>>>>>>> <starlink@lists.bufferbloat.net> wrote:
>>>>>>>> >
>>>>>>>> >
>>>>>>>> > On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink <
>>>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>>> >>
>>>>>>>> >> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
>>>>>>>> >> >
>>>>>>>> >> > From Auckland, New Zealand, using a roaming subscription, it
>>>>>>>> puts me
>>>>>>>> >> > in touch with a server 2000 km away. OK then:
>>>>>>>> >> >
>>>>>>>> >> >
>>>>>>>> >> > IP address: nix six.
>>>>>>>> >> >
>>>>>>>> >> > My thoughts shall follow later.
>>>>>>>> >>
>>>>>>>> >> OK, so here we go.
>>>>>>>> >>
>>>>>>>> >> I'm always a bit skeptical when it comes to speed tests -
>>>>>>>> they're really
>>>>>>>> >> laden with so many caveats that it's not funny. I took our new
>>>>>>>> work
>>>>>>>> >> Starlink kit home in December to give it a try and the other day
>>>>>>>> finally
>>>>>>>> >> got around to set it up. It's on a roaming subscription because
>>>>>>>> our
>>>>>>>> >> badly built-up campus really isn't ideal in terms of a clear
>>>>>>>> view of the
>>>>>>>> >> sky. Oh - and did I mention that I used the Starlink Ethernet
>>>>>>>> adapter,
>>>>>>>> >> not the WiFi?
>>>>>>>> >>
>>>>>>>> >> Caveat 1: Location, location. I live in a place where the best
>>>>>>>> Starlink
>>>>>>>> >> promises is about 1/3 in terms of data rate you can actually get
>>>>>>>> from
>>>>>>>> >> fibre to the home at under half of Starlink's price. Read: There
>>>>>>>> are few
>>>>>>>> >> Starlink users around. I might be the only one in my suburb.
>>>>>>>> >>
>>>>>>>> >> Caveat 2: Auckland has three Starlink gateways close by:
>>>>>>>> Clevedon (which
>>>>>>>> >> is at a stretch daytrip cycling distance from here), Te Hana and
>>>>>>>> Puwera,
>>>>>>>> >> the most distant of the three and about 130 km away from me as
>>>>>>>> the crow
>>>>>>>> >> flies. Read: My dishy can use any satellite that any of these
>>>>>>>> three can
>>>>>>>> >> see, and then depending on where I put it and how much of the
>>>>>>>> southern
>>>>>>>> >> sky it can see, maybe also the one in Hinds, 840 km away,
>>>>>>>> although that
>>>>>>>> >> is obviously stretching it a bit. Either way, that's plenty of
>>>>>>>> options
>>>>>>>> >> for my bits to travel without needing a lot of handovers. Why?
>>>>>>>> Easy: If
>>>>>>>> >> your nearest teleport is close by, then the set of satellites
>>>>>>>> that the
>>>>>>>> >> teleport can see and the set that you can see is almost the
>>>>>>>> same, so you
>>>>>>>> >> can essentially stick with the same satellite while it's in view
>>>>>>>> for you
>>>>>>>> >> because it'll also be in view for the teleport. Pretty much any
>>>>>>>> bird
>>>>>>>> >> above you will do.
>>>>>>>> >>
>>>>>>>> >> And because I don't get a lot of competition from other users in
>>>>>>>> my area
>>>>>>>> >> vying for one of the few available satellites that can see both
>>>>>>>> us and
>>>>>>>> >> the teleport, this is about as good as it gets at 37S latitude.
>>>>>>>> If I'd
>>>>>>>> >> want it any better, I'd have to move a lot further south.
>>>>>>>> >>
>>>>>>>> >> It'd be interesting to hear from Jonathan what the availability
>>>>>>>> of home
>>>>>>>> >> broadband is like in the Dallas area. I note that it's at a lower
>>>>>>>> >> latitude (33N) than Auckland, but the difference isn't huge. I
>>>>>>>> notice
>>>>>>>> >> two teleports each about 160 km away, which is also not too bad.
>>>>>>>> I also
>>>>>>>> >> note Starlink availability in the area is restricted at the
>>>>>>>> moment -
>>>>>>>> >> oversubscribed? But if Jonathan gets good data rates, then that
>>>>>>>> means
>>>>>>>> >> that competition for bird capacity can't be too bad - for
>>>>>>>> whatever reason.
>>>>>>>> >
>>>>>>>> > I'm in Southwest Oklahoma, but Dallas is the nearby Starlink
>>>>>>>> gateway. In cities, like Dallas, and Lawton where I live, there are good
>>>>>>>> broadband options. But there are also many people that live outside cities,
>>>>>>>> and the options are much worse. The low density userbase in rural Oklahoma
>>>>>>>> and Texas is probably ideal conditions for Starlink.
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >> Caveat 3: Backhaul. There isn't just one queue between me and
>>>>>>>> whatever I
>>>>>>>> >> talk to in terms of my communications. Traceroute shows about 10
>>>>>>>> hops
>>>>>>>> >> between me and the University of Auckland via Starlink. That's 10
>>>>>>>> >> queues, not one. Many of them will have cross traffic. So it's a
>>>>>>>> bit
>>>>>>>> >> hard to tell where our packets really get to wait or where they
>>>>>>>> get
>>>>>>>> >> dropped. The insidious bit here is that a lot of them will be
>>>>>>>> between 1
>>>>>>>> >> Gb/s and 10 Gb/s links, and with a bit of cross traffic, they
>>>>>>>> can all
>>>>>>>> >> turn into bottlenecks. This isn't like a narrowband GEO link of
>>>>>>>> a few
>>>>>>>> >> Mb/s where it's obvious where the dominant long latency
>>>>>>>> bottleneck in
>>>>>>>> >> your TCP connection's path is. Read: It's pretty hard to tell
>>>>>>>> whether a
>>>>>>>> >> drop in "speed" is due to a performance issue in the Starlink
>>>>>>>> system or
>>>>>>>> >> somewhere between Starlink's systems and the target system.
>>>>>>>> >>
>>>>>>>> >> I see RTTs here between 20 ms and 250 ms, where the physical
>>>>>>>> latency
>>>>>>>> >> should be under 15 ms. So there's clearly a bit of buffer here
>>>>>>>> along the
>>>>>>>> >> chain that occasionally fills up.
>>>>>>>> >>
>>>>>>>> >> Caveat 4: Handovers. Handover between birds and teleports is
>>>>>>>> inevitably
>>>>>>>> >> associated with a change in RTT and in most cases also available
>>>>>>>> >> bandwidth. Plus your packets now arrive at a new queue on a new
>>>>>>>> >> satellite while your TCP is still trying to respond to whatever
>>>>>>>> it
>>>>>>>> >> thought the queue on the previous bird was doing. Read: Whatever
>>>>>>>> your
>>>>>>>> >> cwnd is immediately after a handover, it's probably not what it
>>>>>>>> should be.
>>>>>>>> >>
>>>>>>>> >> I ran a somewhat hamstrung (sky view restricted) set of four
>>>>>>>> Ookla
>>>>>>>> >> speedtest.net tests each to five local servers. Average upload
>>>>>>>> rate was
>>>>>>>> >> 13 Mb/s, average down 75.5 Mb/s. Upload to the server of the ISP
>>>>>>>> that
>>>>>>>> >> Starlink seems to be buying its local connectivity from (Vocus
>>>>>>>> Group)
>>>>>>>> >> varied between 3.04 and 14.38 Mb/s, download between 23.33 and
>>>>>>>> 52.22
>>>>>>>> >> Mb/s, with RTTs between 37 and 56 ms not correlating well to
>>>>>>>> rates
>>>>>>>> >> observed. In fact, they were the ISP with consistently the worst
>>>>>>>> rates.
>>>>>>>> >>
>>>>>>>> >> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s up
>>>>>>>> and
>>>>>>>> >> between 106.5 and 183.8 Mb/s down, again with RTTs badly
>>>>>>>> correlating
>>>>>>>> >> with rates. Average RTT was the same as for Vocus.
>>>>>>>> >>
>>>>>>>> >> Note the variation though: More or less a factor of two between
>>>>>>>> highest
>>>>>>>> >> and lowest rates for each ISP. Did MyRepublic just get lucky in
>>>>>>>> my
>>>>>>>> >> tests? Or is there something systematic behind this? Way too few
>>>>>>>> tests
>>>>>>>> >> to tell.
>>>>>>>> >>
>>>>>>>> >> What these tests do is establish a ballpark.
>>>>>>>> >>
>>>>>>>> >> I'm currently repeating tests with dish placed on a trestle
>>>>>>>> closer to
>>>>>>>> >> the heavens. This seems to have translated into fewer outages /
>>>>>>>> ping
>>>>>>>> >> losses (around 1/4 of what I had yesterday with dishy on the
>>>>>>>> ground on
>>>>>>>> >> my deck). Still good enough for a lengthy video Skype call with
>>>>>>>> my folks
>>>>>>>> >> in Germany, although they did comment about reduced video
>>>>>>>> quality. But
>>>>>>>> >> maybe that was the lighting or the different background as I
>>>>>>>> wasn't in
>>>>>>>> >> my usual spot with my laptop when I called them.
>>>>>>>> >
>>>>>>>> > Clear view of the sky is king for Starlink reliability. I've got
>>>>>>>> my dishy mounted on the back fence, looking up over an empty field, so it's
>>>>>>>> pretty much best-case scenario here.
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >> --
>>>>>>>> >>
>>>>>>>> >> ****************************************************************
>>>>>>>> >> Dr. Ulrich Speidel
>>>>>>>> >>
>>>>>>>> >> School of Computer Science
>>>>>>>> >>
>>>>>>>> >> Room 303S.594 (City Campus)
>>>>>>>> >>
>>>>>>>> >> The University of Auckland
>>>>>>>> >> u.speidel@auckland.ac.nz
>>>>>>>> >> http://www.cs.auckland.ac.nz/~ulrich/
>>>>>>>> >> ****************************************************************
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #1.1.2: Type: text/html, Size: 20796 bytes --]
[-- Attachment #1.2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #1.3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #1.4: image.png --]
[-- Type: image/png, Size: 296989 bytes --]
[-- Attachment #1.5: image.png --]
[-- Type: image/png, Size: 222655 bytes --]
[-- Attachment #2: 433-rtt-low.png --]
[-- Type: image/png, Size: 45065 bytes --]
[-- Attachment #3: 433-rtt.png --]
[-- Type: image/png, Size: 74416 bytes --]
[-- Attachment #4: waveformbroken_windowscale.png --]
[-- Type: image/png, Size: 64924 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
[not found] ` <CALjsLJv5cbfHfkxqHnbjxoVHczspYvxc_jrshzs1CpHLEDWyew@mail.gmail.com>
@ 2023-01-14 14:20 ` Nathan Owens
2023-01-14 15:53 ` Dave Taht
2023-01-14 15:52 ` Dave Taht
1 sibling, 1 reply; 49+ messages in thread
From: Nathan Owens @ 2023-01-14 14:20 UTC (permalink / raw)
To: Dave Taht; +Cc: Jonathan Bennett, starlink, Sina Khanifar
[-- Attachment #1.1: Type: text/plain, Size: 17272 bytes --]
Sorry, this was rejected from the listserv; here's the Google Drive link
for all 3 irtt runs visualized:
https://drive.google.com/drive/folders/1SbUyvdlfdllgqrEcSSxvGdn3yN40vSx-?usp=share_link
On Sat, Jan 14, 2023 at 6:14 AM Nathan Owens <nathan@nathan.io> wrote:
> I realized I goofed up my visualizations, so here are all of them again.
> I see a ton of loss on all of these!
>
> While Luis was downloading/uploading... oof.
> [image: Screenshot 2023-01-14 at 6.13.50 AM.png]
>
> [image: Screenshot 2023-01-14 at 6.12.53 AM.png]
>
>
>
>
> On Fri, Jan 13, 2023 at 3:44 PM Dave Taht <dave.taht@gmail.com> wrote:
>
>> I am forced to conclude that Waveform's upload test is broken in some
>> cases and has been for some time. All the joy many have been feeling about
>> their uplink speeds has to be cast away. Felt good, though, didn't it?
>>
>> There is a *slight* possibility that there is some FQ in the Starlink
>> network on TCP port 443. The first plot shows normal RTT growth and loss
>> for CUBIC; the second, a low-rate flow that is holding the line... but I
>> didn't check to see whether these were sequential or parallel.
>>
>> The last is a cwnd plot, clearly showing the CUBIC sawtooth on the
>> upload.
>>
>> It's weirdly nice to be able to follow a port 443 stream, see the TLS
>> handshake pass the website name en clair, watch the rest go dark, and
>> still trace the behaviors.
>>
>> We're so going to lose this analytical ability with QUIC. I'm enjoying it
>> while we still can.
>>
>>
>> On Fri, Jan 13, 2023 at 2:10 PM Jonathan Bennett via Starlink <
>> starlink@lists.bufferbloat.net> wrote:
>>
>>> The irtt run finished a few seconds before the flent run, but here are
>>> the results:
>>>
>>>
>>> https://drive.google.com/file/d/1FKve13ssUMW1LLWOXLM2931Yx6uMHw8K/view?usp=share_link
>>>
>>> https://drive.google.com/file/d/1ZXd64A0pfUedLr3FyhDNTHA7vxv8S2Gk/view?usp=share_link
>>>
>>> https://drive.google.com/file/d/1rx64UPQHHz3IMNiJtb1oFqtqw2DvhvEE/view?usp=share_link
>>>
>>>
>>> [image: image.png]
>>> [image: image.png]
>>>
>>>
>>> Jonathan Bennett
>>> Hackaday.com
>>>
>>>
>>> On Fri, Jan 13, 2023 at 3:30 PM Nathan Owens via Starlink <
>>> starlink@lists.bufferbloat.net> wrote:
>>>
>>>> Here's Luis's run -- the top line below the edge of the graph is 200ms
>>>> [image: Screenshot 2023-01-13 at 1.30.03 PM.png]
>>>>
>>>>
>>>> On Fri, Jan 13, 2023 at 1:25 PM Luis A. Cornejo <
>>>> luis.a.cornejo@gmail.com> wrote:
>>>>
>>>>> Dave,
>>>>>
>>>>> Here is a run the way I think you wanted it.
>>>>>
>>>>> irtt running for 5 min to your dallas server, followed by a waveform
>>>>> test, then a few seconds of inactivity, cloudflare test, a few more secs of
>>>>> nothing, flent test to dallas. Packet capture is currently uploading (will
>>>>> be done in 20 min or so), irtt JSON also in there (.zip file):
>>>>>
>>>>>
>>>>> https://drive.google.com/drive/folders/1FLWqrzNcM8aK-ZXQywNkZGFR81Fnzn-F?usp=share_link
>>>>>
>>>>> -Luis
>>>>>
>>>>> On Fri, Jan 13, 2023 at 2:50 PM Dave Taht via Starlink <
>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Jan 13, 2023 at 12:30 PM Nathan Owens <nathan@nathan.io>
>>>>>> wrote:
>>>>>>
>>>>>>> Here's the data visualization for Jonathan's data
>>>>>>>
>>>>>>> [image: Screenshot 2023-01-13 at 12.29.15 PM.png]
>>>>>>>
>>>>>>> You can see the path change at :12, :27, :42, :57 after the minute.
>>>>>>> Some paths are clearly busier than others with increased loss, latency, and
>>>>>>> jitter.
>>>>>>>
>>>>>>
>>>>>> I am so glad to see loss and bounded delay here. Also a bit of rigor
>>>>>> regarding what traffic was active locally vs on the path would be nice,
>>>>>> although it seems to line up with the known 15s starlink switchover thing
>>>>>> (need a name for this). In this case, doing a few speedtests
>>>>>> while that irtt is running would show the impact(s) of whatever else
>>>>>> they are up to.
>>>>>>
>>>>>> It would also be my hope that the loss distribution in the middle
>>>>>> portion of this data is good, not bursty, but we don't have a tool to take
>>>>>> that apart. (I am so hopeless at json)
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jan 13, 2023 at 10:09 AM Nathan Owens <nathan@nathan.io>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I’ll run my visualization code on this result this afternoon and
>>>>>>>> report back!
>>>>>>>>
>>>>>>>> On Fri, Jan 13, 2023 at 9:41 AM Jonathan Bennett via Starlink <
>>>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>>>
>>>>>>>>> The irtt command, run with normal, light usage:
>>>>>>>>> https://drive.google.com/file/d/1SiVCiUYnx7nDTxIVOY5w-z20S2O059rA/view?usp=share_link
>>>>>>>>>
>>>>>>>>> Jonathan Bennett
>>>>>>>>> Hackaday.com
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Jan 13, 2023 at 11:26 AM Dave Taht <dave.taht@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> packet caps would be nice... all this is very exciting news.
>>>>>>>>>>
>>>>>>>>>> I'd so love for one or more of y'all reporting such great uplink
>>>>>>>>>> results nowadays to duplicate and re-plot the original irtt tests
>>>>>>>>>> we
>>>>>>>>>> did:
>>>>>>>>>>
>>>>>>>>>> irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net
>>>>>>>>>> -o whatever.json
>>>>>>>>>>
>>>>>>>>>> They MUST have changed their scheduling to get such amazing uplink
>>>>>>>>>> results, in addition to better queue management.
>>>>>>>>>>
>>>>>>>>>> (for the record, my servers are de, london, fremont, sydney,
>>>>>>>>>> dallas,
>>>>>>>>>> newark, atlanta, singapore, mumbai)
>>>>>>>>>>
>>>>>>>>>> There's an R and gnuplot script for plotting that output around
>>>>>>>>>> here
>>>>>>>>>> somewhere (I have largely personally put down the starlink
>>>>>>>>>> project,
>>>>>>>>>> loaning out mine) - that went by on this list... I should have
>>>>>>>>>> written
>>>>>>>>>> a blog entry so I can find that stuff again.
>>>>>>>>>>
>>>>>>>>>> On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink
>>>>>>>>>> <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>> >
>>>>>>>>>> >
>>>>>>>>>> > On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink <
>>>>>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>> >>
>>>>>>>>>> >> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
>>>>>>>>>> >> >
>>>>>>>>>> >> > From Auckland, New Zealand, using a roaming subscription, it
>>>>>>>>>> puts me
>>>>>>>>>> >> > in touch with a server 2000 km away. OK then:
>>>>>>>>>> >> >
>>>>>>>>>> >> >
>>>>>>>>>> >> > IP address: nix six.
>>>>>>>>>> >> >
>>>>>>>>>> >> > My thoughts shall follow later.
>>>>>>>>>> >>
>>>>>>>>>> >> OK, so here we go.
>>>>>>>>>> >>
>>>>>>>>>> >> I'm always a bit skeptical when it comes to speed tests -
>>>>>>>>>> they're really
>>>>>>>>>> >> laden with so many caveats that it's not funny. I took our new
>>>>>>>>>> work
>>>>>>>>>> >> Starlink kit home in December to give it a try and the other
>>>>>>>>>> day finally
>>>>>>>>>> >> got around to set it up. It's on a roaming subscription
>>>>>>>>>> because our
>>>>>>>>>> >> badly built-up campus really isn't ideal in terms of a clear
>>>>>>>>>> view of the
>>>>>>>>>> >> sky. Oh - and did I mention that I used the Starlink Ethernet
>>>>>>>>>> adapter,
>>>>>>>>>> >> not the WiFi?
>>>>>>>>>> >>
>>>>>>>>>> >> Caveat 1: Location, location. I live in a place where the best
>>>>>>>>>> Starlink
>>>>>>>>>> >> promises is about 1/3 in terms of data rate you can actually
>>>>>>>>>> get from
>>>>>>>>>> >> fibre to the home at under half of Starlink's price. Read:
>>>>>>>>>> There are few
>>>>>>>>>> >> Starlink users around. I might be the only one in my suburb.
>>>>>>>>>> >>
>>>>>>>>>> >> Caveat 2: Auckland has three Starlink gateways close by:
>>>>>>>>>> Clevedon (which
>>>>>>>>>> >> is at a stretch daytrip cycling distance from here), Te Hana
>>>>>>>>>> and Puwera,
>>>>>>>>>> >> the most distant of the three and about 130 km away from me as
>>>>>>>>>> the crow
>>>>>>>>>> >> flies. Read: My dishy can use any satellite that any of these
>>>>>>>>>> three can
>>>>>>>>>> >> see, and then depending on where I put it and how much of the
>>>>>>>>>> southern
>>>>>>>>>> >> sky it can see, maybe also the one in Hinds, 840 km away,
>>>>>>>>>> although that
>>>>>>>>>> >> is obviously stretching it a bit. Either way, that's plenty of
>>>>>>>>>> options
>>>>>>>>>> >> for my bits to travel without needing a lot of handovers. Why?
>>>>>>>>>> Easy: If
>>>>>>>>>> >> your nearest teleport is close by, then the set of satellites
>>>>>>>>>> that the
>>>>>>>>>> >> teleport can see and the set that you can see is almost the
>>>>>>>>>> same, so you
>>>>>>>>>> >> can essentially stick with the same satellite while it's in
>>>>>>>>>> view for you
>>>>>>>>>> >> because it'll also be in view for the teleport. Pretty much
>>>>>>>>>> any bird
>>>>>>>>>> >> above you will do.
>>>>>>>>>> >>
>>>>>>>>>> >> And because I don't get a lot of competition from other users
>>>>>>>>>> in my area
>>>>>>>>>> >> vying for one of the few available satellites that can see
>>>>>>>>>> both us and
>>>>>>>>>> >> the teleport, this is about as good as it gets at 37S
>>>>>>>>>> latitude. If I'd
>>>>>>>>>> >> want it any better, I'd have to move a lot further south.
>>>>>>>>>> >>
>>>>>>>>>> >> It'd be interesting to hear from Jonathan what the
>>>>>>>>>> availability of home
>>>>>>>>>> >> broadband is like in the Dallas area. I note that it's at a
>>>>>>>>>> lower
>>>>>>>>>> >> latitude (33N) than Auckland, but the difference isn't huge. I
>>>>>>>>>> notice
>>>>>>>>>> >> two teleports each about 160 km away, which is also not too
>>>>>>>>>> bad. I also
>>>>>>>>>> >> note Starlink availability in the area is restricted at the
>>>>>>>>>> moment -
>>>>>>>>>> >> oversubscribed? But if Jonathan gets good data rates, then
>>>>>>>>>> that means
>>>>>>>>>> >> that competition for bird capacity can't be too bad - for
>>>>>>>>>> whatever reason.
>>>>>>>>>> >
>>>>>>>>>> > I'm in Southwest Oklahoma, but Dallas is the nearby Starlink
>>>>>>>>>> gateway. In cities, like Dallas, and Lawton where I live, there are good
>>>>>>>>>> broadband options. But there are also many people that live outside cities,
>>>>>>>>>> and the options are much worse. The low density userbase in rural Oklahoma
>>>>>>>>>> and Texas is probably ideal conditions for Starlink.
>>>>>>>>>> >>
>>>>>>>>>> >>
>>>>>>>>>> >> Caveat 3: Backhaul. There isn't just one queue between me and
>>>>>>>>>> whatever I
>>>>>>>>>> >> talk to in terms of my communications. Traceroute shows about
>>>>>>>>>> 10 hops
>>>>>>>>>> >> between me and the University of Auckland via Starlink. That's
>>>>>>>>>> 10
>>>>>>>>>> >> queues, not one. Many of them will have cross traffic. So it's
>>>>>>>>>> a bit
>>>>>>>>>> >> hard to tell where our packets really get to wait or where
>>>>>>>>>> they get
>>>>>>>>>> >> dropped. The insidious bit here is that a lot of them will be
>>>>>>>>>> between 1
>>>>>>>>>> >> Gb/s and 10 Gb/s links, and with a bit of cross traffic, they
>>>>>>>>>> can all
>>>>>>>>>> >> turn into bottlenecks. This isn't like a narrowband GEO link
>>>>>>>>>> of a few
>>>>>>>>>> >> Mb/s where it's obvious where the dominant long latency
>>>>>>>>>> bottleneck in
>>>>>>>>>> >> your TCP connection's path is. Read: It's pretty hard to tell
>>>>>>>>>> whether a
>>>>>>>>>> >> drop in "speed" is due to a performance issue in the Starlink
>>>>>>>>>> system or
>>>>>>>>>> >> somewhere between Starlink's systems and the target system.
>>>>>>>>>> >>
>>>>>>>>>> >> I see RTTs here between 20 ms and 250 ms, where the physical
>>>>>>>>>> latency
>>>>>>>>>> >> should be under 15 ms. So there's clearly a bit of buffer here
>>>>>>>>>> along the
>>>>>>>>>> >> chain that occasionally fills up.
>>>>>>>>>> >>
>>>>>>>>>> >> Caveat 4: Handovers. Handover between birds and teleports is
>>>>>>>>>> inevitably
>>>>>>>>>> >> associated with a change in RTT and in most cases also
>>>>>>>>>> available
>>>>>>>>>> >> bandwidth. Plus your packets now arrive at a new queue on a new
>>>>>>>>>> >> satellite while your TCP is still trying to respond to
>>>>>>>>>> whatever it
>>>>>>>>>> >> thought the queue on the previous bird was doing. Read:
>>>>>>>>>> Whatever your
>>>>>>>>>> >> cwnd is immediately after a handover, it's probably not what
>>>>>>>>>> it should be.
>>>>>>>>>> >>
>>>>>>>>>> >> I ran a somewhat hamstrung (sky view restricted) set of four
>>>>>>>>>> Ookla
>>>>>>>>>> >> speedtest.net tests each to five local servers. Average
>>>>>>>>>> upload rate was
>>>>>>>>>> >> 13 Mb/s, average down 75.5 Mb/s. Upload to the server of the
>>>>>>>>>> ISP that
>>>>>>>>>> >> Starlink seems to be buying its local connectivity from (Vocus
>>>>>>>>>> Group)
>>>>>>>>>> >> varied between 3.04 and 14.38 Mb/s, download between 23.33 and
>>>>>>>>>> 52.22
>>>>>>>>>> >> Mb/s, with RTTs between 37 and 56 ms not correlating well to
>>>>>>>>>> rates
>>>>>>>>>> >> observed. In fact, they were the ISP with consistently the
>>>>>>>>>> worst rates.
>>>>>>>>>> >>
>>>>>>>>>> >> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s
>>>>>>>>>> up and
>>>>>>>>>> >> between 106.5 and 183.8 Mb/s down, again with RTTs badly
>>>>>>>>>> correlating
>>>>>>>>>> >> with rates. Average RTT was the same as for Vocus.
>>>>>>>>>> >>
>>>>>>>>>> >> Note the variation though: More or less a factor of two
>>>>>>>>>> between highest
>>>>>>>>>> >> and lowest rates for each ISP. Did MyRepublic just get lucky
>>>>>>>>>> in my
>>>>>>>>>> >> tests? Or is there something systematic behind this? Way too
>>>>>>>>>> few tests
>>>>>>>>>> >> to tell.
>>>>>>>>>> >>
>>>>>>>>>> >> What these tests do is establish a ballpark.
>>>>>>>>>> >>
>>>>>>>>>> >> I'm currently repeating tests with dish placed on a trestle
>>>>>>>>>> closer to
>>>>>>>>>> >> the heavens. This seems to have translated into fewer outages
>>>>>>>>>> / ping
>>>>>>>>>> >> losses (around 1/4 of what I had yesterday with dishy on the
>>>>>>>>>> ground on
>>>>>>>>>> >> my deck). Still good enough for a lengthy video Skype call
>>>>>>>>>> with my folks
>>>>>>>>>> >> in Germany, although they did comment about reduced video
>>>>>>>>>> quality. But
>>>>>>>>>> >> maybe that was the lighting or the different background as I
>>>>>>>>>> wasn't in
>>>>>>>>>> >> my usual spot with my laptop when I called them.
>>>>>>>>>> >
>>>>>>>>>> > Clear view of the sky is king for Starlink reliability. I've
>>>>>>>>>> got my dishy mounted on the back fence, looking up over an empty field, so
>>>>>>>>>> it's pretty much best-case scenario here.
>>>>>>>>>> >>
>>>>>>>>>> >>
>>>>>>>>>> >> --
>>>>>>>>>> >>
>>>>>>>>>> >>
>>>>>>>>>> ****************************************************************
>>>>>>>>>> >> Dr. Ulrich Speidel
>>>>>>>>>> >>
>>>>>>>>>> >> School of Computer Science
>>>>>>>>>> >>
>>>>>>>>>> >> Room 303S.594 (City Campus)
>>>>>>>>>> >>
>>>>>>>>>> >> The University of Auckland
>>>>>>>>>> >> u.speidel@auckland.ac.nz
>>>>>>>>>> >> http://www.cs.auckland.ac.nz/~ulrich/
>>>>>>>>>> >>
>>>>>>>>>> ****************************************************************
>>>>>>>>>> >>
>>>>>>>>>> >>
>>>>>>>>>> >>
>>>>>>>>>> >> _______________________________________________
>>>>>>>>>> >> Starlink mailing list
>>>>>>>>>> >> Starlink@lists.bufferbloat.net
>>>>>>>>>> >> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>> >
>>>>>>>>>> > _______________________________________________
>>>>>>>>>> > Starlink mailing list
>>>>>>>>>> > Starlink@lists.bufferbloat.net
>>>>>>>>>> > https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> This song goes out to all the folk that thought Stadia would work:
>>>>>>>>>>
>>>>>>>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Starlink mailing list
>>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>> --
>>>>>> This song goes out to all the folk that thought Stadia would work:
>>>>>>
>>>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>> _______________________________________________
>>>>>> Starlink mailing list
>>>>>> Starlink@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>
>>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>
>>
>> --
>> This song goes out to all the folk that thought Stadia would work:
>>
>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>> Dave Täht CEO, TekLibre, LLC
>>
>
[-- Attachment #1.2: Type: text/html, Size: 22288 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #4: image.png --]
[-- Type: image/png, Size: 296989 bytes --]
[-- Attachment #5: image.png --]
[-- Type: image/png, Size: 222655 bytes --]
[-- Attachment #6: Screenshot 2023-01-14 at 6.12.53 AM.png --]
[-- Type: image/png, Size: 921914 bytes --]
[-- Attachment #7: Screenshot 2023-01-14 at 6.13.50 AM.png --]
[-- Type: image/png, Size: 337863 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
[not found] ` <CALjsLJv5cbfHfkxqHnbjxoVHczspYvxc_jrshzs1CpHLEDWyew@mail.gmail.com>
2023-01-14 14:20 ` Nathan Owens
@ 2023-01-14 15:52 ` Dave Taht
1 sibling, 0 replies; 49+ messages in thread
From: Dave Taht @ 2023-01-14 15:52 UTC (permalink / raw)
To: Nathan Owens; +Cc: Jonathan Bennett, starlink, Sina Khanifar
[-- Attachment #1.1: Type: text/plain, Size: 18546 bytes --]
On Sat, Jan 14, 2023 at 6:14 AM Nathan Owens <nathan@nathan.io> wrote:
> I realized I goofed up my visualizations, here's all of them again:
> I see a ton of loss on all of these!
>
Lovely pics, thank you. The sawtooths are obvious for a change, so to my
mind they've cut the buffering back quite a bit since we last checked.
The amount of TCP-related loss remains low, at least according to the
packet captures (not enough loss for tcp! a single BDP of fifo should
settle in at around 40ms of RTT, not 160+!).
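
(A back-of-the-envelope in python, using assumed round numbers - a ~20ms
base RTT and ~10Mbit of uplink, neither measured off these captures:)

    # rough BDP arithmetic; the rate and base RTT below are assumptions
    base_rtt_s = 0.020       # ~20 ms base path RTT (assumed)
    rate_bps = 10_000_000    # ~10 Mbit/s uplink (assumed)

    bdp_bytes = rate_bps * base_rtt_s / 8      # one BDP ~= 25 kB
    fifo_delay_s = bdp_bytes * 8 / rate_bps    # a full 1-BDP fifo adds ~20 ms
    print((base_rtt_s + fifo_delay_s) * 1e3)   # ~40 ms loaded RTT; the
    # observed 160+ ms implies a standing queue of several BDPs
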
Kind of related to my "glitches per minute" metric described here:
https://blog.cerowrt.org/post/speedtests/
What I'm interested in is the distribution pattern of the irtt udp loss. A
way to plot that might be along the -1 vertical axis, incremented down one
further for every consecutive loss, with the dots colored
blue->green->yellow->orange->red - voip can survive 3/5 losses in a row
fairly handily. Two ways:
+ axis as it is now

---------------------------------got a packet---------------------------------
B   loss 1                                 loss 1
G   2 losses in a row                      loss 2
Y   3rd loss                               loss 3
O   4th loss
R   5th loss
R   ...
R
R

and/or the same as above but plotting the rx vs tx loss
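
(A minimal sketch of that run-length counting in python - this assumes the
irtt -o JSON carries a "round_trips" list whose entries have a "lost"
field; key names may differ by irtt version, so check against a real
capture before trusting it:)

    import json
    from collections import Counter

    # assumed layout: round_trips[*].lost in
    # {"false", "true", "true_down", "true_up"} - verify on your file
    with open("whatever.json") as f:
        rts = json.load(f)["round_trips"]
    lost = [rt.get("lost") not in (False, "false") for rt in rts]

    # histogram of consecutive-loss run lengths; the B->R color above
    # is just min(run_length, 5)
    runs, run = Counter(), 0
    for is_lost in lost + [False]:     # sentinel flushes a trailing run
        if is_lost:
            run += 1
        elif run:
            runs[run] += 1
            run = 0
    for length in sorted(runs):
        print(f"{length} in a row: {runs[length]} times")
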
> [...]
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #1.2: Type: text/html, Size: 24676 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #4: image.png --]
[-- Type: image/png, Size: 296989 bytes --]
[-- Attachment #5: image.png --]
[-- Type: image/png, Size: 222655 bytes --]
[-- Attachment #6: Screenshot 2023-01-14 at 6.12.53 AM.png --]
[-- Type: image/png, Size: 921914 bytes --]
[-- Attachment #7: Screenshot 2023-01-14 at 6.13.50 AM.png --]
[-- Type: image/png, Size: 337863 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-14 14:20 ` Nathan Owens
@ 2023-01-14 15:53 ` Dave Taht
2023-01-14 16:33 ` Dave Taht
0 siblings, 1 reply; 49+ messages in thread
From: Dave Taht @ 2023-01-14 15:53 UTC (permalink / raw)
To: Nathan Owens; +Cc: Jonathan Bennett, starlink, Sina Khanifar
[-- Attachment #1.1: Type: text/plain, Size: 19271 bytes --]
Lovely pics, thank you. The sawtooths are obvious for a change, so to my
mind they've cut the buffering back quite a bit since we last checked.
The amount of TCP-related loss remains low, at least according to the
packet captures (not enough loss for tcp! a single BDP of fifo should
settle in at around 40ms of RTT, not 160+!).
Kind of related to my "glitches per minute" metric described here:
https://blog.cerowrt.org/post/speedtests/
What I'm interested in is the distribution pattern of the irtt udp loss. A
way to plot that might be along the -1 vertical axis, incremented down one
further for every consecutive loss, with the dots colored
blue->green->yellow->orange->red - voip can survive 3/5 losses in a row
fairly handily. Two ways:
+ axis as it is now

---------------------------------got a packet---------------------------------
B   loss 1                                 loss 1
G   2 losses in a row                      loss 2
Y   3rd loss                               loss 3
O   4th loss
R   5th loss
R   ...
R
R

and/or the same as above but plotting the rx vs tx loss
On Sat, Jan 14, 2023 at 6:20 AM Nathan Owens <nathan@nathan.io> wrote:
> Sorry, it was rejected from the listserv; here's the google drive link for
> all 3 irtt runs visualized:
>
>
> https://drive.google.com/drive/folders/1SbUyvdlfdllgqrEcSSxvGdn3yN40vSx-?usp=share_link
> [...]
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #1.2: Type: text/html, Size: 25327 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #4: image.png --]
[-- Type: image/png, Size: 296989 bytes --]
[-- Attachment #5: image.png --]
[-- Type: image/png, Size: 222655 bytes --]
[-- Attachment #6: Screenshot 2023-01-14 at 6.12.53 AM.png --]
[-- Type: image/png, Size: 921914 bytes --]
[-- Attachment #7: Screenshot 2023-01-14 at 6.13.50 AM.png --]
[-- Type: image/png, Size: 337863 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] insanely great waveform result for starlink
2023-01-14 15:53 ` Dave Taht
@ 2023-01-14 16:33 ` Dave Taht
0 siblings, 0 replies; 49+ messages in thread
From: Dave Taht @ 2023-01-14 16:33 UTC (permalink / raw)
To: Nathan Owens; +Cc: Jonathan Bennett, starlink, Sina Khanifar
[-- Attachment #1.1: Type: text/plain, Size: 20680 bytes --]
On Sat, Jan 14, 2023 at 7:53 AM Dave Taht <dave.taht@gmail.com> wrote:
> Lovely pics, thank you. The sawtooths are obvious for a change, so to my
> mind they've cut the buffering back quite a bit since we last checked.
> The amount of TCP-related loss remains low, at least according to the
> packet captures (not enough loss for tcp! a single BDP of fifo should
> settle in at around 40ms of RTT, not 160+!).
>
> Kind of related to my "glitches per minute" metric described here:
> https://blog.cerowrt.org/post/speedtests/
>
> What I'm interested in is the distribution pattern of the irtt udp loss. A
> way to plot that might be along the -1 vertical axis, incremented down one
> further for every consecutive loss, with the dots colored
> blue->green->yellow->orange->red - voip can survive 3/5 losses in a row
> fairly handily. Two ways:
>
> + axis as it is now
>
> ---------------------------------got a packet---------------------------------
> B   loss 1                                 loss 1
> G   2 losses in a row                      loss 2
> Y   3rd loss                               loss 3
> O   4th loss
> R   5th loss
> R   ...
>
> and/or the same as above but plotting the rx vs tx loss
>
Now that I've had a little coffee: a single packet recovered after the loss
of X in a row won't return the typical voip codec to "good", so step back
one color per good packet instead. Assuming we start at the 5th loss above
and then get 4 good packets in a row, that would look like:
------------------------------------------------------OYGB-------------------------
B   loss 1
G   loss 2
Y   3rd loss
O   4th loss
R   5th loss
R   ...
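
(That hysteresis is easy to sketch in python, under the same hedges as the
earlier snippet - the one-color-per-good-packet recovery walk is exactly
the OYGB above:)

    COLORS = ["ok", "B", "G", "Y", "O", "R"]   # depth 0 = on the axis

    def voip_health(lost, floor=5):
        # +1 depth per loss (capped at R), -1 per good packet: one good
        # packet after a burst steps back a single color rather than
        # snapping straight back to "ok"
        depth, out = 0, []
        for is_lost in lost:
            depth = min(depth + 1, floor) if is_lost else max(depth - 1, 0)
            out.append(COLORS[depth])
        return out

    # five losses then four good packets: B G Y O R, then O Y G B
    print(voip_health([True] * 5 + [False] * 4))
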
>
> On Sat, Jan 14, 2023 at 6:20 AM Nathan Owens <nathan@nathan.io> wrote:
>
>> Sorry, it was rejected from the listserv; here's the google drive link for
>> all 3 irtt runs visualized:
>>
>>
>> https://drive.google.com/drive/folders/1SbUyvdlfdllgqrEcSSxvGdn3yN40vSx-?usp=share_link
>>
>> On Sat, Jan 14, 2023 at 6:14 AM Nathan Owens <nathan@nathan.io> wrote:
>>
>>> I realized I goofed up my visualizations, here's all of them again:
>>> I see a ton of loss on all of these!
>>>
>>> While Luis was downloading/uploading... oof.
>>> [image: Screenshot 2023-01-14 at 6.13.50 AM.png]
>>>
>>> [image: Screenshot 2023-01-14 at 6.12.53 AM.png]
>>>
>>>
>>>
>>>
>>> On Fri, Jan 13, 2023 at 3:44 PM Dave Taht <dave.taht@gmail.com> wrote:
>>>
>>>> I am forced to conclude that waveform's upload test is broken in some
>>>> cases and has been for some time. All the joy many have been feeling about
>>>> their uplink speeds has to be cast away. Felt good, though, didn't it?
>>>>
>>>> There is a *slight* possibility that there is some fq in the starlink
>>>> network on tcp port 443. The first plot shows normal rtt growth and loss for
>>>> cubic; the second, a low-rate flow that is holding the line... but I didn't
>>>> check whether these were sequential or parallel.
>>>>
>>>> The last is a cwnd plot, clearly showing the cubic sawtooth on the
>>>> upload.
>>>>
>>>> It's weirdly nice to be able to follow a port 443 stream, see the tls
>>>> handshake put the website en clair, and the rest go dark, and still
>>>> trace the behaviors.
>>>>
>>>> We're so going to lose this analytical ability with QUIC. I'm enjoying
>>>> it while we still can.
>>>>
>>>>
>>>> On Fri, Jan 13, 2023 at 2:10 PM Jonathan Bennett via Starlink <
>>>> starlink@lists.bufferbloat.net> wrote:
>>>>
>>>>> The irtt run finished a few seconds before the flent run, but here are
>>>>> the results:
>>>>>
>>>>>
>>>>> https://drive.google.com/file/d/1FKve13ssUMW1LLWOXLM2931Yx6uMHw8K/view?usp=share_link
>>>>>
>>>>> https://drive.google.com/file/d/1ZXd64A0pfUedLr3FyhDNTHA7vxv8S2Gk/view?usp=share_link
>>>>>
>>>>> https://drive.google.com/file/d/1rx64UPQHHz3IMNiJtb1oFqtqw2DvhvEE/view?usp=share_link
>>>>>
>>>>>
>>>>> [image: image.png]
>>>>> [image: image.png]
>>>>>
>>>>>
>>>>> Jonathan Bennett
>>>>> Hackaday.com
>>>>>
>>>>>
>>>>> On Fri, Jan 13, 2023 at 3:30 PM Nathan Owens via Starlink <
>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>
>>>>>> Here's Luis's run -- the top line below the edge of the graph is
>>>>>> 200ms
>>>>>> [image: Screenshot 2023-01-13 at 1.30.03 PM.png]
>>>>>>
>>>>>>
>>>>>> On Fri, Jan 13, 2023 at 1:25 PM Luis A. Cornejo <
>>>>>> luis.a.cornejo@gmail.com> wrote:
>>>>>>
>>>>>>> Dave,
>>>>>>>
>>>>>>> Here is a run the way I think you wanted it.
>>>>>>>
>>>>>>> irtt running for 5 min to your dallas server, followed by a waveform
>>>>>>> test, then a few seconds of inactivity, cloudflare test, a few more secs of
>>>>>>> nothing, flent test to dallas. Packet capture is currently uploading (will
>>>>>>> be done in 20 min or so), irtt JSON also in there (.zip file):
>>>>>>>
>>>>>>>
>>>>>>> https://drive.google.com/drive/folders/1FLWqrzNcM8aK-ZXQywNkZGFR81Fnzn-F?usp=share_link
>>>>>>>
>>>>>>> -Luis
>>>>>>>
>>>>>>> On Fri, Jan 13, 2023 at 2:50 PM Dave Taht via Starlink <
>>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Jan 13, 2023 at 12:30 PM Nathan Owens <nathan@nathan.io>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Here's the data visualization for Johnathan's Data
>>>>>>>>>
>>>>>>>>> [image: Screenshot 2023-01-13 at 12.29.15 PM.png]
>>>>>>>>>
>>>>>>>>> You can see the path change at :12, :27, :42, :57 after the
>>>>>>>>> minute. Some paths are clearly busier than others with increased loss,
>>>>>>>>> latency, and jitter.
>>>>>>>>>
>>>>>>>>
>>>>>>>> I am so glad to see loss and bounded delay here. A bit more
>>>>>>>> rigor regarding what traffic was active locally vs on the path would be
>>>>>>>> nice, although it seems to line up with the known 15s starlink switchover
>>>>>>>> thing (we need a name for this). In this case, doing a few speedtests
>>>>>>>> while that irtt is running would show the impact(s) of whatever
>>>>>>>> else they are up to.
>>>>>>>>
>>>>>>>> It would also be my hope that the loss distribution in the middle
>>>>>>>> portion of this data is good, not bursty, but we don't have a tool to
>>>>>>>> take that apart. (I am so hopeless at JSON.)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Jan 13, 2023 at 10:09 AM Nathan Owens <nathan@nathan.io>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> I’ll run my visualization code on this result this afternoon and
>>>>>>>>>> report back!
>>>>>>>>>>
>>>>>>>>>> On Fri, Jan 13, 2023 at 9:41 AM Jonathan Bennett via Starlink <
>>>>>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>>
>>>>>>>>>>> The irtt command, run with normal, light usage:
>>>>>>>>>>> https://drive.google.com/file/d/1SiVCiUYnx7nDTxIVOY5w-z20S2O059rA/view?usp=share_link
>>>>>>>>>>>
>>>>>>>>>>> Jonathan Bennett
>>>>>>>>>>> Hackaday.com
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Jan 13, 2023 at 11:26 AM Dave Taht <dave.taht@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> packet caps would be nice... all this is very exciting news.
>>>>>>>>>>>>
>>>>>>>>>>>> I'd so love for one or more of y'all reporting such great uplink
>>>>>>>>>>>> results nowadays to duplicate and re-plot the original irtt
>>>>>>>>>>>> tests we
>>>>>>>>>>>> did:
>>>>>>>>>>>>
>>>>>>>>>>>> irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net
>>>>>>>>>>>> -o whatever.json
>>>>>>>>>>>>
>>>>>>>>>>>> They MUST have changed their scheduling to get such amazing
>>>>>>>>>>>> uplink
>>>>>>>>>>>> results, in addition to better queue management.
>>>>>>>>>>>>
>>>>>>>>>>>> (for the record, my servers are de, london, fremont, sydney,
>>>>>>>>>>>> dallas,
>>>>>>>>>>>> newark, atlanta, singapore, mumbai)
>>>>>>>>>>>>
>>>>>>>>>>>> There's an R and gnuplot script for plotting that output around
>>>>>>>>>>>> here
>>>>>>>>>>>> somewhere (I have largely personally put down the starlink
>>>>>>>>>>>> project,
>>>>>>>>>>>> loaning out mine) - that went by on this list... I should have
>>>>>>>>>>>> written
>>>>>>>>>>>> a blog entry so I can find that stuff again.
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink
>>>>>>>>>>>> <starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>>>> >
>>>>>>>>>>>> >
>>>>>>>>>>>> > On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink <
>>>>>>>>>>>> starlink@lists.bufferbloat.net> wrote:
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
>>>>>>>>>>>> >> >
>>>>>>>>>>>> >> > From Auckland, New Zealand, using a roaming subscription,
>>>>>>>>>>>> it puts me
>>>>>>>>>>>> >> > in touch with a server 2000 km away. OK then:
>>>>>>>>>>>> >> >
>>>>>>>>>>>> >> >
>>>>>>>>>>>> >> > IP address: nix six.
>>>>>>>>>>>> >> >
>>>>>>>>>>>> >> > My thoughts shall follow later.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> OK, so here we go.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> I'm always a bit skeptical when it comes to speed tests -
>>>>>>>>>>>> they're really
>>>>>>>>>>>> >> laden with so many caveats that it's not funny. I took our
>>>>>>>>>>>> new work
>>>>>>>>>>>> >> Starlink kit home in December to give it a try and the other
>>>>>>>>>>>> day finally
>>>>>>>>>>>> >> got around to set it up. It's on a roaming subscription
>>>>>>>>>>>> because our
>>>>>>>>>>>> >> badly built-up campus really isn't ideal in terms of a clear
>>>>>>>>>>>> view of the
>>>>>>>>>>>> >> sky. Oh - and did I mention that I used the Starlink
>>>>>>>>>>>> Ethernet adapter,
>>>>>>>>>>>> >> not the WiFi?
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> Caveat 1: Location, location. I live in a place where the
>>>>>>>>>>>> best Starlink
>>>>>>>>>>>> >> promises is about 1/3 of the data rate you can actually
>>>>>>>>>>>> get from
>>>>>>>>>>>> >> fibre to the home, which costs under half of Starlink's price.
>>>>>>>>>>>> Read: There are few
>>>>>>>>>>>> >> Starlink users around. I might be the only one in my suburb.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> Caveat 2: Auckland has three Starlink gateways close by:
>>>>>>>>>>>> Clevedon (which
>>>>>>>>>>>> >> is at a stretch daytrip cycling distance from here), Te Hana
>>>>>>>>>>>> and Puwera,
>>>>>>>>>>>> >> the most distant of the three and about 130 km away from me
>>>>>>>>>>>> as the crow
>>>>>>>>>>>> >> flies. Read: My dishy can use any satellite that any of
>>>>>>>>>>>> these three can
>>>>>>>>>>>> >> see, and then depending on where I put it and how much of
>>>>>>>>>>>> the southern
>>>>>>>>>>>> >> sky it can see, maybe also the one in Hinds, 840 km away,
>>>>>>>>>>>> although that
>>>>>>>>>>>> >> is obviously stretching it a bit. Either way, that's plenty
>>>>>>>>>>>> of options
>>>>>>>>>>>> >> for my bits to travel without needing a lot of handovers.
>>>>>>>>>>>> Why? Easy: If
>>>>>>>>>>>> >> your nearest teleport is close by, then the set of
>>>>>>>>>>>> satellites that the
>>>>>>>>>>>> >> teleport can see and the set that you can see is almost the
>>>>>>>>>>>> same, so you
>>>>>>>>>>>> >> can essentially stick with the same satellite while it's in
>>>>>>>>>>>> view for you
>>>>>>>>>>>> >> because it'll also be in view for the teleport. Pretty much
>>>>>>>>>>>> any bird
>>>>>>>>>>>> >> above you will do.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> And because I don't get a lot of competition from other
>>>>>>>>>>>> users in my area
>>>>>>>>>>>> >> vying for one of the few available satellites that can see
>>>>>>>>>>>> both us and
>>>>>>>>>>>> >> the teleport, this is about as good as it gets at 37S
>>>>>>>>>>>> latitude. If I'd
>>>>>>>>>>>> >> want it any better, I'd have to move a lot further south.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> It'd be interesting to hear from Jonathan what the
>>>>>>>>>>>> availability of home
>>>>>>>>>>>> >> broadband is like in the Dallas area. I note that it's at a
>>>>>>>>>>>> lower
>>>>>>>>>>>> >> latitude (33N) than Auckland, but the difference isn't huge.
>>>>>>>>>>>> I notice
>>>>>>>>>>>> >> two teleports each about 160 km away, which is also not too
>>>>>>>>>>>> bad. I also
>>>>>>>>>>>> >> note Starlink availability in the area is restricted at the
>>>>>>>>>>>> moment -
>>>>>>>>>>>> >> oversubscribed? But if Jonathan gets good data rates, then
>>>>>>>>>>>> that means
>>>>>>>>>>>> >> that competition for bird capacity can't be too bad - for
>>>>>>>>>>>> whatever reason.
>>>>>>>>>>>> >
>>>>>>>>>>>> > I'm in Southwest Oklahoma, but Dallas is the nearby Starlink
>>>>>>>>>>>> gateway. In cities like Dallas, and Lawton where I live, there are good
>>>>>>>>>>>> broadband options. But there are also many people who live outside cities,
>>>>>>>>>>>> and the options are much worse. The low-density userbase in rural Oklahoma
>>>>>>>>>>>> and Texas probably makes for ideal conditions for Starlink.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> Caveat 3: Backhaul. There isn't just one queue between me
>>>>>>>>>>>> and whatever I
>>>>>>>>>>>> >> talk to in terms of my communications. Traceroute shows
>>>>>>>>>>>> about 10 hops
>>>>>>>>>>>> >> between me and the University of Auckland via Starlink.
>>>>>>>>>>>> That's 10
>>>>>>>>>>>> >> queues, not one. Many of them will have cross traffic. So
>>>>>>>>>>>> it's a bit
>>>>>>>>>>>> >> hard to tell where our packets really get to wait or where
>>>>>>>>>>>> they get
>>>>>>>>>>>> >> dropped. The insidious bit here is that a lot of them will
>>>>>>>>>>>> be between 1
>>>>>>>>>>>> >> Gb/s and 10 Gb/s links, and with a bit of cross traffic,
>>>>>>>>>>>> they can all
>>>>>>>>>>>> >> turn into bottlenecks. This isn't like a narrowband GEO link
>>>>>>>>>>>> of a few
>>>>>>>>>>>> >> Mb/s where it's obvious where the dominant long latency
>>>>>>>>>>>> bottleneck in
>>>>>>>>>>>> >> your TCP connection's path is. Read: It's pretty hard to
>>>>>>>>>>>> tell whether a
>>>>>>>>>>>> >> drop in "speed" is due to a performance issue in the
>>>>>>>>>>>> Starlink system or
>>>>>>>>>>>> >> somewhere between Starlink's systems and the target system.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> I see RTTs here between 20 ms and 250 ms, where the physical
>>>>>>>>>>>> latency
>>>>>>>>>>>> >> should be under 15 ms. So there's clearly a bit of buffer
>>>>>>>>>>>> here along the
>>>>>>>>>>>> >> chain that occasionally fills up.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> Caveat 4: Handovers. Handover between birds and teleports is
>>>>>>>>>>>> inevitably
>>>>>>>>>>>> >> associated with a change in RTT and in most cases also
>>>>>>>>>>>> available
>>>>>>>>>>>> >> bandwidth. Plus your packets now arrive at a new queue on a
>>>>>>>>>>>> new
>>>>>>>>>>>> >> satellite while your TCP is still trying to respond to
>>>>>>>>>>>> whatever it
>>>>>>>>>>>> >> thought the queue on the previous bird was doing. Read:
>>>>>>>>>>>> Whatever your
>>>>>>>>>>>> >> cwnd is immediately after a handover, it's probably not what
>>>>>>>>>>>> it should be.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> I ran a somewhat hamstrung (sky view restricted) set of four
>>>>>>>>>>>> Ookla
>>>>>>>>>>>> >> speedtest.net tests each to five local servers. Average
>>>>>>>>>>>> upload rate was
>>>>>>>>>>>> >> 13 Mb/s, average down 75.5 Mb/s. Upload to the server of the
>>>>>>>>>>>> ISP that
>>>>>>>>>>>> >> Starlink seems to be buying its local connectivity from
>>>>>>>>>>>> (Vocus Group)
>>>>>>>>>>>> >> varied between 3.04 and 14.38 Mb/s, download between 23.33
>>>>>>>>>>>> and 52.22
>>>>>>>>>>>> >> Mb/s, with RTTs between 37 and 56 ms not correlating well to
>>>>>>>>>>>> rates
>>>>>>>>>>>> >> observed. In fact, they were the ISP with consistently the
>>>>>>>>>>>> worst rates.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s
>>>>>>>>>>>> up and
>>>>>>>>>>>> >> between 106.5 and 183.8 Mb/s down, again with RTTs correlating
>>>>>>>>>>>> poorly
>>>>>>>>>>>> >> with rates. Average RTT was the same as for Vocus.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> Note the variation though: More or less a factor of two
>>>>>>>>>>>> between highest
>>>>>>>>>>>> >> and lowest rates for each ISP. Did MyRepublic just get lucky
>>>>>>>>>>>> in my
>>>>>>>>>>>> >> tests? Or is there something systematic behind this? Way too
>>>>>>>>>>>> few tests
>>>>>>>>>>>> >> to tell.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> What these tests do is establish a ballpark.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> I'm currently repeating tests with dish placed on a trestle
>>>>>>>>>>>> closer to
>>>>>>>>>>>> >> the heavens. This seems to have translated into fewer
>>>>>>>>>>>> outages / ping
>>>>>>>>>>>> >> losses (around 1/4 of what I had yesterday with dishy on the
>>>>>>>>>>>> ground on
>>>>>>>>>>>> >> my deck). Still good enough for a lengthy video Skype call
>>>>>>>>>>>> with my folks
>>>>>>>>>>>> >> in Germany, although they did comment about reduced video
>>>>>>>>>>>> quality. But
>>>>>>>>>>>> >> maybe that was the lighting or the different background as I
>>>>>>>>>>>> wasn't in
>>>>>>>>>>>> >> my usual spot with my laptop when I called them.
>>>>>>>>>>>> >
>>>>>>>>>>>> > Clear view of the sky is king for Starlink reliability. I've
>>>>>>>>>>>> got my dishy mounted on the back fence, looking up over an empty field, so
>>>>>>>>>>>> it's pretty much best-case scenario here.
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> --
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> ****************************************************************
>>>>>>>>>>>> >> Dr. Ulrich Speidel
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> School of Computer Science
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> Room 303S.594 (City Campus)
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> The University of Auckland
>>>>>>>>>>>> >> u.speidel@auckland.ac.nz
>>>>>>>>>>>> >> http://www.cs.auckland.ac.nz/~ulrich/
>>>>>>>>>>>> >>
>>>>>>>>>>>> ****************************************************************
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> _______________________________________________
>>>>>>>>>>>> >> Starlink mailing list
>>>>>>>>>>>> >> Starlink@lists.bufferbloat.net
>>>>>>>>>>>> >> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>>>> >
>>>>>>>>>>>> > _______________________________________________
>>>>>>>>>>>> > Starlink mailing list
>>>>>>>>>>>> > Starlink@lists.bufferbloat.net
>>>>>>>>>>>> > https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> This song goes out to all the folk that thought Stadia would
>>>>>>>>>>>> work:
>>>>>>>>>>>>
>>>>>>>>>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>>>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>>>>>>
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> Starlink mailing list
>>>>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> This song goes out to all the folk that thought Stadia would work:
>>>>>>>>
>>>>>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>>>>>> Dave Täht CEO, TekLibre, LLC
>>>>>>>> _______________________________________________
>>>>>>>> Starlink mailing list
>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>
>>>>>>> _______________________________________________
>>>>>> Starlink mailing list
>>>>>> Starlink@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>
>>>>> _______________________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>
>>>>
>>>>
>>>> --
>>>> This song goes out to all the folk that thought Stadia would work:
>>>>
>>>> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
>>>> Dave Täht CEO, TekLibre, LLC
>>>>
>>>
>
> --
> This song goes out to all the folk that thought Stadia would work:
>
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC
>
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
[-- Attachment #1.2: Type: text/html, Size: 27311 bytes --]
[-- Attachment #2: Screenshot 2023-01-13 at 12.29.15 PM.png --]
[-- Type: image/png, Size: 1380606 bytes --]
[-- Attachment #3: Screenshot 2023-01-13 at 1.30.03 PM.png --]
[-- Type: image/png, Size: 855840 bytes --]
[-- Attachment #4: image.png --]
[-- Type: image/png, Size: 296989 bytes --]
[-- Attachment #5: image.png --]
[-- Type: image/png, Size: 222655 bytes --]
[-- Attachment #6: Screenshot 2023-01-14 at 6.12.53 AM.png --]
[-- Type: image/png, Size: 921914 bytes --]
[-- Attachment #7: Screenshot 2023-01-14 at 6.13.50 AM.png --]
[-- Type: image/png, Size: 337863 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-05 21:19 ` Christoph Paasch
2023-01-05 22:01 ` Dave Taht
@ 2023-01-05 22:09 ` Sebastian Moeller
1 sibling, 0 replies; 49+ messages in thread
From: Sebastian Moeller @ 2023-01-05 22:09 UTC (permalink / raw)
To: Christoph Paasch; +Cc: David P. Reed, Dave Taht via Starlink
Hi Christoph,
> On Jan 5, 2023, at 22:19, Christoph Paasch via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> I think the cloudflare test uses a single TCP connection, while Ookla/Fast/… use multiple connections.
Indeed, cloudflare's test seems to use a single, preferably IPv6, connection, while Ookla and fast default to multiple parallel connections, but both can be configured for single-flow tests (while cloudflare's so far seems to be fixed on a single flow). IMHO both single- and multi-flow tests are helpful (but they should clearly report the number of flows actually used).
Packet captures are certainly helpful, but for a quick check something like `iftop -i wan` on an OpenWrt router already helps confirm the "flow-ness" of a test on-line.
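As a post-hoc check, a minimal Python sketch along these lines (assuming scapy is installed and "speedtest.pcap" is a capture taken during the test; IPv4 only for brevity) counts the distinct TCP flows a test actually used:

    from scapy.all import rdpcap, IP, TCP

    def count_tcp_flows(pcap_path):
        # Collect direction-insensitive TCP 4-tuples seen in the capture;
        # IPv6 flows would additionally need scapy's IPv6 layer.
        flows = set()
        for pkt in rdpcap(pcap_path):
            if pkt.haslayer(IP) and pkt.haslayer(TCP):
                a = (pkt[IP].src, pkt[TCP].sport)
                b = (pkt[IP].dst, pkt[TCP].dport)
                flows.add(frozenset((a, b)))
        return len(flows)

    # A single-flow test should print 1 here; an Ookla-style
    # multi-flow test, something larger:
    print(count_tcp_flows("speedtest.pcap"))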
Regards
Sebastian
>
> That’s probably the difference for you.
>
>
> Christoph
>
>> On Jan 4, 2023, at 2:21 PM, David P. Reed via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>> [...]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-05 21:19 ` Christoph Paasch
@ 2023-01-05 22:01 ` Dave Taht
2023-01-05 22:09 ` Sebastian Moeller
1 sibling, 0 replies; 49+ messages in thread
From: Dave Taht @ 2023-01-05 22:01 UTC (permalink / raw)
To: Christoph Paasch; +Cc: David P. Reed, Dave Taht via Starlink
I am going to start heavily recommending more folk take packet captures.
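(Something like `tcpdump -i eth0 -s 128 -w test.pcap`, with the interface and
filename adjusted to your setup, grabs enough of each packet's headers for
this kind of analysis without storing full payloads.)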
On Thu, Jan 5, 2023 at 1:19 PM Christoph Paasch via Starlink
<starlink@lists.bufferbloat.net> wrote:
>
> I think the cloudflare test uses a single TCP connection, while Ookla/Fast/… use multiple connections.
>
> That’s probably the difference for you.
>
>
> Christoph
>
> On Jan 4, 2023, at 2:21 PM, David P. Reed via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> [...]
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
2023-01-04 22:21 ` [Starlink] [Rpm] the grinch meets cloudflare's christmas present David P. Reed
@ 2023-01-05 21:19 ` Christoph Paasch
2023-01-05 22:01 ` Dave Taht
2023-01-05 22:09 ` Sebastian Moeller
0 siblings, 2 replies; 49+ messages in thread
From: Christoph Paasch @ 2023-01-05 21:19 UTC (permalink / raw)
To: David P. Reed; +Cc: Dave Taht via Starlink
[-- Attachment #1: Type: text/plain, Size: 7306 bytes --]
I think the cloudflare test uses a single TCP connection, while Ookla/Fast/… use multiple connections.
That’s probably the difference for you.
Christoph
> On Jan 4, 2023, at 2:21 PM, David P. Reed via Starlink <starlink@lists.bufferbloat.net> wrote:
>
> [...]
[-- Attachment #2: Type: text/html, Size: 10618 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
@ 2023-01-05 10:07 David Fernández
0 siblings, 0 replies; 49+ messages in thread
From: David Fernández @ 2023-01-05 10:07 UTC (permalink / raw)
To: starlink
When a user refers to "gigs", they may also be talking about the monthly
data cap volume of their cell phone or mobile subscription.
Users are mostly not aware of what can be done with those "gigs". You
understand what you get when you buy minutes of phone calls or a
certain number of SMS, but "gigs" are consumed in a way the user
mostly does not understand. Sometimes they are translated into a volume
of WhatsApp messages, pictures, or videos (at a certain resolution, DVD
quality, for example), so people understand a bit better what they are
paying for.
Language is ambiguous. Shannon refers to data rate in bit/s as
communication channel capacity, not speed, but speed in km/h is an
analogy for information "running" in bit/s that people may
understand better.
Then there are the people who ask how much a file "weighs", referring
to the number of bytes it has, as if information had a kind of
inertial mass.
There is technical language, and then there is the language of
non-technical people. Two worlds apart, sometimes.
Regards,
David
> Date: Wed, 4 Jan 2023 19:11:28 -0800
> From: "Dick Roy" <dickroy@alum.mit.edu>
> To: "'Dave Collier-Brown'" <dave.collier-Brown@indexexchange.com>
> Cc: <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas
> present
> Message-ID: <15EBCC5BF2474AAB82C050259229B5FB@SRA6>
> Content-Type: text/plain; charset="iso-8859-1"
>
>
>
>
>
> _____
>
> From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of
> Dave Collier-Brown via Starlink
> Sent: Wednesday, January 4, 2023 6:48 PM
> To: starlink@lists.bufferbloat.net
> Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas
> present
>
>
>
> I think using "speed" for "the inverse of delay" is pretty normal English,
> if technically erroneous when speaking nerd or physicist.
>
> [RR] I’ve not heard of that usage before. The units aren’t commensurate
> either.
>
> Using it for volume? Arguably more like fraudulent...
>
> [RR] I don’t think that was Bob’s intent. I think “load volume” was meant
> to be a metaphor for “number of bits/bytes” being transported (“by the
> semi”).
>
> That said, aren’t users these days educated on “gigs”, which they intuitively
> understand to be Gigabits per second (or Gbps)? Oddly enough, that is an
> expression of “data/information/communication rate” in the appropriate units,
> with the nominally technically correct meaning.
>
> RR
>
> --dave
>
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas present
[not found] <mailman.2678.1672860038.1281.starlink@lists.bufferbloat.net>
@ 2023-01-04 22:21 ` David P. Reed
2023-01-05 21:19 ` Christoph Paasch
0 siblings, 1 reply; 49+ messages in thread
From: David P. Reed @ 2023-01-04 22:21 UTC (permalink / raw)
To: starlink
[-- Attachment #1: Type: text/plain, Size: 6557 bytes --]
I don't know how to debug this, but the cloudflare speed test really sucks on my home wired network, compared to others. I also discovered, to my chagrin, that the DSLReports speedtest is now broken, at least in my Chrome browser.
The speeds measured by Cloudflare are essentially 1/2 of both Ookla and Fast.com.
Cloudflare in my test config gives ~500/10 Mb/sec, with a download latency of 14.6 msec and an upload latency of 11.6 msec.
Ookla and Fast give ~980/25 Mb/sec, with "latency" being about 11 or 12 msec.
This difference is observed over a tuned "cake" install on my Linux router; the home network is 10 GigE from the router to my workstation, and the router talks to my DOCSIS cable modem on RCN at 1 GigE, with RCN's product offering being its "Gig" product.
Now, I hate using Ookla and getting bombarded with ads! I don't really trust fast.com, but it used to be OK.
The real disappointment was that the DSL Reports speed test, which I used to recommend, is now so broken. It claims it can't reach any of its 3 selected test sites, because I may have an "alien script", and suggests my DNS might be "slow" or my Chrome might have browser malware.
Now, maybe Chrome has browser malware on my machine. This is troubling to a serious degree to me, and I will be investigating.
However, Cloudflare seems to be somewhat flaky to a significant degree. (It also doesn't seem to push a fast network connection nearly hard enough to measure lag under load, it seems to me.)
On Wednesday, January 4, 2023 2:20pm, starlink-request@lists.bufferbloat.net said:
> [...]
[-- Attachment #2: Type: text/html, Size: 10005 bytes --]
^ permalink raw reply [flat|nested] 49+ messages in thread
end of thread, other threads:[~2023-01-14 16:33 UTC | newest]
Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-04 17:26 [Starlink] the grinch meets cloudflare's christmas present Dave Taht
2023-01-04 19:20 ` [Starlink] [Rpm] " jf
2023-01-04 20:02 ` rjmcmahon
2023-01-04 21:16 ` Ulrich Speidel
2023-01-04 23:54 ` Bruce Perens
2023-01-05 2:48 ` Dave Collier-Brown
2023-01-05 3:11 ` Dick Roy
2023-01-05 11:25 ` Sebastian Moeller
2023-01-06 0:01 ` Dick Roy
2023-01-06 9:43 ` Sebastian Moeller
2023-01-05 6:11 ` rjmcmahon
2023-01-05 11:11 ` [Starlink] [Bloat] " Sebastian Moeller
2023-01-06 16:38 ` [Starlink] [LibreQoS] " MORTON JR., AL
2023-01-06 20:38 ` [Starlink] [Rpm] " rjmcmahon
2023-01-06 20:47 ` rjmcmahon
[not found] ` <89D796E75967416B9723211C183A8396@SRA6>
[not found] ` <a082b2436e6ba7892d2de8e0dfcc5acd@rjmcmahon.com>
[not found] ` <3696AEA5409D4303ABCBC439727A5E40@SRA6>
[not found] ` <CAKJdXWBb0VxSSoGAQTe3BXFLXCHd6NSspRnXd1frK2f66SLiUw@mail.gmail.com>
[not found] ` <CAA93jw6B_9-WE9EEFuac+FAH-2dcULk=_3i_HfhCSVSOxyM7Eg@mail.gmail.com>
[not found] ` <CA+Ld8r8hR8KF35Yv7A3hb1QvC9v9ka2Nh2J=HEm0XhPfvAAcag@mail.gmail.com>
[not found] ` <CAKJdXWC+aEy1b3vB-FFd+tnfT+Ni5O9bZ+p4kkhj-FzMPVGGcQ@mail.gmail.com>
[not found] ` <CAA93jw4DcBhA8CevRQoMbzjO-3Jt+emr+xvnJ-hUGkT+n0KJzg@mail.gmail.com>
[not found] ` <CH0PR02MB79800FF2E40CE037D6802D71D3FD9@CH0PR02MB7980.namprd02.prod.outlook.com>
[not found] ` <CAKJdXWDOFbzsam2C_24e9DLkc18ed4uhV51hOKVjDipk1Uhc2g@mail.gmail.com>
2023-01-13 4:08 ` [Starlink] insanely great waveform result for starlink Dave Taht
2023-01-13 4:26 ` Jonathan Bennett
2023-01-13 5:13 ` Ulrich Speidel
2023-01-13 5:25 ` Dave Taht
2023-01-13 12:27 ` Ulrich Speidel
2023-01-13 17:02 ` Jonathan Bennett
2023-01-13 17:26 ` Dave Taht
2023-01-13 17:41 ` Jonathan Bennett
2023-01-13 18:09 ` Nathan Owens
2023-01-13 20:30 ` Nathan Owens
2023-01-13 20:37 ` Dave Taht
2023-01-13 21:24 ` Nathan Owens
2023-01-13 20:49 ` Dave Taht
2023-01-13 21:25 ` Luis A. Cornejo
2023-01-13 21:30 ` Nathan Owens
2023-01-13 22:09 ` Jonathan Bennett
2023-01-13 22:30 ` Luis A. Cornejo
2023-01-13 22:32 ` Dave Taht
2023-01-13 22:36 ` Luis A. Cornejo
2023-01-13 22:42 ` Jonathan Bennett
2023-01-13 22:49 ` Dave Taht
2023-01-13 23:44 ` Dave Taht
[not found] ` <CALjsLJv5cbfHfkxqHnbjxoVHczspYvxc_jrshzs1CpHLEDWyew@mail.gmail.com>
2023-01-14 14:20 ` Nathan Owens
2023-01-14 15:53 ` Dave Taht
2023-01-14 16:33 ` Dave Taht
2023-01-14 15:52 ` Dave Taht
2023-01-13 20:25 ` Pat Jensen
2023-01-13 20:40 ` Dave Taht
2023-01-13 22:51 ` Ulrich Speidel
[not found] <mailman.2678.1672860038.1281.starlink@lists.bufferbloat.net>
2023-01-04 22:21 ` [Starlink] [Rpm] the grinch meets cloudflare's christmas present David P. Reed
2023-01-05 21:19 ` Christoph Paasch
2023-01-05 22:01 ` Dave Taht
2023-01-05 22:09 ` Sebastian Moeller
2023-01-05 10:07 David Fernández
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox