[Starlink] insanely great waveform result for starlink
Luis A. Cornejo
luis.a.cornejo at gmail.com
Fri Jan 13 17:36:54 EST 2023
I ran it with:

    flent -H dallas.starlink.taht.net -t starlink_vs_irtt --step-size=.05 \
        --socket-stats --test-parameter=upload_streams=4 tcp_nup
On Fri, Jan 13, 2023 at 4:33 PM Dave Taht via Starlink <
starlink at lists.bufferbloat.net> wrote:
> Thank you all. In both of the flent cases there appear to be no tcp_rtt
> statistics; did you run with --socket-stats?
>
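> A quick way to check, if you still have the data files: a minimal sketch
> that peeks inside a .flent.gz file and flags any series carrying RTT-ish
> per-sample keys. It assumes the file is gzipped JSON with "results" and
> "raw_values" sections keyed by series name, which is my recollection of the
> format, so adjust the key names if your flent version differs.
>
> # Sketch (not flent's own tooling): list the series in a .flent.gz file and
> # flag any per-sample keys that look like TCP RTT. The "results"/"raw_values"
> # key names are assumptions about the file format; print and adjust as needed.
> import gzip, json, sys
>
> with gzip.open(sys.argv[1], "rt") as f:
>     data = json.load(f)
>
> print("series:", sorted(data.get("results", {})))
> for series, samples in data.get("raw_values", {}).items():
>     keys = set()
>     for sample in samples:
>         if isinstance(sample, dict):
>             keys.update(sample)
>     if any("rtt" in k.lower() for k in keys):
>         print(series, "has per-sample keys:", sorted(keys))
>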
> (That seems to be a new bug with sampling correctly, and it's either in
> newer Linux kernels or in flent itself. I hate to ask ya, but could you
> install the git version of flent?)
>
> Thank you for the packet capture!!!! I'm still downloading.
>
> Anyway, the "celabon" flent plot clearly shows that the inverse
> relationship between latency and throughput is still present on this
> Starlink terminal, so there is no AQM in play there, darn it. (In my
> fq_codeled world the latency stays flat and only the throughput changes.)
>
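> One rough way to put a number on that inverse relationship, straight from
> the data file: a sketch assuming the gzipped JSON exposes a latency series
> and per-stream upload series under "results" (the series names below are
> hypothetical; list the keys first and substitute whatever your test
> actually produced).
>
> # Sketch: correlate latency samples against summed upload throughput from a
> # .flent.gz file. The series names are hypothetical; inspect data["results"]
> # and substitute the names your test actually produced.
> import gzip, json, sys
> from statistics import correlation  # Python 3.10+
>
> with gzip.open(sys.argv[1], "rt") as f:
>     results = json.load(f)["results"]
>
> latency = results["Ping (ms) ICMP"]  # hypothetical series name
> uploads = [v for k, v in results.items() if k.startswith("TCP upload")]
> pairs = [(lat, sum(u[i] for u in uploads if u[i] is not None))
>          for i, lat in enumerate(latency) if lat is not None]
> lat_vals, tput_vals = zip(*pairs)
> print("latency vs throughput correlation:", correlation(lat_vals, tput_vals))
>
> It's just a single number to put next to the plots, but it makes runs easy
> to compare.
>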
> So I am now incredibly puzzled by the ostensibly awesome Waveform test
> result (and need to look at that capture, and/or get TCP RTT stats).
>
> The other plot (Luis's) shows incredibly consistent latency and throughput
> bounded at about 6 Mbit/s.
>
> Patiently awaiting that download to complete.
>
>
> On Fri, Jan 13, 2023 at 2:10 PM Jonathan Bennett via Starlink <
> starlink at lists.bufferbloat.net> wrote:
>
>> The irtt run finished a few seconds before the flent run, but here are
>> the results:
>>
>>
>> https://drive.google.com/file/d/1FKve13ssUMW1LLWOXLM2931Yx6uMHw8K/view?usp=share_link
>>
>> https://drive.google.com/file/d/1ZXd64A0pfUedLr3FyhDNTHA7vxv8S2Gk/view?usp=share_link
>>
>> https://drive.google.com/file/d/1rx64UPQHHz3IMNiJtb1oFqtqw2DvhvEE/view?usp=share_link
>>
>>
>> [image: image.png]
>> [image: image.png]
>>
>>
>> Jonathan Bennett
>> Hackaday.com
>>
>>
>> On Fri, Jan 13, 2023 at 3:30 PM Nathan Owens via Starlink <
>> starlink at lists.bufferbloat.net> wrote:
>>
>>> Here's Luis's run -- the top line below the edge of the graph is 200ms
>>> [image: Screenshot 2023-01-13 at 1.30.03 PM.png]
>>>
>>>
>>> On Fri, Jan 13, 2023 at 1:25 PM Luis A. Cornejo <
>>> luis.a.cornejo at gmail.com> wrote:
>>>
>>>> Dave,
>>>>
>>>> Here is a run the way I think you wanted it.
>>>>
>>>> irtt ran for 5 min to your Dallas server, followed by a Waveform test,
>>>> then a few seconds of inactivity, a Cloudflare test, a few more seconds
>>>> of nothing, and a flent test to Dallas. The packet capture is currently
>>>> uploading (it will be done in 20 min or so); the irtt JSON is also in
>>>> there (.zip file):
>>>>
>>>>
>>>> https://drive.google.com/drive/folders/1FLWqrzNcM8aK-ZXQywNkZGFR81Fnzn-F?usp=share_link
>>>>
>>>> -Luis
>>>>
>>>> On Fri, Jan 13, 2023 at 2:50 PM Dave Taht via Starlink <
>>>> starlink at lists.bufferbloat.net> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Fri, Jan 13, 2023 at 12:30 PM Nathan Owens <nathan at nathan.io>
>>>>> wrote:
>>>>>
>>>>>> Here's the data visualization for Jonathan's data
>>>>>>
>>>>>> [image: Screenshot 2023-01-13 at 12.29.15 PM.png]
>>>>>>
>>>>>> You can see the path changes at :12, :27, :42, and :57 after the
>>>>>> minute. Some paths are clearly busier than others, with increased
>>>>>> loss, latency, and jitter.
>>>>>>
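>>>>>> For anyone who wants the same per-path breakdown without plotting, here
>>>>>> is a rough sketch that buckets an irtt JSON run into the 15-second
>>>>>> intervals bounded at :12/:27/:42/:57 and prints loss, mean RTT, and
>>>>>> jitter per interval. The field names (round_trips, lost, timestamps,
>>>>>> delay) are how I read irtt's output format; adjust if your file differs.
>>>>>>
>>>>>> # Sketch: per-15s-interval summary of an irtt JSON run (interval edges
>>>>>> # at :12, :27, :42, :57 past the minute). Field names are assumptions
>>>>>> # about the irtt output format; adjust to match your file.
>>>>>> import json, sys
>>>>>> from collections import defaultdict
>>>>>> from statistics import mean, pstdev
>>>>>>
>>>>>> with open(sys.argv[1]) as f:
>>>>>>     trips = json.load(f)["round_trips"]
>>>>>>
>>>>>> buckets = defaultdict(lambda: {"sent": 0, "lost": 0, "rtt_ms": []})
>>>>>> for t in trips:
>>>>>>     wall_s = t["timestamps"]["client"]["send"]["wall"] / 1e9  # ns -> s
>>>>>>     b = buckets[int((wall_s - 12) // 15)]  # edges at :12/:27/:42/:57
>>>>>>     b["sent"] += 1
>>>>>>     if t["lost"] != "false":
>>>>>>         b["lost"] += 1
>>>>>>     else:
>>>>>>         b["rtt_ms"].append(t["delay"]["rtt"] / 1e6)
>>>>>>
>>>>>> for idx in sorted(buckets):
>>>>>>     b = buckets[idx]
>>>>>>     rtts = b["rtt_ms"] or [float("nan")]
>>>>>>     print(f"{idx}: {b['sent']:4d} sent  {100 * b['lost'] / b['sent']:5.1f}% lost"
>>>>>>           f"  mean {mean(rtts):6.1f} ms  jitter {pstdev(rtts):5.1f} ms")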
>>>>>
>>>>> I am so glad to see loss and bounded delay here. A bit more rigor
>>>>> regarding what traffic was active locally vs elsewhere on the path
>>>>> would also be nice, although the pattern seems to line up with the
>>>>> known 15s Starlink switchover thing (we need a name for this). In this
>>>>> case, doing a few speedtests while that irtt is running would show the
>>>>> impact(s) of whatever else they are up to.
>>>>>
>>>>> It would also be my hope that the loss distribution in the middle
>>>>> portion of this data is good, not bursty, but we don't have a tool to
>>>>> take that apart. (I am so hopeless at JSON.)
>>>>>
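>>>>> For anyone less hopeless at JSON than me, a minimal sketch of exactly
>>>>> that: pull the loss run lengths out of the irtt JSON so we can see
>>>>> whether the losses are spread out or bursty. It assumes the round_trips
>>>>> entries carry a seqno and a lost string ("false" when the reply came
>>>>> back), which is how I read irtt's output; adjust if needed.
>>>>>
>>>>> # Sketch: loss run-length distribution from an irtt JSON file. The
>>>>> # "round_trips"/"seqno"/"lost" field names are assumptions about the
>>>>> # irtt output format; adjust to match your file.
>>>>> import json, sys
>>>>> from collections import Counter
>>>>>
>>>>> with open(sys.argv[1]) as f:
>>>>>     trips = json.load(f)["round_trips"]
>>>>>
>>>>> runs = Counter()  # burst length -> number of bursts of that length
>>>>> current = 0
>>>>> for t in sorted(trips, key=lambda t: t["seqno"]):
>>>>>     if t["lost"] != "false":
>>>>>         current += 1
>>>>>     elif current:
>>>>>         runs[current] += 1
>>>>>         current = 0
>>>>> if current:
>>>>>     runs[current] += 1
>>>>>
>>>>> lost = sum(length * count for length, count in runs.items())
>>>>> print(f"{lost} lost of {len(trips)} ({100 * lost / len(trips):.2f}%)")
>>>>> for length in sorted(runs):
>>>>>     print(f"  {runs[length]} burst(s) of {length} consecutive loss(es)")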
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Jan 13, 2023 at 10:09 AM Nathan Owens <nathan at nathan.io>
>>>>>> wrote:
>>>>>>
>>>>>>> I’ll run my visualization code on this result this afternoon and
>>>>>>> report back!
>>>>>>>
>>>>>>> On Fri, Jan 13, 2023 at 9:41 AM Jonathan Bennett via Starlink <
>>>>>>> starlink at lists.bufferbloat.net> wrote:
>>>>>>>
>>>>>>>> The irtt command, run with normal, light usage:
>>>>>>>> https://drive.google.com/file/d/1SiVCiUYnx7nDTxIVOY5w-z20S2O059rA/view?usp=share_link
>>>>>>>>
>>>>>>>> Jonathan Bennett
>>>>>>>> Hackaday.com
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Jan 13, 2023 at 11:26 AM Dave Taht <dave.taht at gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> packet caps would be nice... all this is very exciting news.
>>>>>>>>>
>>>>>>>>> I'd so love for one or more of y'all reporting such great uplink
>>>>>>>>> results nowadays to duplicate and re-plot the original irtt tests we
>>>>>>>>> did:
>>>>>>>>>
>>>>>>>>> irtt client -i3ms -d300s myclosestservertoyou.starlink.taht.net -o whatever.json
>>>>>>>>>
>>>>>>>>> They MUST have changed their scheduling to get such amazing uplink
>>>>>>>>> results, in addition to better queue management.
>>>>>>>>>
>>>>>>>>> (for the record, my servers are de, london, fremont, sydney,
>>>>>>>>> dallas,
>>>>>>>>> newark, atlanta, singapore, mumbai)
>>>>>>>>>
>>>>>>>>> There's an R script and a gnuplot script for plotting that output
>>>>>>>>> around here somewhere (I have largely put down the starlink project
>>>>>>>>> personally, loaning out mine); they went by on this list... I should
>>>>>>>>> have written a blog entry so I could find that stuff again.
>>>>>>>>>
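>>>>>>>>> In the meantime, here's a minimal stand-in (Python/matplotlib rather
>>>>>>>>> than R or gnuplot): plot RTT over time from the irtt JSON, with
>>>>>>>>> losses marked along the bottom. The field names follow my reading of
>>>>>>>>> irtt's output format; adjust if your file differs.
>>>>>>>>>
>>>>>>>>> # Sketch: RTT-over-time plot from an irtt JSON file, standing in for
>>>>>>>>> # the misplaced R/gnuplot scripts. Field names are assumptions about
>>>>>>>>> # the irtt output format; adjust to match your file.
>>>>>>>>> import json, sys
>>>>>>>>> import matplotlib.pyplot as plt
>>>>>>>>>
>>>>>>>>> with open(sys.argv[1]) as f:
>>>>>>>>>     trips = json.load(f)["round_trips"]
>>>>>>>>>
>>>>>>>>> t0 = trips[0]["timestamps"]["client"]["send"]["wall"]
>>>>>>>>> xs, ys, lost_xs = [], [], []
>>>>>>>>> for t in trips:
>>>>>>>>>     x = (t["timestamps"]["client"]["send"]["wall"] - t0) / 1e9  # s into run
>>>>>>>>>     if t["lost"] == "false":
>>>>>>>>>         xs.append(x)
>>>>>>>>>         ys.append(t["delay"]["rtt"] / 1e6)  # ns -> ms
>>>>>>>>>     else:
>>>>>>>>>         lost_xs.append(x)
>>>>>>>>>
>>>>>>>>> plt.plot(xs, ys, ".", markersize=2, label="RTT")
>>>>>>>>> plt.plot(lost_xs, [0] * len(lost_xs), "|", color="red", label="lost")
>>>>>>>>> plt.xlabel("seconds into run")
>>>>>>>>> plt.ylabel("RTT (ms)")
>>>>>>>>> plt.legend()
>>>>>>>>> plt.savefig("irtt_rtt.png", dpi=150)
>>>>>>>>>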
>>>>>>>>> On Fri, Jan 13, 2023 at 9:02 AM Jonathan Bennett via Starlink
>>>>>>>>> <starlink at lists.bufferbloat.net> wrote:
>>>>>>>>> >
>>>>>>>>> >
>>>>>>>>> > On Fri, Jan 13, 2023 at 6:28 AM Ulrich Speidel via Starlink <
>>>>>>>>> starlink at lists.bufferbloat.net> wrote:
>>>>>>>>> >>
>>>>>>>>> >> On 13/01/2023 6:13 pm, Ulrich Speidel wrote:
>>>>>>>>> >> >
>>>>>>>>> >> > From Auckland, New Zealand, using a roaming subscription, it
>>>>>>>>> >> > puts me in touch with a server 2000 km away. OK then:
>>>>>>>>> >> >
>>>>>>>>> >> >
>>>>>>>>> >> > IP address: nix six.
>>>>>>>>> >> >
>>>>>>>>> >> > My thoughts shall follow later.
>>>>>>>>> >>
>>>>>>>>> >> OK, so here we go.
>>>>>>>>> >>
>>>>>>>>> >> I'm always a bit skeptical when it comes to speed tests - they're
>>>>>>>>> >> really laden with so many caveats that it's not funny. I took our
>>>>>>>>> >> new work Starlink kit home in December to give it a try and the
>>>>>>>>> >> other day finally got around to setting it up. It's on a roaming
>>>>>>>>> >> subscription because our badly built-up campus really isn't ideal
>>>>>>>>> >> in terms of a clear view of the sky. Oh - and did I mention that I
>>>>>>>>> >> used the Starlink Ethernet adapter, not the WiFi?
>>>>>>>>> >>
>>>>>>>>> >> Caveat 1: Location, location. I live in a place where the best
>>>>>>>>> >> Starlink promises is about 1/3 of the data rate you can actually
>>>>>>>>> >> get from fibre to the home, at under half of Starlink's price.
>>>>>>>>> >> Read: There are few Starlink users around. I might be the only one
>>>>>>>>> >> in my suburb.
>>>>>>>>> >>
>>>>>>>>> >> Caveat 2: Auckland has three Starlink gateways close by: Clevedon
>>>>>>>>> >> (which is, at a stretch, day-trip cycling distance from here), Te
>>>>>>>>> >> Hana and Puwera, the most distant of the three and about 130 km
>>>>>>>>> >> away from me as the crow flies. Read: My dishy can use any
>>>>>>>>> >> satellite that any of these three can see, and then, depending on
>>>>>>>>> >> where I put it and how much of the southern sky it can see, maybe
>>>>>>>>> >> also the one in Hinds, 840 km away, although that is obviously
>>>>>>>>> >> stretching it a bit. Either way, that's plenty of options for my
>>>>>>>>> >> bits to travel without needing a lot of handovers. Why? Easy: If
>>>>>>>>> >> your nearest teleport is close by, then the set of satellites that
>>>>>>>>> >> the teleport can see and the set that you can see are almost the
>>>>>>>>> >> same, so you can essentially stick with the same satellite while
>>>>>>>>> >> it's in view for you, because it'll also be in view for the
>>>>>>>>> >> teleport. Pretty much any bird above you will do.
>>>>>>>>> >>
>>>>>>>>> >> And because I don't get a lot of competition from other users in
>>>>>>>>> >> my area vying for one of the few available satellites that can see
>>>>>>>>> >> both us and the teleport, this is about as good as it gets at 37S
>>>>>>>>> >> latitude. If I wanted it any better, I'd have to move a lot
>>>>>>>>> >> further south.
>>>>>>>>> >>
>>>>>>>>> >> It'd be interesting to hear from Jonathan what the availability
>>>>>>>>> >> of home broadband is like in the Dallas area. I note that it's at
>>>>>>>>> >> a lower latitude (33N) than Auckland, but the difference isn't
>>>>>>>>> >> huge. I notice two teleports each about 160 km away, which is also
>>>>>>>>> >> not too bad. I also note Starlink availability in the area is
>>>>>>>>> >> restricted at the moment - oversubscribed? But if Jonathan gets
>>>>>>>>> >> good data rates, then that means that competition for bird
>>>>>>>>> >> capacity can't be too bad - for whatever reason.
>>>>>>>>> >
>>>>>>>>> > I'm in Southwest Oklahoma, but Dallas is the nearby Starlink
>>>>>>>>> > gateway. In cities like Dallas, and in Lawton where I live, there
>>>>>>>>> > are good broadband options. But many people live outside the
>>>>>>>>> > cities, and their options are much worse. The low-density user
>>>>>>>>> > base in rural Oklahoma and Texas probably makes for ideal Starlink
>>>>>>>>> > conditions.
>>>>>>>>> >>
>>>>>>>>> >>
>>>>>>>>> >> Caveat 3: Backhaul. There isn't just one queue between me and
>>>>>>>>> >> whatever I talk to in terms of my communications. Traceroute shows
>>>>>>>>> >> about 10 hops between me and the University of Auckland via
>>>>>>>>> >> Starlink. That's 10 queues, not one. Many of them will have cross
>>>>>>>>> >> traffic. So it's a bit hard to tell where our packets really get
>>>>>>>>> >> to wait or where they get dropped. The insidious bit here is that
>>>>>>>>> >> a lot of them will be between 1 Gb/s and 10 Gb/s links, and with a
>>>>>>>>> >> bit of cross traffic, they can all turn into bottlenecks. This
>>>>>>>>> >> isn't like a narrowband GEO link of a few Mb/s where it's obvious
>>>>>>>>> >> where the dominant long latency bottleneck in your TCP
>>>>>>>>> >> connection's path is. Read: It's pretty hard to tell whether a
>>>>>>>>> >> drop in "speed" is due to a performance issue in the Starlink
>>>>>>>>> >> system or somewhere between Starlink's systems and the target
>>>>>>>>> >> system.
>>>>>>>>> >>
>>>>>>>>> >> I see RTTs here between 20 ms and 250 ms, where the physical
>>>>>>>>> >> latency should be under 15 ms. So there's clearly a bit of buffer
>>>>>>>>> >> here along the chain that occasionally fills up.
>>>>>>>>> >>
>>>>>>>>> >> Caveat 4: Handovers. Handover between birds and teleports is
>>>>>>>>> >> inevitably associated with a change in RTT and in most cases also
>>>>>>>>> >> available bandwidth. Plus your packets now arrive at a new queue
>>>>>>>>> >> on a new satellite while your TCP is still trying to respond to
>>>>>>>>> >> whatever it thought the queue on the previous bird was doing.
>>>>>>>>> >> Read: Whatever your cwnd is immediately after a handover, it's
>>>>>>>>> >> probably not what it should be.
>>>>>>>>> >>
>>>>>>>>> >> I ran a somewhat hamstrung (sky view restricted) set of four
>>>>>>>>> >> Ookla speedtest.net tests each to five local servers. Average
>>>>>>>>> >> upload rate was 13 Mb/s, average down 75.5 Mb/s. Upload to the
>>>>>>>>> >> server of the ISP that Starlink seems to be buying its local
>>>>>>>>> >> connectivity from (Vocus Group) varied between 3.04 and 14.38
>>>>>>>>> >> Mb/s, download between 23.33 and 52.22 Mb/s, with RTTs between 37
>>>>>>>>> >> and 56 ms not correlating well with the observed rates. In fact,
>>>>>>>>> >> they were the ISP with consistently the worst rates.
>>>>>>>>> >>
>>>>>>>>> >> Another ISP (MyRepublic) scored between 11.81 and 21.81 Mb/s up
>>>>>>>>> >> and between 106.5 and 183.8 Mb/s down, again with RTTs correlating
>>>>>>>>> >> poorly with rates. Average RTT was the same as for Vocus.
>>>>>>>>> >>
>>>>>>>>> >> Note the variation though: More or less a factor of two between
>>>>>>>>> >> highest and lowest rates for each ISP. Did MyRepublic just get
>>>>>>>>> >> lucky in my tests? Or is there something systematic behind this?
>>>>>>>>> >> Way too few tests to tell.
>>>>>>>>> >>
>>>>>>>>> >> What these tests do is establish a ballpark.
>>>>>>>>> >>
>>>>>>>>> >> I'm currently repeating tests with the dish placed on a trestle
>>>>>>>>> >> closer to the heavens. This seems to have translated into fewer
>>>>>>>>> >> outages / ping losses (around 1/4 of what I had yesterday with
>>>>>>>>> >> dishy on the ground on my deck). Still good enough for a lengthy
>>>>>>>>> >> video Skype call with my folks in Germany, although they did
>>>>>>>>> >> comment on reduced video quality. But maybe that was the lighting
>>>>>>>>> >> or the different background, as I wasn't in my usual spot with my
>>>>>>>>> >> laptop when I called them.
>>>>>>>>> >
>>>>>>>>> > Clear view of the sky is king for Starlink reliability. I've got
>>>>>>>>> > my dishy mounted on the back fence, looking up over an empty
>>>>>>>>> > field, so it's pretty much a best-case scenario here.
>>>>>>>>> >>
>>>>>>>>> >>
>>>>>>>>> >> --
>>>>>>>>> >>
>>>>>>>>> >> ****************************************************************
>>>>>>>>> >> Dr. Ulrich Speidel
>>>>>>>>> >>
>>>>>>>>> >> School of Computer Science
>>>>>>>>> >>
>>>>>>>>> >> Room 303S.594 (City Campus)
>>>>>>>>> >>
>>>>>>>>> >> The University of Auckland
>>>>>>>>> >> u.speidel at auckland.ac.nz
>>>>>>>>> >> http://www.cs.auckland.ac.nz/~ulrich/
>>>>>>>>> >> ****************************************************************
>>>>>>>>> >>
>>>>>>>>> >>
>>>>>>>>> >>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>
>
>
> --
> This song goes out to all the folk that thought Stadia would work:
>
> https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
> Dave Täht CEO, TekLibre, LLC