[Starlink] Starlink hidden buffers

Ulrich Speidel u.speidel at auckland.ac.nz
Sun May 14 22:41:58 EDT 2023


On 14/05/2023 9:00 pm, David Lang wrote:
> On Sun, 14 May 2023, Ulrich Speidel wrote:
>
> >> I just discovered that someone is manufacturing an adapter so you no
> >> longer have to cut the cable
> >>
> >> https://www.amazon.com/YAOSHENG-Rectangular-Adapter-Connect-Injector/dp/B0BYJTHX4P
> >>
> > I'll see whether I can get hold of one of these. Cutting a cable on a
> > university IT asset as an academic is not allowed here, except if it
> > doesn't meet electrical safety standards.
> >
> > Alternatively, has anyone tried the standard Starlink Ethernet adapter
> > with a PoE injector instead of the WiFi box? The adapter above seems to
> > be like the Starlink one (which also inserts into the cable between
> > Dishy and router).
>
> that connects you to a 2nd Ethernet port on the router, not on the dishy
>
> I just ordered one of those adapters, it will take a few weeks to arrive.
How do we know that the Amazon version doesn't do the same?
>
> >> > Put another way: If you have a protocol (TCP) that is designed to
> >> > reasonably expect that its current cwnd is OK to use for now, and it
> >> > is put into a situation where there are relatively frequent, huge and
> >> > lasting step changes in available BDP within subsecond periods, are
> >> > your underlying assumptions still valid?
> >>
> >> I think that with interference from other APs, WiFi suffers at least
> >> as much from unpredictable changes to the available bandwidth.
>
> > Really? I'm thinking stuff like the sudden addition of packets from
> > potentially dozens of TCP flows with large cwnds?
>
> vs losing 90% of your available bandwidth to interference?? I think
> it's going to be a similar problem
Hm. Not convinced, but I take your point...
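To put rough numbers on the step-change concern above (a back-of-envelope
sketch in Python, all values made up for illustration - these are not
measured Starlink figures):

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product in bytes."""
    return bandwidth_bps * rtt_s / 8

rtt = 0.040                      # assume a 40 ms base RTT
before = bdp_bytes(100e6, rtt)   # flow sized its cwnd for 100 Mbit/s
after = bdp_bytes(20e6, rtt)     # cell handed to a busier satellite: 20 Mbit/s

cwnd = before                    # TCP still believes the old BDP
excess = cwnd - after            # bytes that now sit in a queue somewhere
print(f"standing queue: {excess / 1e3:.0f} kB = "
      f"{excess * 8 / 20e6 * 1e3:.0f} ms extra delay at 20 Mbit/s")
# -> standing queue: 400 kB = 160 ms extra delay at 20 Mbit/s

Until the sender's congestion controller notices, that excess sits in
whatever buffer is in front of the new bottleneck.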
>
> >>
> >> > I suspect they're handing over whole cells, not individual users,
> >> > at a time.
> >>
> >> I would guess the same (remember, in spite of them having launched
> >> more than 4000 satellites, this is still the early days, with the
> >> network changing as more launch)
> >>
> >> We've seen that it seems that there is only one satellite serving any
> >> cell at any one time.
>
> > But the reverse is almost certainly not true: Each satellite must serve
> > multiple cells.
>
> true, but while the satellite over a given area will change, the usage
> in that area isn't changing that much
Exactly. But your underlying queue sits on the satellite, not in the area.
>
> >> But remember that the system does know how much usage there is in the
> >> cell before they do the handoff. It's unknown if they do anything with
> >> that, or if they are just relaying based on geography. We also don't
> >> know what the bandwidth to the ground stations is compared to the dishy.
>
> > Well, we do know for NZ, sort of, based on the licences Starlink has
> > here.
>
> what is the ground station bandwidth?

https://rrf.rsm.govt.nz/ui/search/licence - search for "Starlink"

...all NZ licences in all their glory. Looking at Starlink SES 
(satellite earth station) TX (which is the interesting direction I guess):

- Awarua, Puwera, Hinds, Clevedon, Cromwell, Te Hana: 29750.000000 MHz TX (BW = 500 MHz)
- Awarua, Puwera, Hinds, Clevedon, Cromwell, Te Hana: 28850.000000 MHz TX (BW = 500 MHz)
- Awarua, Puwera, Hinds, Clevedon, Cromwell, Te Hana: 28350.000000 MHz TX (BW = 500 MHz)
- Awarua, Puwera, Hinds, Clevedon, Cromwell, Te Hana: 28250.000000 MHz TX (BW = 500 MHz)
- Awarua, Puwera, Hinds, Clevedon, Cromwell, Te Hana: 27750.000000 MHz TX (BW = 500 MHz)

So 2.5 GHz up, licensed from 6 ground stations. Now I'm not convinced
that they would use all of those from all locations simultaneously
because of the risk of off-beam interference. They'll all be
transmitting south, ballpark. If there were full re-use at all ground
stations, we'd be looking at 15 GHz. If they are able to re-use on all
antennas at each ground station, then we're looking at 9 golf balls each
in Puwera, Te Hana, Clevedon, Hinds and Cromwell, and an unknown number
at Awarua. Assuming 9 there, we'd be looking at 135 GHz all up, max.
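For anyone wanting to check the arithmetic, a quick Python sketch (the
9-antennas-per-station count is an assumption, as noted above):

channels_per_station = 5    # five 500 MHz TX licences per site
channel_bw_ghz = 0.5        # 500 MHz each
stations = 6                # Awarua, Puwera, Hinds, Clevedon, Cromwell, Te Hana
antennas_per_station = 9    # assumed "golf balls" per site

per_station = channels_per_station * channel_bw_ghz    # 2.5 GHz per site
full_reuse = per_station * stations                    # 15 GHz across sites
per_antenna = full_reuse * antennas_per_station        # 135 GHz upper bound
print(per_station, full_reuse, per_antenna)            # 2.5 15.0 135.0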

Awarua and Cromwell are 175 km apart, Hinds another 220 km from 
Cromwell, then it's a hop of about 830 km to Clevedon, and from there 
another 100 km to Te Hana, which is another 53 km from Puwera, so 
keeping them all out of each other's hair all the time might be a bit 
difficult.

Lots of other interesting info in the licences, such as EIRP, in case
you're wanting to do link budgets.
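If you do want to play with link budgets, a minimal free-space sketch
(the EIRP and gain figures below are placeholders for illustration, not
values taken from the licences):

import math

def fspl_db(distance_km, freq_ghz):
    """Free-space path loss in dB (distance in km, frequency in GHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

eirp_dbw = 50.0         # placeholder ground station EIRP - use the licence value
rx_gain_dbi = 35.0      # assumed satellite receive antenna gain
slant_range_km = 800.0  # e.g. a pass well off zenith for a ~550 km shell
freq_ghz = 29.75        # one of the licensed TX centre frequencies

rx_power_dbw = eirp_dbw + rx_gain_dbi - fspl_db(slant_range_km, freq_ghz)
print(f"FSPL = {fspl_db(slant_range_km, freq_ghz):.1f} dB, "
      f"Rx = {rx_power_dbw:.1f} dBW")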

>
> >> And remember that for every cell that a satellite takes over, it's
> >> also giving away one cell at the same time.
>
> > Yes, except that some cells may have no users in them and some of them
> > have a lot (think of a satellite flying into range of California from
> > the Pacific, dropping over-the-water cells and acquiring land-based
> > ones).
>
> >> I'm not saying that the problem is trivial, but just that it's not
> >> unique
>
> > What makes me suspicious here that it's not the usual bufferbloat
> > problem is this: With conventional bufferbloat and FIFOs, you'd expect
> > standing queues, right? With Starlink, we see the queues emptying
> > relatively occasionally, with RTTs in the low 20 ms, and in some cases
> > even under 20 ms - with large ping packets (1500 bytes).
>
> it's not directly a bufferbloat problem, bufferbloat is a side effect
> (at most)
>
> we know that the available Starlink bandwidth is chopped into timeslots
> (sorry, don't remember how many), and I could see the possibility of
> there being the same number of timeslots down to the ground station as
> up from the dishies, and if the bottleneck is at the uplink from the
> ground station, then things would queue there.
>
> As latency changes, figuring out if it's extra distance that must be
> traveled, or buffering, is hard. Does the latency stay roughly the same
> until the next satellite change? Or does it taper off?
Good question. You would expect step changes in physical latency between 
satellites, but also gradual change related to satellite movement. Plus 
of course any rubble thrown into any queue by something suddenly turning 
up on that path. Don't forget that it's not just cells now, we're also 
talking up- and downlink for the laser ISLs, at least in some places.
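To put rough numbers on the gradual component (a sketch assuming a
550 km shell and spherical geometry - illustrative, not fitted to our
measurements):

import math

R = 6371.0      # Earth radius, km
H = 550.0       # assumed shell altitude, km
C = 299792.458  # speed of light in vacuum, km/s

def slant_range_km(elevation_deg):
    """Distance from terminal to satellite at a given elevation angle."""
    el = math.radians(elevation_deg)
    return math.sqrt((R + H)**2 - (R * math.cos(el))**2) - R * math.sin(el)

for el in (90, 60, 40, 25):
    d = slant_range_km(el)
    print(f"elevation {el:2d} deg: slant range {d:5.0f} km, "
          f"one-way {d / C * 1e3:.2f} ms")

So the physical path alone drifts by a couple of ms within a pass (about
550 km / 1.8 ms at zenith vs ~1100 km / ~3.8 ms at 25 degrees), while a
satellite switch shows up as a step.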
>
> If it stays the same, I would suspect that you are actually hitting a
> different ground station and there is a VPN backhaul to your egress
> point to the regular Internet (which doesn't support mobile IP
> addresses) for that cycle. If it tapers off, then I could buy
> bufferbloat that gets resolved as TCP backs off.

Yes, quite. Sorting out which part of your latency is what is the
million-dollar question here...

We saw significant RTT changes here during the recent cyclone, over
periods of several hours, and these came in steps (see the attached
plot), with the initial change being a downward one. Averages here are
over 60 pings (the time scale isn't 100% true, as we used "one ping, one
second" timing).

[Attachment: gabrielle_rtt_pl.png - RTT averages over time during Cyclone Gabrielle]
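For reference, the averaging works roughly like this (a minimal sketch
with synthetic data - not our actual measurement script):

from statistics import mean

def windowed_averages(rtts_ms, window=60):
    """Non-overlapping window averages over per-second RTT samples."""
    return [mean(rtts_ms[i:i + window])
            for i in range(0, len(rtts_ms) - window + 1, window)]

# Synthetic downward step like the one in the plot:
samples = [45.0] * 180 + [22.0] * 180   # 3 min at 45 ms, then 3 min at 22 ms
print(windowed_averages(samples))       # [45.0, 45.0, 45.0, 22.0, 22.0, 22.0]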


We're still not sure whether to attribute this to load changes or ground
station changes. There were a lot of power outages, especially in
Auckland's lifestyle block belt, which teems with Starlink users, but
all three North Island ground stations were also in areas affected by
power outages (although the power companies concerned don't provide the
level of detail needed to establish whether the stations themselves were
affected, and it's also not clear what, if any, backup power
arrangements they have).

At ~25 ms, though, the step changes in RTT are too large to be the
result of a switch in ground stations - the path differences just aren't
that large. You'd also expect a ground station outage to result in
longer RTTs, not shorter ones, if you need to re-route via another
ground station. One explanation might be users getting cut off if they
relied on one particular ground station for bent-pipe ops - but that
would not explain an effect of this order of magnitude, as I'd expect
that number to be small. So maybe power outages at the user end after
all. But that would then tell us that these are load-dependent queuing
delays. Moreover, since those load changes wouldn't have involved the
router at our site, we can conclude that these are queue sojourn times
in the Starlink network.
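For a sense of scale, the extra path that a 25 ms RTT step would imply
if it were purely propagation:

C = 299792.458  # speed of light in vacuum, km/s

rtt_step_s = 0.025                      # observed ~25 ms step
one_way_extra_km = C * rtt_step_s / 2   # extra one-way path it would imply
print(f"{one_way_extra_km:.0f} km")     # ~3747 km of extra one-way path

The NZ ground stations span at most ~1400 km end to end, so a ground
station switch alone can't plausibly account for a step that size.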

>
> my main point in replying several messages ago was to point out other
> scenarios where the load changes rapidly and/or the available bandwidth
> changes rapidly. And you are correct that it is generally not handled
> well by common equipment.
>
> I think that active queue management on the sending side of the
> bottleneck will handle it fairly well. It doesn't have to do
> calculations based on what the bandwidth is, it just needs to know what
> it has pending to go out.
Understood - but your customer for AQM is the sending TCP client, and
there are two questions here: (a) does your AQM handle rapid load
changes, and (b) how do your TCP clients actually respond to your AQM's
handling?
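To make (a) concrete, here's a minimal sojourn-time-based sketch in the
spirit of CoDel - it reacts to queueing without ever knowing the link
rate (a simplified illustration, not anyone's actual implementation):

TARGET_S = 0.005     # acceptable queue sojourn time: 5 ms
INTERVAL_S = 0.100   # act only on delay sustained for 100 ms

class SojournAqm:
    def __init__(self):
        self.above_since = None  # when sojourn time first exceeded TARGET_S

    def on_dequeue(self, enqueue_ts, now):
        """Return True if this packet should be dropped/ECN-marked."""
        sojourn = now - enqueue_ts
        if sojourn < TARGET_S:
            self.above_since = None   # queue drained below target: all good
            return False
        if self.above_since is None:
            self.above_since = now    # start timing the standing queue
            return False
        # Standing queue has persisted: signal the sender to back off.
        return now - self.above_since >= INTERVAL_S

The nice property is that sojourn time is locally observable whatever
the instantaneous link rate is - which is your point about only needing
to know what's pending. Question (b) then falls to the senders'
congestion controllers.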
>
> David Lang

-- 
****************************************************************
Dr. Ulrich Speidel
School of Computer Science
Room 303S.594 (City Campus)
The University of Auckland
u.speidel at auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/
****************************************************************

