[Starlink] Starlink hidden buffers

Sebastian Moeller moeller0 at gmx.de
Mon May 15 02:36:08 EDT 2023


Hi David,

Please see [SM] below.

> On May 15, 2023, at 05:33, David Lang via Starlink <starlink at lists.bufferbloat.net> wrote:
> 
> On Mon, 15 May 2023, Ulrich Speidel wrote:
> 
>> On 14/05/2023 9:00 pm, David Lang wrote:
>>> On Sun, 14 May 2023, Ulrich Speidel wrote:
>>>>> I just discovered that someone is manufacturing an adapter so you no longer have to cut the cable
>>>>>
>>>>> https://www.amazon.com/YAOSHENG-Rectangular-Adapter-Connect-Injector/dp/B0BYJTHX4P
>>>>>
>>>> I'll see whether I can get hold of one of these. Cutting a cable on a university IT asset as an academic is not allowed here, except if it doesn't meet electrical safety standards.
>>>>
>>>> Alternatively, has anyone tried the standard Starlink Ethernet adapter with a PoE injector instead of the WiFi box? The adapter above seems to be like the Starlink one (which also inserts into the cable between Dishy and router).
>>> That connects you to a 2nd ethernet port on the router, not on the dishy.
>>> I just ordered one of those adapters; it will take a few weeks to arrive.
>> How do we know that the Amazon version doesn't do the same?
> 
> because it doesn't involve the router at all. It allows you to replace the router with anything you want.
> 
> People have documented how to cut the cable and crimp on an RJ45 connector, use a standard PoE injector, and connect to any router you want. I was preparing to do that (and probably still will for one cable, to use at different locations and avoid having a 75 ft cable from the dish mounted on the roof of my van to the router a couple feet away). This appears to let me do the same functional thing, but without cutting the cable.
> 
>>>>>> I suspect they're handing over whole cells, not individual users, at a time.
>>>>>
>>>>> I would guess the same (remember, in spite of them having launched >4000 satellites, this is still the early days, with the network changing as more launch)
>>>>>
>>>>> We've seen that it seems that there is only one satellite serving any cell at one time.
>>>>
>>>> But the reverse is almost certainly not true: Each satellite must serve multiple cells.
>>> True, but while the satellite over a given area will change, the usage in that area isn't changing that much.
> 
>> Exactly. But your underlying queue sits on the satellite, not in the area.
> 
> only if the satellite is where you have more input than output. That may be the case for users uploading, but for users downloading, I would expect that the bandwidth bottleneck would be from the Internet connected ground station to the satellite, with the satellite serving many cells but only having one uplink.
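
	[SM] To make the "where does the queue live" point concrete, here is a toy sketch in Python with purely invented numbers; the backlog accumulates at whichever hop has more offered input than output capacity:

# Toy illustration (all numbers invented): a queue builds wherever the
# aggregate input rate exceeds the output rate. If one gateway uplink
# feeds a satellite serving the downlink of several cells, the downlink
# backlog sits at that uplink, not in the individual cells.
cells = 8                          # hypothetical cells served by one satellite
demand_per_cell_mbps = 300         # hypothetical per-cell downlink demand
gateway_uplink_mbps = 2000         # hypothetical gateway->satellite capacity

offered = cells * demand_per_cell_mbps            # 2400 Mb/s offered load
growth = max(0, offered - gateway_uplink_mbps)    # backlog grows at 400 Mb/s
print(f"offered {offered} Mb/s vs capacity {gateway_uplink_mbps} Mb/s: "
      f"queue grows at {growth} Mb/s")
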
> 
>>>>> But remember that the system does know how much usage there is in the cell before they do the handoff. It's unknown if they do anything with that, or if they are just relaying based on geography. We also don't know what the bandwidth to the ground stations is compared to the dishy.
>>>> Well, we do know for NZ, sort of, based on the licences Starlink has here.
>>> What is the ground station bandwidth?
>> 
>> https://rrf.rsm.govt.nz/ui/search/licence - search for "Starlink"
>> 
>> ...all NZ licences in all their glory. Looking at Starlink SES (satellite earth station) TX (which is the interesting direction I guess):
>> 
>> - Awarua, Puwera, Hinds, Clevedon, Cromwell, Te Hana: 29750.000000 TX (BW = 500 MHz)
>> - Awarua, Puwera, Hinds, Clevedon, Cromwell, Te Hana: 28850.000000 TX (BW = 500 MHz)
>> - Awarua, Puwera, Hinds, Clevedon, Cromwell, Te Hana: 28350.000000 TX (BW = 500 MHz)
>> - Awarua, Puwera, Hinds, Clevedon, Cromwell, Te Hana: 28250.000000 TX (BW = 500 MHz)
>> - Awarua, Puwera, Hinds, Clevedon, Cromwell, Te Hana: 27750.000000 TX (BW = 500 MHz)
>> 
>> So 2.5 GHz up, licensed from 6 ground stations. Now I'm not convinced that they would use all of those from all locations simultaneously because of the risk of off-beam interference. They'll all be transmitting south, ballpark. If there was full re-use at all ground stations, we'd be looking at 15 GHz. If they are able to re-use on all antennas at each ground station, then we're looking at 9 golf balls each in Puwera, Te Hana, Clevedon, Hinds and Cromwell, and an unknown number at Awarua. Assuming 9 there, we'd be looking at 135 GHz all up max.
>> 
>> Awarua and Cromwell are 175 km apart, Hinds another 220 km from Cromwell, then it's a hop of about 830 km to Clevedon, and from there another 100 km to Te Hana, which is another 53 km from Puwera, so keeping them all out of each other's hair all the time might be a bit difficult.
>> 
>> Lots of other interesting info in the licenses, such as EIRP, in case you're wanting to do link budgets.
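
	[SM] For the record, the licence arithmetic above in a few lines of Python (same assumptions as stated: five 500 MHz TX licences per site, six sites, full re-use, and 9 antennas per site including Awarua):

per_site_hz = 5 * 500e6              # five 500 MHz licences -> 2.5 GHz per site
all_sites_hz = per_site_hz * 6       # full re-use across six sites -> 15 GHz
all_antennas_hz = all_sites_hz * 9   # re-use on all 9 antennas/site -> 135 GHz
print(per_site_hz / 1e9, all_sites_hz / 1e9, all_antennas_hz / 1e9)  # 2.5 15.0 135.0
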
> 
> I was asking more in terms of Gb/s rather than MHz of bandwidth. Dedicated ground stations with bigger antennas, better filters, more processing and overall a much higher budget can get much better data rates out of a given amount of bandwidth than the user end stations will.
> 
> it's also possible (especially with bigger antennas) for one ground station location to talk to multiple different satellites at once (the aiming of the antennas can isolate the signals from each other)
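
	[SM] Indeed, MHz only set an upper bound via the achievable SNR. A quick Shannon-limit sketch (SNR values purely illustrative, not measurements) of how differently the same 500 MHz channel can perform:

import math

# Shannon ceiling C = B * log2(1 + SNR): the same licensed channel carries
# very different Gb/s depending on the SNR the station can sustain.
B_hz = 500e6
for snr_db in (5, 15, 25):                    # illustrative link qualities
    snr = 10 ** (snr_db / 10)                 # dB -> linear
    capacity_gbps = B_hz * math.log2(1 + snr) / 1e9
    print(f"SNR {snr_db:2d} dB -> ceiling ~{capacity_gbps:.1f} Gb/s")
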
> 
>>> As latency changes, figuring out whether it's extra distance that must be traveled or buffering is hard. Does the latency stay roughly the same until the next satellite change, or does it taper off?
> 
>> Good question. You would expect step changes in physical latency between satellites, but also gradual change related to satellite movement. Plus of course any rubble thrown into any queue by something suddenly turning up on that path. Don't forget that it's not just cells now, we're also talking up- and downlink for the laser ISLs, at least in some places.
> 
> How far do the satellites move in 15 min, and what effect would that have on latency? (I would assume that most of the time the satellites are switched to as they are getting nearer the two stations, so most of the time I would expect a slight reduction in latency for ~7 min and then a slight increase for ~7 min, but I would not expect this to be a large variation.)
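
	[SM] The geometry supports that expectation. A quick sketch (assuming the ~550 km shell and pure propagation, no queuing) of how little the one-way path changes across a pass:

import math

# One-way slant range from a ground terminal to a satellite at altitude H,
# as a function of elevation angle (spherical-earth law of cosines).
R_E, H, C = 6371e3, 550e3, 3e8   # earth radius, assumed shell altitude, m/s

def slant_range_m(elev_deg):
    e = math.radians(elev_deg)
    return math.sqrt((R_E + H)**2 - (R_E * math.cos(e))**2) - R_E * math.sin(e)

for elev in (25, 50, 90):        # low-elevation edge of a pass vs overhead
    d = slant_range_m(elev)
    print(f"elev {elev:2d} deg: range {d/1e3:6.0f} km, one-way {d/C*1e3:.2f} ms")
# roughly 3.7 ms at 25 deg vs 1.8 ms overhead: only ~2 ms one-way variation
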
> 
>>> If it stays the same, I would suspect that you are actually hitting a different ground station and there is a VPN backhaul to your egress point to the regular Internet (which doesn't support mobile IP addresses) for that cycle. If it tapers off, then I could buy bufferbloat that gets resolved as TCP backs off.
>> 
>> Yes, quite. Sorting out which part of your latency is what is the million-dollar question here...
>> 
>> We saw significant RTT changes here during the recent cyclone over periods of several hours, and these came in steps (see below), with the initial change being a downward one. Averages here are over 60 pings (the time scale isn't 100% true as we used "one ping, one second" timing).
>>
>> [graph: averaged RTT over time, showing the step changes discussed below; inline image not preserved in the text archive]
>>
>> We're still not sure whether to attribute this to load changes or ground station changes. There were a lot of power outages, especially in Auckland's lifestyle block belt, which teems with Starlink users, but all three North Island ground stations were also in areas affected by power outages (although the power companies concerned don't provide the level of detail needed to establish whether the ground stations were affected, and it's also not clear what, if any, backup power arrangements they have). At ~25 ms, though, the step changes in RTT are too large to be the result of a switch in ground stations; the path differences just aren't that large. You'd also expect a ground station outage to result in longer RTTs, not shorter ones, if you need to re-route via another ground station. One explanation might be users getting cut off if they relied on one particular ground station for bent-pipe ops - but that would not explain an effect of this order of magnitude, as I'd expect that number to be small. So maybe power outages at the user end after all. But that would then tell us that these are load-dependent queuing delays. Moreover, since those load changes wouldn't have involved the router at our site, we can conclude that these are queue sojourn times in the Starlink network.
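
	[SM] A quick sanity check on the "paths aren't that large" point: interpreted as pure propagation, a 25 ms RTT step would require roughly 25 ms * c of extra round-trip path, far more than any re-route between NZ ground stations could add.

C = 3e8                                   # m/s, free-space propagation
extra_rtt_path_km = 25e-3 * C / 1e3       # ~7500 km of extra round-trip path
print(f"a 25 ms RTT step as pure propagation = ~{extra_rtt_path_km:.0f} km extra path")
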
> 
> I have two Starlink dishes in the Southern California area; I'm going to put one on the low-priority mobile plan shortly. These are primarily used for backup communication, so I would be happy to add something to them to do latency monitoring.


	[SM] I would consider using irtt for that (e.g. running it for 15 minutes with, say, 5 ms spacing a few times a day, sometimes together with a saturating load and sometimes without). This is a case where OWDs are especially interesting, and irtt will also report the direction in which packets were lost. Maybe Dave (once back from his time off) has an idea about which irtt reflector to use?
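
	[SM] A minimal, cron-able sketch of that in Python (the reflector hostname is a placeholder, and the flags should be checked against `irtt client -h` for your irtt version):

import datetime
import subprocess

def run_irtt(server="reflector.example.net"):
    # 15 minutes at 5 ms spacing, JSON results (incl. OWDs and the
    # direction of losses) written to a per-run file.
    stamp = datetime.datetime.now().strftime("%Y%m%dT%H%M%S")
    subprocess.run(
        ["irtt", "client",
         "-i", "5ms",                  # packet spacing
         "-d", "15m",                  # run length
         "-o", f"irtt-{stamp}.json",   # JSON output file
         server],
        check=True)

if __name__ == "__main__":
    run_irtt()

Run this a few times a day, sometimes with a parallel saturating load and sometimes without, and the OWD difference between loaded and unloaded runs should stand out.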


> In looking at what geo-location reports my location as, I see it wander up and down the west coast, from the Los Angeles area all the way up to Canada.

	[SM] Demonstrating once more that geoIP is just a heuristic ;)


> 
>>> I think that active queue management on the sending side of the bottleneck will handle it fairly well. It doesn't have to do calculations based on what the bandwidth is; it just needs to know what it has pending to go out.
> 
>> Understood - but your customer for AQM is the sending TCP client, and there are two questions here: (a) Does your AQM handle rapid load changes and (b) how do your TCP clients actually respond to your AQM's handling?
> 
> AQM allocates the available bandwidth between different connections (usually different users).

	[SM] Not sure AQM is actually defined that stringently; I was under the impression that anything other than FIFO with tail drop would already count as AQM, no?
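
	[SM] In that loose sense even a simple RED-style early-drop policy already qualifies. A toy sketch of the distinction (not a claim about Starlink's actual algorithm):

import random

LIMIT = 100      # queue capacity in packets

def tail_drop_enqueue(queue, pkt):
    if len(queue) < LIMIT:
        queue.append(pkt)            # plain FIFO: accept until completely full

def early_drop_enqueue(queue, pkt, min_th=20):
    if len(queue) >= LIMIT:
        return                       # hard limit still applies
    # AQM flavour: drop probability rises with queue depth, so senders
    # get congestion signals *before* the queue is full (RED-like).
    p = max(0.0, (len(queue) - min_th) / (LIMIT - min_th))
    if random.random() >= p:
        queue.append(pkt)
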

> When it does this indirectly for inbound traffic by delaying ACKs, the results depend on the sender's handling of these indirect signals, which were never intended for this purpose.

	[SM] Hmm, ACKs were intended as a feedback mechanism for the sender to use to assess the in-flight data, so I am not sure that delaying ACKs is something that was never envisaged by TCP's creators.
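
	[SM] Back-of-envelope for why delaying ACKs shapes at all: a window-limited sender runs at roughly cwnd / RTT, so stretching the ACK clock (and thus the effective RTT) throttles it. Numbers below are illustrative only:

cwnd_bytes = 256 * 1024              # hypothetical sender window
for rtt_ms in (40, 60, 80):          # base RTT plus increasing ACK delay
    rate_mbps = cwnd_bytes * 8 / (rtt_ms / 1e3) / 1e6
    print(f"effective RTT {rtt_ms} ms -> ~{rate_mbps:.0f} Mb/s")
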

> 
> But when it does this directly on the sending side, it doesn't matter what the senders want: their data WILL be managed to the priority/bandwidth that the AQM sets, and eventually their feedback is dropped packets, which everyone who is legitimate responds to.

	[SM] Some more quickly than others, though; looking at you, BBR ;)


> But even if they don't respond (say a ping flood or DoS attack), the AQM will limit the damage to that connection, allowing the other connections trying to use that link to continue to function.

	[SM] Would that not require an AQM with, effectively, a multi-queue scheduler? It seems clear that Starlink uses some form of AQM (potentially ARED), but on the scheduler/queue side there seem to be competing claims, ranging from single-queue FIFO (with ARED) to an FQ scheduler. Not having a Starlink link I cannot test any of this, so all I can relay are the competing reports from Starlink users...
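
	[SM] For illustration, the kind of per-flow isolation that argument needs (a toy sketch, not a claim about what dishy or the ground stations actually run): an unresponsive flood can only overflow its own queue, while round-robin service keeps the other flows moving.

from collections import defaultdict, deque

PER_FLOW_LIMIT = 50                  # per-flow queue capacity in packets
queues = defaultdict(deque)          # one queue per flow
active = deque()                     # round-robin order of backlogged flows

def enqueue(flow_id, pkt):
    q = queues[flow_id]
    if len(q) < PER_FLOW_LIMIT:      # a flood fills only its own queue
        if not q:
            active.append(flow_id)   # flow just became backlogged
        q.append(pkt)

def dequeue():
    if not active:
        return None
    flow_id = active.popleft()       # serve flows one packet at a time
    pkt = queues[flow_id].popleft()
    if queues[flow_id]:
        active.append(flow_id)       # still backlogged: back of the rotation
    return flow_id, pkt
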

Regards
	Sebastian


> 
> David Lang
> _______________________________________________
> Starlink mailing list
> Starlink at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink


