[Starlink] Starlink hidden buffers

David Lang david at lang.hm
Sat May 13 19:00:43 EDT 2023


On Sun, 14 May 2023, Ulrich Speidel via Starlink wrote:

> I'd also imagine total user numbers to be lower and the
> bandwidth demand per user to be less (hands up who takes their 50" TV onto 
> trains to watch Netflix in HD?).

Most phones are >HD resolution, and the higher-end ones are >4K. The
network bandwidth doesn't care whether the resulting screen is 5" or
50"; the resolution is all that matters.
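
As a rough illustration (these are ballpark per-stream rates in the
range of published streaming-service recommendations, not measured
numbers), the screen size drops out of the maths entirely:

    # Rough per-stream video bitrates by resolution (ballpark figures,
    # roughly in line with published streaming recommendations).
    BITRATE_MBPS = {"720p": 3, "1080p": 5, "4K": 15}

    # A 5" phone and a 50" TV pulling the same 1080p stream cost the
    # network exactly the same:
    for device in ("5-inch phone", "50-inch TV"):
        print(f'{device} @ 1080p: ~{BITRATE_MBPS["1080p"]} Mb/s')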

> The other is that most places have 3+ networks serving the train line, which 
> brings down user numbers, or you have in-train cells, which communicate with 
> off-train POPs that have no extra users.

But the density of people in a train car is MUCH higher than in an
office, even if the users are split across a couple of networks.
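
To put a rough number on that (every figure below is an assumption,
not a measurement):

    # Back-of-envelope load on one carrier's cell as a full train
    # passes through, using made-up but plausible numbers.
    passengers      = 100 * 8     # 8 cars, 100 passengers each
    carriers        = 3           # networks splitting the users
    streaming_share = 0.20        # fraction streaming at any moment
    rate_mbps       = 5           # ~1080p stream (see sketch above)

    demand = passengers / carriers * streaming_share * rate_mbps
    print(f"~{demand:.0f} Mb/s of video demand per carrier cell")

And that ~267 Mb/s of demand arrives in, and leaves, each cell along
the line as one burst, rather than trickling in user by user.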

David Lang

> But yes, good question IMHO!
>
> Cheers,
>
> Ulrich
>
> On 13/05/2023 11:20 pm, Sebastian Moeller wrote:
>> Hi Ulrich,
>> 
>> This situation is not completely different from say a train full of LTE/5G 
>> users moving through a set of cells with already established 'static' 
>> users, no?
>> 
>> 
>> On 13 May 2023 12:10:17 CEST, Ulrich Speidel via Starlink 
>> <starlink at lists.bufferbloat.net> wrote:
>>
>>     Here's a bit of a question to you all. See what you make of it.
>>     I've been thinking a bit about the latencies we see in the
>>     Starlink network. This is why this list exists (right, Dave?).
>>     So what do we know?
>>
>>     1) We know that RTTs can be in the hundreds of ms even in what
>>        appear to be bent-pipe scenarios where the physical one-way
>>        path should be well under 3000 km, with physical RTT under
>>        20 ms (sanity check below).
>>     2) We know from plenty of traceroutes that these RTTs accrue in
>>        the Starlink network, not between the Starlink handover point
>>        (POP) and the Internet.
>>     3) We know that they aren't an artifact of the Starlink WiFi
>>        router (our traceroutes were done through their Ethernet
>>        adaptor, which bypasses the router), so they must be delays
>>        on the satellites or at the teleports.
>>     4) We know that processing delay isn't a huge factor, because we
>>        also see RTTs well under 30 ms.
>>     5) That leaves queuing delays.
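
A quick sanity check of the "<20 ms" figure in point 1, assuming a
short bent-pipe path; the slant ranges below are invented, but
representative for a ~550 km orbit:

    # Physical RTT for user -> satellite -> teleport and back again.
    C_KM_S = 299_792.458   # speed of light in vacuum, km/s

    slant_up_km   = 800    # assumed user-to-satellite slant range
    slant_down_km = 1000   # assumed satellite-to-teleport slant range

    rtt_ms = 2 * (slant_up_km + slant_down_km) / C_KM_S * 1000
    print(f"physical RTT: {rtt_ms:.1f} ms")   # ~12 ms

Anything much beyond 20 ms on such a path has to come from something
other than propagation.
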
>>     This issue has been known for a while now. Starlink have been
>>     innovating their heart out around pretty much everything here -
>>     and yet, this bufferbloat issue hasn't changed, despite Dave
>>     proposing what appears to be an easy fix compared to a lot of
>>     other things they have done. So what are we possibly missing
>>     here?
>>     Going back to first principles: the purpose of a buffer on a
>>     network device is to act as a shock absorber against sudden
>>     traffic bursts. If I want to size that buffer correctly, I need
>>     to know at the very least (paraphrasing queueing theory here)
>>     something about my packet arrival process.
>>
>>     If I look at conventional routers, that arrival process involves
>>     traffic generated by a user population that changes relatively
>>     slowly: WiFi users come and go, one at a time. Computers in a
>>     company get turned on and off and rebooted, but there are no
>>     instantaneous jumps in load - you don't suddenly have a hundred
>>     users in the middle of watching Netflix turning up that weren't
>>     there a second ago. Most of what we know about Internet traffic
>>     behaviour is based on this sort of network, and this is what
>>     we've designed our queuing systems around, right?
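
For reference, the classic sizing rules start from exactly that
assumption of a known, slowly-changing arrival process. A minimal
sketch (the link rate, RTT and flow count are invented, not Starlink's
actual figures):

    # Rule-of-thumb buffer sizes: bandwidth-delay product (BDP) for a
    # single flow, and the Appenzeller et al. BDP/sqrt(n) refinement
    # for n desynchronised long-lived flows.
    from math import sqrt

    link_rate_bps = 1e9      # assumed capacity of one downlink beam
    base_rtt_s    = 0.020    # ~20 ms physical RTT (bent pipe)
    n_flows       = 500      # assumed concurrent long-lived TCP flows

    bdp_bits = link_rate_bps * base_rtt_s
    print(f"1-flow BDP buffer:  {bdp_bits / 8 / 1e6:.2f} MB")
    print(f"BDP/sqrt(n) buffer: {bdp_bits / sqrt(n_flows) / 8 / 1e6:.2f} MB")

    # A full buffer drains in buffer_bits / link_rate_bps seconds, so
    # a buffer sized at the 1-flow BDP adds up to base_rtt_s of delay:
    print(f"max added delay at 1-flow BDP: {bdp_bits / link_rate_bps * 1000:.0f} ms")

Get the assumed arrival process wrong by a couple of orders of
magnitude and the added delay scales with it.
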
>>     Observation: Starlink potentially breaks that paradigm. Why?
>>     Imagine a satellite X handling N users located close together in
>>     a fibre-less rural town, watching a range of movies. Assume that
>>     N is relatively large. Say these users are currently handled
>>     through ground station teleport A some distance away to the west
>>     (bent pipe with switching or basic routing on the satellite). X
>>     is in view of both A and the N users, but with X being a LEO
>>     satellite, that bliss doesn't last. Say X is moving to the
>>     (south- or north-)east and out of A's range. Before the
>>     connection is lost, the N users migrate simultaneously to a new
>>     satellite Y that has moved into view of both A and themselves. Y
>>     is doing so from the west and is also catering to whatever users
>>     it can see there; let's suppose it has been using A for a while
>>     already. The point is that the user load on X and Y from users
>>     other than our N friends could be quite different. E.g., one of
>>     them could be over the ocean with few users, the other over
>>     countryside with a lot of customers.
>>
>>     The TCP stacks of our N friends are (hopefully) somewhat adapted
>>     to the congestion situation on X, with their cwnds open to
>>     reasonable sizes, but they are now thrown onto a completely
>>     different congestion scenario on Y. Similarly, say that Y had
>>     fewer than N users before the handover. For the existing users
>>     on Y, there is now a huge surge of competing traffic that wasn't
>>     there a second ago - surging far faster than we would expect in
>>     a conventional network, because there is no slow start involved
>>     (a toy illustration follows after the quote). This seems to
>>     explain the huge jumps you see on Starlink in TCP goodput over
>>     time.
>>
>>     But could this be throwing a few spanners into the works in
>>     terms of queuing? Does it invalidate what we know about queues
>>     and queue management? Would surges like these justify larger
>>     buffers?
>> 
>> -- 
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
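
To see why simultaneous migration is different in kind from ordinary
user churn, here's a toy queue (every number here is invented): N
flows land on satellite Y at t=0 with their cwnds already open,
against a fixed-rate drain.

    # Toy discrete-time queue on the new satellite's downlink.
    LINK_RATE = 1000   # packets drained per tick
    TICKS     = 10

    def surge(n_new_flows, pkts_per_flow):
        """Queue depth over time when n_new_flows arrive at once at
        t=0, each dumping a full cwnd with no slow start to pace it."""
        queue, depths = 0, []
        for t in range(TICKS):
            if t == 0:
                queue += n_new_flows * pkts_per_flow
            queue = max(0, queue - LINK_RATE)   # drain at link rate
            depths.append(queue)
        return depths

    print(surge(n_new_flows=500, pkts_per_flow=20))  # long-lived spike
    print(surge(n_new_flows=10,  pkts_per_flow=20))  # absorbed at once

With 500 migrated flows the queue takes ten ticks to drain - pure
queuing delay for everyone already on Y - while the 10-flow "normal
churn" case never queues at all. Whether the right answer to that is a
bigger buffer or a queue manager that keeps the standing queue short
is exactly Ulrich's question.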