[Starlink] Starlink "beam spread"

Hayden Simon h at uber.nz
Wed Aug 31 03:31:13 EDT 2022


Throwing it out here that I love this group. I *really* enjoy that you all understand the challenges, limitations and potential points for optimisation so well.

It’s a significant shift from my daily grind. 


HAYDEN SIMON
UBER GROUP LIMITED
MANAGING DIRECTOR - SUPREME OVERLORD
E: h at uber.nz
M: 021 0707 014
W: www.uber.nz
53 PORT ROAD | PO BOX 5083 | WHANGAREI | NEW ZEALAND
-----Original Message-----
From: Starlink <starlink-bounces at lists.bufferbloat.net> On Behalf Of Ulrich Speidel via Starlink
Sent: Wednesday, 31 August 2022 7:25 pm
To: Sebastian Moeller <moeller0 at gmx.de>; Ulrich Speidel via Starlink <starlink at lists.bufferbloat.net>
Subject: Re: [Starlink] Starlink "beam spread"

On 31/08/2022 6:26 pm, Sebastian Moeller wrote:
> Hi Ulrich,
>
> On 31 August 2022 00:50:35 CEST, Ulrich Speidel via Starlink 
> <starlink at lists.bufferbloat.net> wrote:
> >There's another aspect here that is often overlooked when looking
> purely at the data rate that you can get from your 
> fibre/cable/wifi/satellite, and this is where the data comes from.
> >
> >A large percentage of Internet content these days comes from content
> delivery networks (CDNs). These innately work on the assumption that 
> it's the core of the Internet that presents a bottleneck, and that the 
> aggregate bandwidth of all last mile connections is high in 
> comparison. A second assumption is that a large share of the content 
> that gets requested gets requested many times, and many times by users 
> in the same corner(s) of the Internet. The conclusion is that 
> therefore content is best served from a location close to the end 
> user, so as to keep RTTs low and - importantly - keep the load off 
> long-distance bottleneck links.
> >
> >Now it's fairly clear that large numbers of fibres to end users make
> for the best kind of network between CDN and end user. Local WiFi 
> hotspots with limited range allow frequency re-use, as do ground based 
> cellular networks, so they're OK, too, in that respect.  But anything 
> that needs to project RF energy over a longer distance to get directly 
> to the end user hasn't got nature on its side.
> >
> >This is, IMHO, Starlink's biggest design flaw at the moment: Going
> direct to the end user site rather than providing a bridge to a local ISP may 
> be circumventing the lack of last mile infrastructure in the US, but 
> it also makes incredibly inefficient use of spectrum and satellite 
> resource. If every viral cat video that a thousand Starlink users in 
> Iowa are just dying to see literally has to go to space a thousand 
> times and back again rather than once, you arguably have a problem.
>
> Why? Internet access service is predominantly a service to transport 
> any packets the users send and request when they do so. Caching 
> content closer to the users or multicast tricks are basically 
> optimizations that (occasionally) help decrease costs/increase margins 
> for the operator, but IMHO they are exactly that, optimizations. So if 
> they can not be used, no harm is done. Since caching is not perfect, 
> such optimisations really are no way to safely increase the 
> oversubscription rate either. Mind you, I am not saying such measures 
> are useless, but in IAS I consider them to be optional. Ideas about 
> caching in space seem a bit pie-in-the-sky (pun intended) since at 4ms 
> delay this would only help if operating CDNs in space would be cheaper 
> than on earth at the base station or if the ground to satellite 
> capacity was smaller than the aggregate satellite to end user capacity 
> (both per satellite), no?

Now, assuming for a moment that your typical Starlink user isn't so different from your average Internet user anywhere else - in that they like to watch Netflix, YouTube, TikTok etc. - then a simple "transport layer and below" view of a system that provides connectivity just isn't enough.

The problem is that - Zoom meetings aside - the vast majority of data that enters an end user device these days comes from a CDN server somewhere. CDNs have quietly become so pervasive that if a major CDN provider (or cloud service provider, or however they like to refer to themselves these days) has an outage, the media will report - incorrectly, of course - that "the Internet is down". So it's not just something that's optional anymore, and hasn't been for a while. It's an integral part of the landscape. Access strata folk, please take note!

This isn't a (huge) problem on classical mobile networks with base stations, because of the amount of frequency reuse you can achieve: high cell density lets you communicate at lower power, which enables spatial separation and hence lets the same frequencies be reused in nearby cells. Add beam forming and a few other nice techniques, and you're fine. Same with WiFi, essentially. So as data emerges from a CDN server (remember, most of this is on-demand unicast, not broadcasting), it initially goes into relatively local fibre backbones (no bottleneck) and then onto either a fibre to the home, a DSL line, a WiFi system, or a terrestrial 4G/5G/6G mobile network, and none of these presents a bottleneck at any one point.

This is different with satellites, including LEO constellations like Starlink. If your CDN or origin server sits at the remote end of the satellite link as seen from the end users, then every copy of your cat video (again, assuming on-demand delivery here) must transit the link each time it's requested, unless there's a cache on the local end that multiple users get their copies from. There is no way around this, so the comparison of Starlink to GSM/LTE/5G base stations simply doesn't hold here.
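To put a number on that, here is a toy back-of-the-envelope sketch (the viewer count is illustrative, not a measured Starlink figure) of how many times the video has to cross the satellite link with and without a cache on the local end:

```python
# Toy model: satellite-link transits needed to serve one popular video.
# All numbers are illustrative, not measured Starlink figures.

def transits_no_cache(requests: int) -> int:
    """Every request fetches the video across the satellite link."""
    return requests

def transits_with_local_cache(requests: int) -> int:
    """One fetch fills a cache on the user side of the link; later
    requests are served locally (assumes an ideal, never-evicting cache)."""
    return 1 if requests > 0 else 0

viewers = 1000  # e.g. a thousand Starlink users in Iowa
print(transits_no_cache(viewers))          # 1000 transits
print(transits_with_local_cache(viewers))  # 1 transit
```

The point is the asymmetry: without a cache on the local end, the transit count grows linearly with the audience; with one, it stays constant.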

So throw in the "edge" as in "edge computing". In a direct-to-site satellite network, the edgiest bit of the edge is the satellite itself. 
If we put a CDN server (a cache, if you wish) there, then yes, we have saved ourselves the repeated use of the link on the uplink side. But we still have the downlink to contend with, where the video has to be transmitted once for each user who wants it. This combines with the uncomfortable truth that an RF "beam" from a satellite isn't as selective as a laser beam, so the options for frequency re-use from orbit aren't anywhere near as good as from a mobile base station across the road: any beam pointed at you can be heard for many miles around, and therefore no other user can re-use that frequency (with the same burst slot etc.).
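Counting individual RF transmissions shows why an onboard cache at best roughly halves the load. Again a hedged sketch with illustrative numbers: the cache eliminates repeated uplink fetches, but every viewer still needs their own unicast downlink copy:

```python
# Toy model: RF transmissions over the satellite's radio links, with and
# without a cache on board the satellite. Illustrative numbers only.

def transmissions_no_cache(viewers: int) -> int:
    # Each request goes up to the satellite once and comes down once.
    return 2 * viewers

def transmissions_onboard_cache(viewers: int) -> int:
    # One uplink fetch fills the onboard cache, then one downlink per viewer.
    return 1 + viewers

v = 1000
print(transmissions_no_cache(v))       # 2000
print(transmissions_onboard_cache(v))  # 1001 - only about half saved
```

However big the audience, the saving converges to a factor of two, because the downlink term dominates and is untouched by the cache.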

So by putting a cache on the satellite, you've reduced the overall need for redundant transmissions by almost half, but that doesn't help much, because you really need to cut that need by orders of magnitude. Moreover, there's another problem: power. Running CDN servers is a power-hungry business, as anyone running cloud data centres at scale will be happy to attest (in Singapore, the drain on the power grid from data centres got so bad that new ones were banned for a while). Unfortunately, power is the one thing a LEO satellite built for minimum weight is going to have least of.

ICN essentially suffers from the same problem when it comes to Starlink - if the information (cat video) you want is on the bird and it's unicast to you over an encrypted connection, then the video still needs to come down 1000 times if 1000 users want to watch it.

-- 

****************************************************************
Dr. Ulrich Speidel

School of Computer Science

Room 303S.594 (City Campus)

The University of Auckland
u.speidel at auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/
****************************************************************



_______________________________________________
Starlink mailing list
Starlink at lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink

