[Starlink] Starlink "beam spread"

Sebastian Moeller moeller0 at gmx.de
Wed Aug 31 03:49:50 EDT 2022


Hi Ulrich,

> On Aug 31, 2022, at 09:25, Ulrich Speidel <u.speidel at auckland.ac.nz> wrote:
> 
> On 31/08/2022 6:26 pm, Sebastian Moeller wrote:
>> Hi Ulrich,
>> 
>> On 31 August 2022 00:50:35 CEST, Ulrich Speidel via Starlink <starlink at lists.bufferbloat.net> wrote:
>> >There's another aspect here that is often overlooked when looking purely at the data rate that you can get from your fibre/cable/wifi/satellite, and this is where the data comes from.
>> >
>> >A large percentage of Internet content these days comes from content delivery networks (CDNs). These innately work on the assumption that it's the core of the Internet that presents a bottleneck, and that the aggregate bandwidth of all last mile connections is high in comparison. A second assumption is that a large share of the content that gets requested gets requested many times, and many times by users in the same corner(s) of the Internet. The conclusion is that content is therefore best served from a location close to the end user, so as to keep RTTs low and - importantly - keep the load off long-distance bottleneck links.
>> >
>> >Now it's fairly clear that large numbers of fibres to end users make for the best kind of network between CDN and end user. Local WiFi hotspots with limited range allow frequency re-use, as do ground based cellular networks, so they're OK, too, in that respect.  But anything that needs to project RF energy over a longer distance to get directly to the end user hasn't got nature on its side.
>> >
>> >This is, IMHO, Starlink's biggest design flaw at the moment: Going direct to the end-user site rather than providing a bridge to a local ISP may be circumventing the lack of last mile infrastructure in the US, but it also makes incredibly inefficient use of spectrum and satellite resources. If every viral cat video that a thousand Starlink users in Iowa are just dying to see literally has to go to space a thousand times and back again rather than once, you arguably have a problem.
>> 
>> Why? Internet access service is predominantly a service to transport any packets the users send and request when they do so. Caching content closer to the users or multicast tricks are basically optimizations that (occasionally) help decrease costs/increase margins for the operator, but IMHO they are exactly that, optimizations. So if they cannot be used, no harm is done. Since caching is not perfect, such optimizations really are no way to safely increase the oversubscription rate either. Mind you, I am not saying such measures are useless, but in IAS I consider them to be optional. Ideas about caching in space seem a bit pie-in-the-sky (pun intended), since at 4 ms delay this would only help if operating CDNs in space were cheaper than on earth at the base station, or if the ground-to-satellite capacity was smaller than the aggregate satellite-to-end-user capacity (both per satellite), no?
> 
> Now, assuming for a moment that your typical Starlink user isn't so different from your average Internet user anywhere else in that they like to watch Netflix, YouTube, TikTok etc., then having a simple "transport layer and below" view of a system that's providing connectivity simply isn't enough.

	Why? As I said, CDNs and Co. are (mostly economic) optimizations; internet access service (IAS) really is a dumb-pipe service, however little ISPs enjoy that...


> The problem is that - Zoom meetings aside - the vast majority of data that enters an end user device these days comes from a CDN server somewhere.

	Again, CDNs exploit the fact that there is considerable overlap in the content users access, so that average usage patterns become predictable enough for caching to be a viable strategy. But we only get these caches because one or more parties actually profit from them; ISPs might sell colocation in AS-internal data centers for $money$, while content providers might save on their total transport costs by reducing the total bit-miles. Both are inherently driven by the desire to increase revenue/surplus.
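
	To put a toy number on that bit-mile argument, here is a quick back-of-envelope sketch; every figure in it is invented for illustration, not a real CDN or ISP number:

# Toy back-of-envelope for the bit-mile savings of a local cache.
# All numbers are invented illustration values, not real CDN figures.

daily_demand_gb = 10_000   # total content pulled by users behind one PoP
cache_hit_rate  = 0.8      # assumed fraction served from the local cache
origin_miles    = 3_000    # assumed distance to the origin server
cache_miles     = 50       # assumed distance from local cache to users

bit_miles_no_cache = daily_demand_gb * origin_miles
bit_miles_cached   = (daily_demand_gb * (1 - cache_hit_rate) * origin_miles
                      + daily_demand_gb * cache_hit_rate * cache_miles)

print(f"without cache: {bit_miles_no_cache:,.0f} GB-miles/day")
print(f"with cache:    {bit_miles_cached:,.0f} GB-miles/day")
# With these assumptions the long-haul transport shrinks by almost 80%,
# which is the margin the CDN, the ISP and the content provider split.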


> It's quietly gotten so pervasive that if a major CDN provider (or cloud service provider or however they like to refer to themselves these days) has an outage, the media will report - incorrectly of course - that "the Internet is down". So it's not just something that's optional anymore, and hasn't been for a while. It's an integral part of the landscape. Access strata folk please take note!

	Well, that just means that the caching layer is too optimistic and has pretty abysmal failure points; on average, CDNs probably are economically attractive enough that customers (the content providers who pay the CDNs) just accept/tolerate the occasional outage (it is not as if big content providers do not occasionally screw up on their side as well).

> This isn't a (huge) problem on classical mobile networks with base stations, because of the amount of frequency division multiplexing you can do: the combination of high cell density and the ensuing ability to communicate with lower power enables spatial separation and hence frequency reuse. Add beam forming and a few other nice techniques, and you're fine. Same with WiFi, essentially. So as data emerges from a CDN server (remember, most of this is on demand unicast and not broadcasting), it'll initially go into relatively local fibre backbones (no bottleneck) and then either onto a fibre to the home, a DSL line, a WiFi system, or a terrestrial mobile 4G/5G/6G network, and none of these presents a bottleneck at any one point.
> 
> This is different with satellites, including LEO and Starlink. If your CDN or origin server sits at the remote end of the satellite link as seen from the end users, then every copy of your cat video (again, assuming on-demand here) must transit the link each time it's requested, unless there's a cache on the local end that multiple users get their copies from. There is just no way around this. As such, the comparison of Starlink to GSM/LTE/5G base stations just doesn't work here.

	+1: I fully agree, but for such a cache to be worthwhile, a single Starlink link would need to supply enough users that their aggregate consumption becomes predictable enough to make caching effective, no?
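
	A minimal simulation sketch of that intuition (Zipf-ish content popularity feeding an LRU cache; catalogue size, Zipf exponent, cache size and request counts are all invented parameters, not measured Starlink data):

import random
from collections import OrderedDict

def lru_hit_rate(n_users, catalogue=100_000, cache_slots=1_000,
                 zipf_s=1.0, reqs_per_user=50, seed=42):
    # Draw a shared request stream with Zipf-like content popularity.
    rng = random.Random(seed)
    weights = [1.0 / (rank + 1) ** zipf_s for rank in range(catalogue)]
    stream = rng.choices(range(catalogue), weights=weights,
                         k=n_users * reqs_per_user)
    cache, hits = OrderedDict(), 0
    for item in stream:
        if item in cache:
            hits += 1
            cache.move_to_end(item)        # refresh recency on a hit
        else:
            cache[item] = True
            if len(cache) > cache_slots:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(stream)

for n in (1, 10, 100, 1_000):
    print(f"{n:5d} users -> LRU hit rate {lru_hit_rate(n):.2f}")
# The hit rate climbs with the number of users sharing the cache:
# demand only becomes predictable in aggregate.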

> 
> So throw in the "edge" as in "edge" computing. In a direct-to-site satellite network, the edgiest bit of the edge is the satellite itself. If we put a CDN server (cache if you wish) there, then yes, we have saved ourselves the repeated use of the link on the uplink side. But we still have the downlink to contend with, where the video will have to be transmitted for each user who wants it. This combines with the uncomfortable truth that an RF "beam" from a satellite isn't as selective as a laser beam, so the options for frequency re-use from orbit aren't anywhere near as good as from a mobile base station across the road: Any beam pointed at you can be heard for many miles around, and therefore no other user can re-use that frequency (with the same burst slot etc.).

	Yes, I tried to imply that: putting servers in space does not solve the load problem on the satellite.
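
	A rough geometry sketch of why reuse from orbit is so much worse than across the road (footprint and cell sizes below are assumptions for illustration only):

import math

beam_footprint_km = 20.0  # assumed diameter one satellite spot beam covers
cell_radius_km    = 1.0   # assumed terrestrial macro-cell radius

beam_area = math.pi * (beam_footprint_km / 2) ** 2
cell_area = math.pi * cell_radius_km ** 2

# Every terrestrial cell inside that footprint could reuse the same
# spectrum; under one satellite beam, a channel is used only once.
print(f"terrestrial reuse opportunities per beam footprint: "
      f"{beam_area / cell_area:.0f}x")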

> So by putting a cache on the satellite, you've reduced the need for multiple redundant transmissions overall by almost half, but this doesn't help much because you really need to cut that need by orders of magnitude.

	Worse, if the aggregate CPE downlink capacity equals the base station uplink capacity, then all we have gained is some power saving, as the base station might not need to send (much).
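
	Spelled out with the thousand-cat-video-viewers example from earlier in the thread (simple counting, no assumptions beyond those numbers):

viewers = 1_000

# Without an on-board cache: every request transits both link halves.
transits_no_cache = viewers + viewers   # uplink + downlink
# With a cache on the satellite: one uplink fill, per-viewer downlink.
transits_cached = 1 + viewers

print(f"without cache: {transits_no_cache} transits")            # 2000
print(f"with cache:    {transits_cached} transits")              # 1001
print(f"saving: {1 - transits_cached / transits_no_cache:.1%}")  # ~50%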


> Moreover, there's another problem: Power. Running CDN servers is a power hungry business, as anyone running cloud data centres at scale will be happy to attest to (in Singapore, the drain on the power network from data centres got so bad that they banned new ones for a while). Unfortunately, power is the one thing a LEO satellite that's built to achieve minimum weight is going to have least of.

	There is another issue, I believe: cooling. Vacuum is a hell of an insulator, so heat probably needs to be shed as IR radiation...
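
	A back-of-envelope with the Stefan-Boltzmann law illustrates the radiator problem (server power and radiator temperature are assumed values):

# In vacuum the only way out for heat is radiation:
# P = epsilon * sigma * A * T^4  (Stefan-Boltzmann law).
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)

server_power_w  = 1_000  # assumed dissipation of a small on-board CDN node
emissivity      = 0.9    # typical for a good radiator coating
radiator_temp_k = 300.0  # assumed radiator temperature

area_m2 = server_power_w / (emissivity * SIGMA * radiator_temp_k ** 4)
print(f"radiator area needed: {area_m2:.1f} m^2")  # ~2.4 m^2
# ...and this ignores absorbed sunlight, which only makes it worse.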


> ICN essentially suffers from the same problem when it comes to Starlink - if the information (cat video) you want is on the bird and it's unicast to you over an encrypted connection, then the video still needs to come down 1000 times if 1000 users want to watch it.

	+1: I agree with that assessment. What could work (pie-in-the-sky) is if the base stations controlled the CDN nodes: they could try to slot requests and see whether concurrent transfers of the same content could be synchronized and the unicast silently converted into multicast between base station and dishy. But I have no intuition whether that kind of synchronicity is realistic for anything but a few events like a soccer world cup final, a cricket test match, or something like the superb owl finals... (maybe such events are massive enough that such an exercise might still be worthwhile, I do not know).
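
	A sketch of how much such slotting could save, assuming requests for one popular item arrive as a Poisson process and the base station may delay delivery by a batching window (the arrival rates and window length are invented numbers):

import math

def multicast_saving(rate_per_s, window_s):
    # Expected requests per batching window.
    n = rate_per_s * window_s
    if n == 0:
        return 0.0
    # One multicast transmission per non-empty window replaces one
    # unicast transmission per request.
    tx_per_window = 1 - math.exp(-n)   # P(window sees >= 1 request)
    return 1 - tx_per_window / n

for rate in (0.01, 0.1, 1.0, 10.0):    # requests/s for one item
    print(f"{rate:5.2f} req/s, 2 s window -> "
          f"saving {multicast_saving(rate, 2.0):.0%}")
# Savings only get large at world-cup-final request rates, which
# matches the intuition above.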

Regards
	Sebastian


> 
> -- 
> 
> ****************************************************************
> Dr. Ulrich Speidel
> 
> School of Computer Science
> 
> Room 303S.594 (City Campus)
> 
> The University of Auckland
> u.speidel at auckland.ac.nz
> http://www.cs.auckland.ac.nz/~ulrich/
> ****************************************************************
> 
> 
> 


