From: Sebastian Moeller
Date: Wed, 31 Aug 2022 09:49:50 +0200
To: Ulrich Speidel
Cc: Ulrich Speidel via Starlink
Message-Id: <23E930C0-23A5-4ACC-BAB7-D057CD2D8572@gmx.de>
In-Reply-To: <2321be3b-957f-2d1f-c335-119c8e76efe5@auckland.ac.nz>
Subject: Re: [Starlink] Starlink "beam spread"
List-Id: "Starlink has bufferbloat. Bad."

Hi Ulrich,

> On Aug 31, 2022, at 09:25, Ulrich Speidel wrote:
>
> On 31/08/2022 6:26 pm, Sebastian Moeller wrote:
>> Hi Ulrich,
>>
>> On 31 August 2022 00:50:35 CEST, Ulrich Speidel via Starlink wrote:
>> > There's another aspect here that is often overlooked when looking purely at the data rate that you can get from your fibre/cable/wifi/satellite, and this is where the data comes from.
>> >
>> > A large percentage of Internet content these days comes from content delivery networks (CDNs).
>> > These innately work on the assumption that it's the core of the Internet that presents a bottleneck, and that the aggregate bandwidth of all last-mile connections is high in comparison. A second assumption is that a large share of the content that gets requested gets requested many times, and many times by users in the same corner(s) of the Internet. The conclusion is that content is therefore best served from a location close to the end user, so as to keep RTTs low and - importantly - keep the load off long-distance bottleneck links.
>> >
>> > Now it's fairly clear that large numbers of fibres to end users make for the best kind of network between CDN and end user. Local WiFi hotspots with limited range allow frequency re-use, as do ground-based cellular networks, so they're OK, too, in that respect. But anything that needs to project RF energy over a longer distance to get directly to the end user hasn't got nature on its side.
>> >
>> > This is, IMHO, Starlink's biggest design flaw at the moment: going direct to the end user site rather than providing a bridge to a local ISP may circumvent the lack of last-mile infrastructure in the US, but it also makes incredibly inefficient use of spectrum and satellite resources. If every viral cat video that a thousand Starlink users in Iowa are just dying to see literally has to go to space a thousand times and back again rather than once, you arguably have a problem.
>>
>> Why? Internet access service is predominantly a service to transport any packets the users send and request when they do so. Caching content closer to the users or multicast tricks are basically optimizations that (occasionally) help decrease costs/increase margins for the operator, but IMHO they are exactly that: optimizations. So if they cannot be used, no harm is done. Since caching is not perfect, such optimisations really are no way to safely increase the oversubscription rate either.
>> Mind you, I am not saying such measures are useless, but in IAS I consider them to be optional. Ideas about caching in space seem a bit pie-in-the-sky (pun intended), since at ~4 ms delay this would only help if operating CDNs in space were cheaper than on earth at the base station, or if the ground-to-satellite capacity was smaller than the aggregate satellite-to-end-user capacity (both per satellite), no?
>
> Now, assuming for a moment that your typical Starlink user isn't so different from your average Internet user anywhere else in that they like to watch Netflix, YouTube, TikTok etc., then having a simple "transport layer and below" view of a system that's providing connectivity simply isn't enough.

Why? As I said, CDNs and Co. are (mostly economic) optimizations; internet access service (IAS) really is a dumb-pipe service, as little as ISPs enjoy that...

> The problem is that - Zoom meetings aside - the vast majority of data that enters an end user device these days comes from a CDN server somewhere.

Again, CDNs exploit the fact that there is a considerable overlap in the type of content users access, so that average usage patterns become predictable enough that caching becomes a viable strategy. But we only get these caches because one or more parties actually profit from doing so; ISPs might sell colocation in AS-internal data centers for $money$, while content providers might save on their total transport costs by reducing the total bit-miles. But both are inherently driven by the desire to increase revenue/surplus.

> It's quietly gotten so pervasive that if a major CDN provider (or cloud service provider, or however they like to refer to themselves these days) has an outage, the media will report - incorrectly of course - that "the Internet is down". So it's not just something that's optional anymore, and hasn't been for a while. It's an integral part of the landscape.
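To put a rough number on the ~4 ms caching-in-space point above, a back-of-the-envelope propagation calculation (a sketch; the ~550 km shell altitude and satellite-overhead geometry are my assumptions, not from the thread):

```python
# Back-of-the-envelope: how much round-trip time an on-board cache could
# save over a bent-pipe path. Assumptions (mine, not from the thread):
# ~550 km orbital altitude, satellite roughly overhead, signals at the
# vacuum speed of light -- so these are best-case numbers.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(slant_range_km: float) -> float:
    """Propagation delay of a single ground<->satellite hop."""
    return slant_range_km / C_KM_PER_S * 1000.0

hop = one_way_delay_ms(550.0)   # user <-> satellite, satellite overhead
saved = 2 * hop                 # feeder up+down legs an on-board cache avoids

print(f"single hop:              {hop:.2f} ms")
print(f"saved by on-board cache: {saved:.2f} ms")
```

So a cache on the bird shaves only a few milliseconds off the path; the economics, not latency, would have to justify it.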
> Access strata folk please take note!

Well, that just means that the caching layer is too optimistic and has pretty abysmal failure points; on average, CDNs probably are economically attractive enough that customers (the content providers who pay the CDNs) just accept/tolerate the occasional outage (it is not as if big content providers do not occasionally screw up on their side as well).

> This isn't a (huge) problem on classical mobile networks with base stations, because of the amount of frequency division multiplexing you can do with a combination of high cell density and the ensuing ability to communicate with lower power, which enables spatial separation and hence frequency reuse. Add beam forming and a few other nice techniques, and you're fine. Same with WiFi, essentially. So as data emerges from a CDN server (remember, most of this is on-demand unicast and not broadcasting), it'll initially go into relatively local fibre backbones (no bottleneck) and then either onto a fibre to the home, a DSL line, a WiFi system, or a terrestrial mobile 4G/5G/6G network, and none of these present a bottleneck at any one point.
>
> This is different with satellites, including LEO and Starlink. If your CDN or origin server sits at the remote end of the satellite link as seen from the end users, then every copy of your cat video (again, assuming on-demand here) must transit the link each time it's requested, unless there's a cache on the local end that multiple users get their copies from. There is just no way around this. As such, the comparison of Starlink to GSM/LTE/5G base stations just doesn't work here.

+1: I fully agree, but for such a cache to be worthwhile, a single Starlink link would need to supply enough users that their aggregate consumption becomes predictable enough to make caching effective, no?

> So throw in the "edge" as in "edge" computing.
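The "enough users to make caching effective" point can be illustrated with a toy simulation (the assumptions are mine, not from the thread: a 10,000-item catalogue with Zipf-distributed popularity, and an infinite cache that fills on first miss):

```python
# Toy model of why a cache behind one link needs many users: a request
# can only be a cache hit if someone behind the same link asked for that
# item before. Few users means mostly first-time misses; many users
# means the popular head of the catalogue repeats often enough to cache.
# (Assumptions mine: 10,000 items, Zipf exponent 1.0, infinite cache.)
import random

def zipf_weights(n_items: int, s: float = 1.0) -> list[float]:
    weights = [1.0 / (rank ** s) for rank in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def cold_cache_hit_rate(n_requests: int, n_items: int = 10_000,
                        seed: int = 42) -> float:
    rng = random.Random(seed)
    requests = rng.choices(range(n_items), weights=zipf_weights(n_items),
                           k=n_requests)
    # Every first occurrence is a miss; every repeat is a hit.
    return 1.0 - len(set(requests)) / n_requests

few = cold_cache_hit_rate(10 * 20)      # 10 users, ~20 requests each
many = cold_cache_hit_rate(1_000 * 20)  # 1000 users, ~20 requests each

print(f"hit rate,   10 users: {few:.0%}")
print(f"hit rate, 1000 users: {many:.0%}")
```

With a small population most requests are one-off misses, so the bits transit the satellite link anyway; only with a large population behind the link does the aggregate become predictable enough for the cache to pay off.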
> In a direct-to-site satellite network, the edgiest bit of the edge is the satellite itself. If we put a CDN server (a cache, if you wish) there, then yes, we have saved ourselves the repeated use of the link on the uplink side. But we still have the downlink to contend with, where the video will have to be transmitted for each user who wants it. This combines with the uncomfortable truth that an RF "beam" from a satellite isn't as selective as a laser beam, so the options for frequency re-use from orbit aren't anywhere near as good as from a mobile base station across the road: any beam pointed at you can be heard for many miles around, and therefore no other user can re-use that frequency (with the same burst slot etc.).

Yes, I tried to imply that: putting servers in space does not solve the load on the satellite.

> So by putting a cache on the satellite, you've reduced the need for multiple redundant transmissions overall by almost half, but this doesn't help much because you really need to cut that need by orders of magnitude.

Worse, if aggregate CPE downlink = base station uplink, then all we have now is some power saving, as the base station might not need to send (much).

> Moreover, there's another problem: power. Running CDN servers is a power-hungry business, as anyone running cloud data centres at scale will be happy to attest (in Singapore, the drain on the power network from data centres got so bad that they banned new ones for a while). Unfortunately, power is the one thing a LEO satellite that's built to achieve minimum weight is going to have least of.
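The "almost half" above can be made concrete with simple bookkeeping (a sketch of the argument, not a radio model; the schemes and user count are illustrative):

```python
# Transmissions of one video for n interested users behind one satellite.
# Bent-pipe unicast: every request crosses the feeder uplink AND the user
# downlink. An on-board cache removes the repeated uplink but not the
# per-user downlink -- roughly halving the total, as noted above.
# Multicast on the downlink would help, but needs synchronized viewers.
def transmissions(n_users: int) -> dict:
    """Map scheme -> (feeder uplink count, user downlink count)."""
    return {
        "bent-pipe unicast":      (n_users, n_users),
        "on-board cache":         (1, n_users),
        "cache + multicast down": (1, 1),
    }

for scheme, (up, down) in transmissions(1000).items():
    print(f"{scheme:24s} up {up:4d}  down {down:4d}  total {up + down}")
```

For 1000 viewers this gives totals of 2000, 1001, and 2 transmissions respectively: the on-board cache halves the count, but only synchronized multicast cuts it by orders of magnitude.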
There is another issue, I believe: cooling. Vacuum is a hell of an insulator, so heat probably needs to be shed as IR light...

> ICN essentially suffers from the same problem when it comes to Starlink - if the information (cat video) you want is on the bird and it's unicast to you over an encrypted connection, then the video still needs to come down 1000 times if 1000 users want to watch it.

+1: I agree with that assessment. What could work (pie-in-the-sky) is if the base station controlled the CDN nodes: it could try to slot requests and see whether concurrent transfers of the same content could not be synchronized, with the unicast silently converted into multicast between base station and dishy. But I have no intuition whether that kind of synchronicity is realistic for anything but a few events like a soccer world cup final, a cricket test match, or something like the superb owl finals series... (maybe such events are massive enough that such an exercise might still be worthwhile, I do not know).

Regards
	Sebastian

>
> --
>
> ****************************************************************
> Dr. Ulrich Speidel
>
> School of Computer Science
>
> Room 303S.594 (City Campus)
>
> The University of Auckland
> u.speidel@auckland.ac.nz
> http://www.cs.auckland.ac.nz/~ulrich/
> ****************************************************************