Date: Thu, 26 Feb 2026 11:54:54 +0000
From: "Mark Handley" <mark@handley.org.uk>
To: "David Lang", "Nick Matthews"
Cc: "Daniel AJ Sokolov", "Dave Taht via Starlink"
In-Reply-To: <69q3s580-8560-q213-2n61-o36qns8o6q4o@ynat.uz>
References: <4f2228ec-042a-48ed-8946-a6c37f10ca94@rogers.com> <2o0r5r7s-0750-o2p1-7738-n4n88q9093qs@ynat.uz> <69q3s580-8560-q213-2n61-o36qns8o6q4o@ynat.uz>
Subject: [Starlink] Re: Data centers are racing to space — and regulation can't keep up
List-Id: "Starlink has bufferbloat. Bad."

AI datacenters effectively split into training and inference, and it's worth optimizing for one or the other. For training, you want as much compute in one low-latency cluster as possible. A single GB200 rack (72 GPUs) is currently around 160 kW, and a leading-edge training cluster is now well north of 100,000 GPUs. A single VR200 rack will be ~250 kW later this year. For inference, you typically need between a few and a few hundred GPUs (or equivalents - TPUs, Cerebras, etc.) interconnected.

It's easy to see inference clusters, especially wafer-scale ones like Cerebras, being capable of being put in orbit. But the problem there is that low(ish) latency to customers is also a requirement, so that constrains the orbits you could use. And it's actually not hard to get terrestrial power if you can scatter large numbers of smaller inference clusters worldwide, which is what we do.

It's really hard to see training clusters in orbit, not only for cooling reasons, but also because they have very high failure rates and require a lot of human maintenance. In our current supercomputers we're looking at more than a million optical links in one building, so there is a continuous rate of link failure and replacement. We are continuously replacing switches and GPU nodes. Now, you can design for resilience, and we are already running a new network design that does this. But living on the leading edge of what's possible in compute and having low failure rates tend to be mutually incompatible.
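[Editor's note: the rack and cluster figures quoted above imply some striking power totals. A quick back-of-envelope check, using only the numbers in this message; the derived totals are illustrative, not authoritative:]

```python
# Back-of-envelope scale of a leading-edge training cluster,
# using the figures quoted in this message.

GPUS_PER_RACK = 72        # GB200 rack, per the message
KW_PER_GB200_RACK = 160   # ~160 kW per GB200 rack
KW_PER_VR200_RACK = 250   # ~250 kW per VR200 rack
CLUSTER_GPUS = 100_000    # "well north of 100,000 GPUs"

racks = -(-CLUSTER_GPUS // GPUS_PER_RACK)    # ceiling division
gb200_mw = racks * KW_PER_GB200_RACK / 1000  # total draw in MW
vr200_mw = racks * KW_PER_VR200_RACK / 1000

print(f"racks needed: {racks}")
print(f"GB200-generation cluster power: ~{gb200_mw:.0f} MW")
print(f"VR200-generation cluster power: ~{vr200_mw:.0f} MW")
```

By this estimate a single leading-edge training cluster draws a few hundred megawatts, which puts the ~100 kW satellites mentioned later in the thread in perspective: one such satellite carries less than a single rack's worth of power budget.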
We are working with all our suppliers to reduce failure rates in the next-but-one generation by design - we'd love that, because a non-trivial cause of failure is a technician fixing one fault and causing another. So there's a lot of hope to improve things, but there's nothing coming down the pipeline that would allow large training clusters to have leading-edge performance and simultaneously run unmanned.

Mark (currently doing supercomputer networks at OpenAI)

On Thu, 26 Feb 2026, at 4:39 AM, David Lang via Starlink wrote:
> a couple of comments in response (specifically applying to SpaceX):
>
> they are working to reduce launch costs by between 10x and 100x
>
> they are not just looking at sun-synchronous orbit, but also at launching from the moon into a moon-sized earth orbit or solar orbit
>
> re: chip vulnerability to radiation
>
> chips have gotten MUCH smaller over the years, which in part makes it more likely for a cell that's hit to flip, but also means that (for a given capability) a chip is much smaller, so is far less likely to be hit.
>
> probability-based systems don't require every calculation to be perfect
>
> "AI" systems need to be validated anyway (since their behavior can't be predicted), so if there are too many errors, they will just fail validation
>
> with enough processing capacity, you can re-run the calculations and compare results.
>
> One thing I haven't seen people talk about is that space-based systems are NOT going to be massive, coherent clusters the way current AI training clusters are. They will be many smaller clusters with relatively low-bandwidth/high-latency communications between them (you can't send data faster than the speed of light). The first posts about space datacenters were dense, massive things (comparable in size to ground-based systems) with solar panels and radiators measured in square miles.
> Elon and SpaceX are talking about many small satellites in the 100 kW range, similar in size to the Starlink satellites that Starship can deploy.
>
> I fully expect that new training algorithms will be found that will drastically improve efficiency, but I also expect that when they are found, those companies with lots of hardware and expertise in running it will be able to make better use of the new algorithms, if only to train more models doing different things at the same time. It still favors those companies that get ahead (and don't collapse in the process).
>
> every bubble over-builds infrastructure, as a lot of people who lose their shirts jump on board the new fad without being able to evaluate the companies. But those companies that fail generally get bought out by others, cheap, and the infrastructure that was built gets used by someone else with a more realistic business model. It may take years (see the massive overbuilding of fiber in some areas), but it will eventually be used.
>
> I think there is disagreement on whether AI is going to 'hockey stick' or not, but even if it doesn't, there are a lot of good uses for the pattern-matching capability (just not at today's prices)
>
> David Lang
>
> On Wed, 25 Feb 2026, Nick Matthews wrote:
>
>> Date: Wed, 25 Feb 2026 19:38:52 -0700
>> From: Nick Matthews
>> To: David Lang
>> Cc: Daniel AJ Sokolov, Dave Taht via Starlink
>> Subject: Re: [Starlink] Re: Data centers are racing to space — and regulation can't keep up
>>
>> The underlying theory here is that if someone builds a model that can improve itself faster than humans, they win. Military, economy, future problems, etc. That could have a lot of real on-Earth impact. There are investments and races going on that support that theory.
>>
>> If the major limiting factor is how big and fast you can build power plants on earth, and you assume the person with the most access to power wins, it starts to make more sense.
>>
>> However, there's also a giant list of technical assumptions that need to be true for those assumptions to fly (get it?). And those technical assumptions don't necessarily need to be true in order to cash the checks from people that either want to compete in that race or invest in someone who is.
>>
>> Some of the assumptions I've come to include:
>> * Adding more power and data to models eventually gets you to the intelligence needed to hockey stick. (Versus solving this problem with a different approach, algorithms, or different kinds of data.)
>> * The models, data, and underlying algorithms aren't easily replicated by others once they start exponentially increasing in ability. E.g. can someone like Deepseek just take the outputs of the first mover and replicate similar value without requiring the same power capacity? This would slow down the first mover's velocity benefits.
>> * AI eventually starts creating returns.
>> * Launch costs go down significantly (10x?)
>> * There's enough room in sun-synchronous orbit to run at power scales not possible on earth without kicking off Kessler syndrome.
>> * A combination of very large solar panels, radiative cooling, fluid exchange between them, the computing, propulsion, and any necessary redundancy of these components is still economical.
>> * Operational loss due to radiation, micrometeoroids, and general component failure is tolerable.
>> * Components like GPUs and RAM and underlying bus structures can be built to be more radiation tolerant.
>> * Burning up new orders of magnitude of amounts of elements in the atmosphere can be managed (aluminum, silicon, etc.)
>> * Or, there is some amount of in-orbit recycling and manufacturing without returning material back to Earth.
>> * Bandwidth can be built for 1) intra-cluster within a satellite, 2) cross-cluster via OISLs (optical inter-satellite links), and 3) back to Earth using RF or lasers.
>> * Regulatory bodies agree with the risk versus reward and approve this kind of plan.
>> * The smarter-than-human AI doesn't decide to destroy the human race in a move of self-preservation because the AI companies didn't have time for boundaries.
>>
>> I think it's a neat thought experiment, even if it's a little terrifying in scale and impact if it's remotely possible.
>>
>> -nick
>>
>> On Wed, Feb 25, 2026, 6:33 PM David Lang via Starlink <starlink@lists.bufferbloat.net> wrote:
>>
>>> Daniel AJ Sokolov wrote:
>>>
>>>> Block spots in orbit
>>>
>>> at the scale that he operates, everyone else combined is in the noise. Starlink is already several times the number of all other satellites in orbit combined.
>>>
>>> besides, in the long run, he's talking about launching from the moon into solar orbit, not earth orbit, but even if he was just talking about launching into earth orbit near the moon's orbit, it's not like there are very many satellites there to contend with.
>>>
>>>> From a technology point of view, this is bonkers.
>>>
>>> if you only look at technical details, you may be right, but if you add the regulatory burden and delays in building traditional datacenters, that may be enough to change the math.
>>>
>>> Now, if we could ease the regulations so that it's easier to build power plants and hook up to the grid (or get small next-gen nuclear power plants operational so they can be dropped at the datacenters), that could change the math back.
>>>
>>> David Lang
>>> _______________________________________________
>>> Starlink mailing list -- starlink@lists.bufferbloat.net
>>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>>>
>>
> _______________________________________________
> Starlink mailing list -- starlink@lists.bufferbloat.net
> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
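[Editor's note: the speed-of-light constraint David Lang raises sets a hard physical floor on round-trip latency for the orbits discussed in this thread. A sketch; the altitudes are illustrative round numbers, and real paths through relays are longer than the straight-line nadir distance:]

```python
# Minimum round-trip light delay to satellites at the distances
# discussed in the thread (vacuum, straight-line; illustrative).

C_KM_S = 299_792.458  # speed of light in km/s

def rtt_ms(one_way_km: float) -> float:
    """Round-trip time in milliseconds for a one-way distance in km."""
    return 2 * one_way_km / C_KM_S * 1000

leo = rtt_ms(550)       # Starlink-like LEO altitude, ~550 km
sso = rtt_ms(800)       # sun-synchronous orbit, roughly 600-800 km
moon = rtt_ms(384_400)  # mean Earth-moon distance

print(f"LEO (550 km):        ~{leo:.1f} ms RTT")
print(f"SSO (800 km):        ~{sso:.1f} ms RTT")
print(f"lunar-distance orbit: ~{moon:.0f} ms RTT")
```

The numbers show why low-altitude orbits are plausible for latency-sensitive inference while lunar-distance or solar orbits are not: the floor jumps from single-digit milliseconds to multiple seconds, before any queuing or routing overhead.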