From: David Lang <david@lang.hm>
To: Mark Handley <mark@handley.org.uk>
Cc: David Lang <david@lang.hm>, Nick Matthews <matthnick@gmail.com>,
Daniel AJ Sokolov <daniel@falco.ca>,
Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
Date: Thu, 26 Feb 2026 11:01:37 -0700 (MST) [thread overview]
Message-ID: <799n7004-2onr-63rr-1141-r3p5p1s93qn1@ynat.uz> (raw)
In-Reply-To: <a3c98afb-088e-45bc-9c8d-61cd2a3459de@app.fastmail.com>
Yep, that's what I was alluding to in my earlier post.
However, Elon Musk also has experience in building large, coherent training
clusters, plus the Dojo project, and as he has been talking about future
training clusters, he has been saying things that are against conventional
wisdom.
like using the AI5 processors for training as well as inference
like splitting things up across many satellites rather than being a single
coherent cluster (each satellite being a fraction of a rack-equivalent)
I'm not a fanboy who says he's always right (Dojo showed that if nothing else)
but betting against him getting something done (just not on his stated
timetable) has historically not been great odds. :-)
David Lang
Mark Handley wrote:
> AI datacenters effectively split into training and inference, and it's worth
> optimizing for one or the other. For training, you want as much compute in
> one low latency cluster as possible. A single GB200 rack (72 GPUs) is
> currently around 160kW, and a leading edge training cluster is now well north
> of 100,000 GPUs. A single VR200 rack will be ~250kW later this year. For
> inference, you typically neeed between a few and a few hundred GPUs (or
> equivalents - TPUs, Celebras, etc) interconnected.
>
> It's easy to see inference clusters, especially wafer-scale like Celebras,
> being capable of being put in orbit. But the problem there is low(ish)
> latency to customers is also a requirement, so that constrains the orbits you
> could use. And it's actually not hard to get terrestrial power if you can
> scatter large numbers of smaller inference clusters worldwide, which is what
> we do.
>
> It's really hard to see training clusters in orbit, not only for cooling
> reasons, but also because they have very high failure rates and require a lot
> of human maintenance. In our current supercomputers we're looking at more
> than a million optical links in one building, so there is a continuous rate of
> link failure and replacement. We are continuously replacing switches and GPU
> nodes. Now you can design for resilience, and we are already running a new
> network design that does this. But living on the leading edge of what's
> possible in compute and having low failure rates tend to be mutually
> incompatible. We are working with all our suppliers to reduce failure rates
> in the next-but-one generation by design - we'd love that because a
> non-trivial cause of failure is a technician fixing one fault and causing
> another. So there's a lot of hope to improve things, but there's nothing
> coming down the pipeline that would allow large training clusters to have
> leading edge performance and simultaneously run unmanned.
>
> Mark (currently doing supercomputer networks at OpenAI)
>
> On Thu, 26 Feb 2026, at 4:39 AM, David Lang via Starlink wrote:
>> a couple comments in response (specifically applying to SpaceX
>>
>> they are working to reduce launch costs between 10x and 100x
>>
>> they are not just looking at sun synchronous orbit, but also at launching from
>> the moon into a moon-size earth orbit or solar orbit
>>
>> re: chip vulnerability to radiation
>>
>> chips have gotten MUCH smaller over the years, which in part makes it more
>> likely for a cell that's hit to flip, but also means that (for a given
>> capability) it's much smaller, so is far less likely to be hit.
>>
>> probability based systems don't require every calculation to be perfect
>>
>> "AI" systems need to be validated anyway (since their behavior can't be
>> predicted), so if there are too many errors, it just will fail validation
>>
>> with enough processing capacity, you can re-run the calculations and compare
>> resuts.
>>
>>
>> One thing I haven't seen people talk about is that space-based systems
>> are NOT
>> going to be massive, coherent clusters the way current AI training
>> clusters are.
>> They will be many smaller clusters with relatively low bandwidth/high
>> latency
>> communications between them (you can't send data faster than the speed
>> of
>> light). The first posts about space datacenters were dense, massive
>> things
>> (comparable in size to ground based systems) with solar panels and
>> radiators
>> measured in square miles. Elon and SpaceX are talking about many small
>> satellites in the 100Kw range, similar in size to the starlink
>> satellites that
>> Starship can deploy.
>>
>>
>>
>> I fully expect that new training algorithms will be found that will
>> drastically
>> improve the efficiency, but I also expect that when they are found,
>> those
>> companies with lots of hardware and expertise in running it will be
>> able to make
>> better use of the new algorithms, if only to train more models doing
>> different
>> things at the same time. It still favors those companies that get ahead
>> (and
>> don't collapse in the process)
>>
>> every bubble over-builds infrastructure, as a lot of people who lose
>> their
>> shirts jump on board the new fad without being able to evaluate the
>> companies.
>> But those companies that fail generally get bought out by others,
>> cheap, and the
>> infrastructure that is built gets used by someone else with a more
>> realistic
>> business model. It may take years (see the massive overbuilding of
>> fiber in some
>> areas), but it will eventually be used.
>>
>> I think there is disagreement on if AI is going to 'hockey stick' or not, but
>> even if it doesn't, thee are a lot of good uses for the pattern matching
>> capability (just not at today's prices)
>>
>> David Lang
>>
>>
>> On Wed, 25 Feb 2026, Nick Matthews wrote:
>>
>>> Date: Wed, 25 Feb 2026 19:38:52 -0700
>>> From: Nick Matthews <matthnick@gmail.com>
>>> To: David Lang <david@lang.hm>
>>> Cc: Daniel AJ Sokolov <daniel@falco.ca>,
>>> Dave Taht via Starlink <starlink@lists.bufferbloat.net>
>>> Subject: Re: [Starlink] Re: Data centers are racing to space — and regulation
>>> can’t keep up
>>>
>>> The underlying theory here is if someone builds a model that can improve
>>> itself faster than humans, they win. Military, economy, future problems,
>>> etc. That could have a lot of real on-Earth impact. There's investments and
>>> races going on that support that theory.
>>>
>>> If the major limiting factor is how big and fast you can build power plants
>>> on earth, and assume the person with the most access to power wins, it
>>> starts to make more sense.
>>>
>>> However, there's also a giant list of technical assumptions that need to be
>>> true for those assumptions to fly (get it?). And those technical
>>> assumptions don't necessarily need to be true in order to cash the checks
>>> from people that either want to compete in that race or invest in someone
>>> that is.
>>>
>>> Some of the assumptions I've come to include:
>>> * Adding more power and data to models eventually gets you to the
>>> intelligence needed to hockey stick. (Versus solving this problem with a
>>> different approach, algorithms, or different kinds of data.)
>>> * The models, data, and underlying algorithms aren't easily replicated by
>>> others once they start exponentially increasing in ability. E.g. can
>>> someone like Deepseek just take the outputs of the first mover, and then
>>> not require the same power capacity and replicate a similar value. This
>>> would slow down the first mover velocity benefits.
>>> * AI eventually starts creating returns.
>>> * Launch costs go down significantly (x10?)
>>> * There's enough room in sun synchronous orbit to run at power scales not
>>> possible on earth without kicking off Kessler
>>> * A combination of very large solar panels, radiative cooling, fluid
>>> exchange between them, the computing, propulsion, and any necessary
>>> redundancy of these components is still economical.
>>> * Operational loss due to radiation, micro asteroids, and general
>>> component failure is tolerable.
>>> * Components like GPUs and RAM and underlying bus structures can be built
>>> to be more radiation tolerant.
>>> * Burning up new orders of magnitude of amounts of elements in the
>>> atmosphere can be managed (aluminum, silicon, etc.)
>>> * Or, there is some amount of in-orbit recycling and manufacturing without
>>> returning material back to Earth.
>>> * Bandwidth can be built for 1) intra cluster within a satellite, 2) cross
>>> cluster via OISl, 3) Back to Earth using RF or lasers.
>>> * Regulatory bodies agree with the risk versus reward and approve this
>>> kind of plan.
>>> * The smarter-than-human AI doesn't decide to destroy the human race in a
>>> move of self preservation because the AI companies didn't have time for
>>> boundaries.
>>>
>>> I think it's a neat thought experiment, even if it's a little terrifying in
>>> scale and impact if it's remotely possible.
>>>
>>> -nick
>>>
>>> On Wed, Feb 25, 2026, 6:33 PM David Lang via Starlink <
>>> starlink@lists.bufferbloat.net> wrote:
>>>
>>>> Daniel AJ Sokolov wrote:
>>>>
>>>>> Block spots in orbit
>>>>
>>>> at the scale that he operates, everyone else combined in in the noise.
>>>> Starlink
>>>> is already several times the number of other satellites in orbit combined.
>>>>
>>>> besides, in the long run, he's talking about launching from the moon into
>>>> solar
>>>> orbit, not earth orbit, but even if he was just talking about launching
>>>> into
>>>> earth orbit near the moon's orbit, it's not like there are very many
>>>> satellites
>>>> there to contend with.
>>>>
>>>>> From a technology point of view, this is bonkers.
>>>>
>>>> if you only look at technical details, you may be right, but if you add
>>>> the
>>>> regulatory burden and delays in building traditional datacenters, that may
>>>> be
>>>> enough to change the math.
>>>>
>>>> Now, if we could ease the regulations so that it's easier to build power
>>>> plants
>>>> and hook up to the grid (or get small next-gen nuclear power plants
>>>> operational
>>>> so they can be dropped at the datacenters), that could change the math
>>>> back.
>>>>
>>>> David Lang
>>>> _______________________________________________
>>>> Starlink mailing list -- starlink@lists.bufferbloat.net
>>>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>>>>
>>>
>> _______________________________________________
>> Starlink mailing list -- starlink@lists.bufferbloat.net
>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>
next prev parent reply other threads:[~2026-02-26 18:01 UTC|newest]
Thread overview: 29+ messages / expand[flat|nested] mbox.gz Atom feed top
2026-02-25 14:05 [Starlink] Data centers are racing to space — and regulation can’t keep up Hesham ElBakoury
2026-02-25 14:30 ` [Starlink] " David Collier-Brown
2026-02-25 14:32 ` [Starlink] Re: Data centers are racing to space ??? and regulation can???t " Gert Doering
2026-02-25 14:42 ` [Starlink] Re: Data centers are racing to space — and regulation can’t " Hesham ElBakoury
2026-02-26 4:28 ` J Pan
[not found] ` <CAFvDQ9p68AFJ5cQTpyx=HkA2Cf6r1m6F3ssaJh-OJK4kqK=PDQ@mail.gmail.com>
2026-02-26 5:54 ` J Pan
2026-02-26 6:01 ` Hesham ElBakoury
2026-02-25 14:50 ` Daniel AJ Sokolov
2026-02-26 1:33 ` David Lang
2026-02-26 2:38 ` Nick Matthews
2026-02-26 4:39 ` David Lang
2026-02-26 11:54 ` Mark Handley
2026-02-26 13:36 ` Vint Cerf
2026-02-26 13:56 ` Nitinder Mohan
2026-02-26 21:36 ` Ulrich Speidel
2026-02-26 23:02 ` Brandon Butterworth
2026-02-26 23:16 ` Nitinder Mohan
2026-02-26 23:44 ` Ulrich Speidel
2026-02-27 1:01 ` Joe Hamelin
2026-02-27 1:47 ` David Lang
2026-02-27 14:26 ` [Starlink] Why Data Centers In Space Won't Work [Yet] (A non-canonical list) Sascha Meinrath
2026-02-27 15:07 ` [Starlink] " David Lang
2026-02-27 15:15 ` Daniel AJ Sokolov
2026-02-27 15:22 ` Gert Doering
2026-02-26 14:14 ` [Starlink] Re: Data centers are racing to space — and regulation can’t keep up Mark Handley
2026-02-26 18:01 ` David Lang [this message]
2026-02-25 20:26 ` Brandon Butterworth
2026-02-26 1:28 ` David Lang
2026-02-26 4:49 ` David Lang
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
List information: https://lists.bufferbloat.net/postorius/lists/starlink.lists.bufferbloat.net/
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=799n7004-2onr-63rr-1141-r3p5p1s93qn1@ynat.uz \
--to=david@lang.hm \
--cc=daniel@falco.ca \
--cc=mark@handley.org.uk \
--cc=matthnick@gmail.com \
--cc=starlink@lists.bufferbloat.net \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox