From: Nitinder Mohan <N.Mohan@tudelft.nl>
To: Vint Cerf <vint@google.com>, Mark Handley <mark@handley.org.uk>
Cc: David Lang <david@lang.hm>, Nick Matthews <matthnick@gmail.com>,
Daniel AJ Sokolov <daniel@falco.ca>,
Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
Date: Thu, 26 Feb 2026 13:56:24 +0000
Message-ID: <VI1PR09MB2621888AE575127753C4707B8D72A@VI1PR09MB2621.eurprd09.prod.outlook.com>
In-Reply-To: <CAHxHggfztVwmokqY0NifCuEm5R3K+k09_nOEH-gfOT=_yYWFnA@mail.gmail.com>
This has been a great discussion so far! Thanks all!
We recently held a Dagstuhl seminar (26062, "Connected Space")<https://www.dagstuhl.de/en/seminars/seminar-calendar/seminar-details/26062> bringing together researchers from academia, industry, and space agencies to work through exactly these questions. We are still preparing the report, and I will summarize the key findings in a TheNetworkingChannel panel<https://networkingchannel.eu/connected-space-challenges-and-opportunities-in-satellite-computing-and-networking/> in a few weeks (register to catch it). In the meantime, let me share a few insights from our discussions.
1. The downlink bottleneck, not replacing ground data centers, is the real motivation for space computing. Satellites with high-fidelity sensors collect on the order of terabytes per orbit but can transmit only tens of gigabytes per ground pass. For deep-space missions the situation is far worse, with Mars orbiters returning roughly 1 percent of captured data. This disparity makes onboard computation an engineering necessity for filtering and prioritizing what gets sent down. In LEO the benefits are more limited, but we converged on the view that AI-in-space can be worthwhile for space-generated data (see points 2 and 5).
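To make the disparity concrete, here is a toy back-of-envelope sketch. The specific figures (2 TB collected per orbit, 20 GB downlinked per ground pass) are illustrative assumptions, not seminar numbers:

```python
# Back-of-envelope downlink-deficit sketch.
# All figures are illustrative assumptions, not measured values.
TB = 1e12  # bytes
GB = 1e9

data_per_orbit = 2 * TB      # assumed sensor output per orbit
downlink_per_pass = 20 * GB  # assumed volume per ground-station pass

fraction_sent = downlink_per_pass / data_per_orbit
print(f"fraction downlinked: {fraction_sent:.1%}")  # -> 1.0%
```

Under these assumptions, 99% of the collected data never leaves the satellite, which is exactly why onboard filtering becomes mandatory.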
2. Orbital data centers serving Earth-based users face fundamental physics constraints, exactly as this thread has identified. Heat dissipation is the hardest problem. Satellites can only radiate heat, and available radiator surface area is strictly limited. Unlike ground facilities with active cooling, there is a hard thermodynamic ceiling on how much computation any individual satellite can sustain. The seminar reached strong consensus that the "data center in space" concept for general Earth-centric workloads is not validated, and the sustainability math does not currently work out. Published analysis presented at the seminar showed that the CO2 footprint of launching computing hardware to orbit via current-generation rockets far exceeds that of operating equivalent terrestrial facilities.
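The radiator ceiling follows directly from the Stefan-Boltzmann law. A minimal, idealized sketch (one-sided radiation to deep space, ignoring absorbed solar and Earth flux, so real designs need considerably more area; the 300 K temperature and 0.9 emissivity are assumptions):

```python
# Idealized radiator sizing via the Stefan-Boltzmann law:
# P = emissivity * sigma * A * T^4  =>  A = P / (emissivity * sigma * T^4).
# Ignores absorbed solar/Earth flux; a lower bound on real radiator area.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w, temp_k=300.0, emissivity=0.9):
    """Minimum area needed to reject `power_w` at radiator temp `temp_k`."""
    return power_w / (emissivity * SIGMA * temp_k**4)

print(f"{radiator_area_m2(100e3):.0f} m^2 to reject 100 kW")
```

Even this optimistic estimate puts a 100 kW satellite at a few hundred square meters of radiator, which illustrates why terrestrial-scale megawatt clusters do not translate to orbit.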
3. The inference vs. training distinction raised in this thread maps precisely to what we found. Training clusters are essentially ruled out for space. They require massive low-latency interconnects, continuous human maintenance for hardware failures, and power densities incompatible with orbital constraints. Inference is more plausible in principle given smaller cluster sizes, but the latency and handover problems raised in this thread are real and unsolved. As objects in orbit disappear over the horizon, maintaining session continuity with an inference engine requires either store-and-forward (introducing variable, potentially large latency) or instantiating equivalent state on the next satellite coming into view, which is an open distributed systems problem with no proven solution at scale.
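The handover problem has a hard geometric clock on it. A rough upper bound on how long a single LEO satellite stays visible (circular orbit, observer at the sub-satellite point, 0-degree elevation mask; real passes are shorter):

```python
# Rough LEO visibility-window sketch. Assumes a circular orbit and a
# best-case overhead pass; actual contact windows are shorter.
import math

R_EARTH = 6371.0  # km
MU = 3.986e5      # km^3/s^2, Earth's gravitational parameter

def max_pass_seconds(alt_km):
    a = R_EARTH + alt_km
    period = 2 * math.pi * math.sqrt(a**3 / MU)      # orbital period
    half_angle = math.acos(R_EARTH / a)              # horizon half-angle
    return period * (2 * half_angle) / (2 * math.pi) # visible-arc fraction

print(f"max pass at 550 km: {max_pass_seconds(550) / 60:.1f} min")
```

At Starlink-like altitudes this comes out around twelve minutes at best, so any stateful inference session must migrate (or be buffered) on that cadence, every orbit.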
4. The appropriate model is distributed, not centralized. Rather than attempting to replicate terrestrial-scale data centers in orbit, the seminar converged on distributed computing across constellations as the viable paradigm: many smaller satellites, each performing focused preprocessing, filtering, and classification, then coordinating results. This distributes thermal loads and power requirements while matching the physical reality of orbital mechanics.
5. Lightweight, purpose-built AI is what works in space, not LLMs. The seminar found clear consensus that large language models and heavy transformer architectures are inappropriate for orbital deployment given power, thermal, and radiation constraints. What does work are custom convolutional neural networks optimized for specific tasks (cloud detection, anomaly identification, object tracking) that can run within tight power and time budgets. One can also use LEO- or MEO-based AI data centers to process remote-sensing data, because hyperspectral images are very large and the satellites that capture them have limited contact windows and transfer times with Earth. Of course, this raises the question of inter-constellation connectivity, which is itself an interesting research direction.
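"Tight power and time budgets" can be made concrete with a toy throughput check: given an orbit, a duty cycle, and an energy cost per inference, how many scene tiles can a small onboard CNN classify per orbit? Every number below is invented for illustration:

```python
# Toy onboard-inference budget: how many hyperspectral tiles can a
# small CNN classify per orbit? All figures are invented assumptions.
def tiles_per_orbit(orbit_s=5700, duty_cycle=0.3,
                    joules_per_tile=0.5, avg_power_w=10.0):
    compute_s = orbit_s * duty_cycle    # seconds allotted to inference
    energy_j = compute_s * avg_power_w  # energy available for compute
    return int(energy_j / joules_per_tile)

print(tiles_per_orbit())  # -> 34200
```

The point of such a calculation is that a task-specific CNN with sub-joule inference cost fits the budget easily, while LLM-scale inference (kilojoules per query) would not.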
6. The COTS hardware shift is real but comes with tradeoffs. The space industry is moving away from radiation-hardened legacy processors toward commercial off-the-shelf components with appropriate fault tolerance and shielding. This dramatically improves available compute performance. But as noted in this thread, radiation effects on modern small-geometry chips are a genuine concern, and the approach works only when you have enough redundancy across a constellation to tolerate individual failures.
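The canonical fault-tolerance pattern for COTS parts is redundancy with majority voting. A minimal triple-modular-redundancy (TMR) sketch — illustrative only; real systems also scrub memory and power-cycle replicas that disagree:

```python
# Minimal triple-modular-redundancy (TMR) voter: run the same
# computation on three COTS units and take the majority result,
# masking a single radiation-induced upset. Illustrative sketch.
from collections import Counter

def tmr_vote(results):
    """Return the majority value among three replica outputs."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: all three replicas disagree")
    return value

# One replica suffers a bit flip; the vote masks it:
print(tmr_vote([42, 42, 43]))  # -> 42
```

This is the per-node version of the constellation-level argument above: tolerate individual failures cheaply instead of hardening every component.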
7. The points about regulation and launch costs cutting both ways are well taken. The seminar also spent significant time on policy and sustainability. Concerns were raised about the prospect of massive constellations deployed for AI purposes, including debris risks, atmospheric effects from re-entry, and whether the space industry is repeating historical patterns of overbuilding driven by competition rather than validated demand. The sustainability question remains genuinely open and needs rigorous full-lifecycle accounting that does not yet exist.
For anyone interested, I wrote a short summary of the seminar findings here: https://spearlab.nl/news/2026-02-10-dagstuhl-connected-space-seminar
Thanks and Regards,
Nitinder Mohan
Assistant Professor
Head of SPEAR Lab, Networked Systems Group
TU Delft, Netherlands
Personal website: https://www.nitindermohan.com/
Lab website: https://spearlab.nl/
From: Vint Cerf via Starlink <starlink@lists.bufferbloat.net>
Date: Thursday, 26 February 2026 at 14:37
To: Mark Handley <mark@handley.org.uk>
Cc: David Lang <david@lang.hm>, Nick Matthews <matthnick@gmail.com>, Daniel AJ Sokolov <daniel@falco.ca>, Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
maybe this has already been addressed but things in orbit disappear over
the horizon. If you rely on store/forward relay to keep in touch with an
inferencing engine while in orbit, you will experience variable latency. I
don't see how you could easily instantiate the same inferencing on the next
data center to come into view. Seems to me this is a very different
computing and communication environment than ground-based data centers on
the terrestrial Internet.
What am I missing?
v
On Thu, Feb 26, 2026 at 6:55 AM Mark Handley via Starlink <
starlink@lists.bufferbloat.net> wrote:
> AI datacenters effectively split into training and inference, and it's
> worth optimizing for one or the other. For training, you want as much
> compute in one low latency cluster as possible. A single GB200 rack (72
> GPUs) is currently around 160kW, and a leading edge training cluster is now
> well north of 100,000 GPUs. A single VR200 rack will be ~250kW later this
> year. For inference, you typically need between a few and a few hundred
> GPUs (or equivalents - TPUs, Cerebras, etc) interconnected.
>
> It's easy to see inference clusters, especially wafer-scale like Cerebras,
> being capable of being put in orbit. But the problem there is low(ish)
> latency to customers is also a requirement, so that constrains the orbits
> you could use. And it's actually not hard to get terrestrial power if you
> can scatter large numbers of smaller inference clusters worldwide, which is
> what we do.
>
> It's really hard to see training clusters in orbit, not only for cooling
> reasons, but also because they have very high failure rates and require a
> lot of human maintenance. In our current supercomputers we're looking at
> more than a million optical links in one building, so there is a continuous
> rate of link failure and replacement. We are continuously replacing
> switches and GPU nodes. Now you can design for resilience, and we are
> already running a new network design that does this. But living on the
> leading edge of what's possible in compute and having low failure rates
> tend to be mutually incompatible. We are working with all our suppliers to
> reduce failure rates in the next-but-one generation by design - we'd love
> that because a non-trivial cause of failure is a technician fixing one
> fault and causing another. So there's a lot of hope to improve things, but
> there's nothing coming down the pipeline that would allow large training
> clusters to have leading edge performance and simultaneously run unmanned.
>
> Mark (currently doing supercomputer networks at OpenAI)
>
> On Thu, 26 Feb 2026, at 4:39 AM, David Lang via Starlink wrote:
> > a couple comments in response (specifically applying to SpaceX
> >
> > they are working to reduce launch costs between 10x and 100x
> >
> > they are not just looking at sun synchronous orbit, but also at
> launching from
> the moon into a Moon-distance earth orbit or solar orbit
> >
> > re: chip vulnerability to radiation
> >
> > chips have gotten MUCH smaller over the years, which in part makes it
> more
> > likely for a cell that's hit to flip, but also means that (for a given
> > capability) it's much smaller, so is far less likely to be hit.
> >
> > probability based systems don't require every calculation to be perfect
> >
> > "AI" systems need to be validated anyway (since their behavior can't be
> > predicted), so if there are too many errors, it just will fail validation
> >
> > with enough processing capacity, you can re-run the calculations and
> compare
> > results.
> >
> >
> > One thing I haven't seen people talk about is that space-based systems
> > are NOT
> > going to be massive, coherent clusters the way current AI training
> > clusters are.
> > They will be many smaller clusters with relatively low bandwidth/high
> > latency
> > communications between them (you can't send data faster than the speed
> > of
> > light). The first posts about space datacenters were dense, massive
> > things
> > (comparable in size to ground based systems) with solar panels and
> > radiators
> > measured in square miles. Elon and SpaceX are talking about many small
> > satellites in the 100 kW range, similar in size to the Starlink
> > satellites that
> > Starship can deploy.
> >
> >
> >
> > I fully expect that new training algorithms will be found that will
> > drastically
> > improve the efficiency, but I also expect that when they are found,
> > those
> > companies with lots of hardware and expertise in running it will be
> > able to make
> > better use of the new algorithms, if only to train more models doing
> > different
> > things at the same time. It still favors those companies that get ahead
> > (and
> > don't collapse in the process)
> >
> > every bubble over-builds infrastructure, as a lot of people who lose
> > their
> > shirts jump on board the new fad without being able to evaluate the
> > companies.
> > But those companies that fail generally get bought out by others,
> > cheap, and the
> > infrastructure that is built gets used by someone else with a more
> > realistic
> > business model. It may take years (see the massive overbuilding of
> > fiber in some
> > areas), but it will eventually be used.
> >
> > I think there is disagreement on if AI is going to 'hockey stick' or
> not, but
> > even if it doesn't, there are a lot of good uses for the pattern matching
> > capability (just not at today's prices)
> >
> > David Lang
> >
> >
> > On Wed, 25 Feb 2026, Nick Matthews wrote:
> >
> >> Date: Wed, 25 Feb 2026 19:38:52 -0700
> >> From: Nick Matthews <matthnick@gmail.com>
> >> To: David Lang <david@lang.hm>
> >> Cc: Daniel AJ Sokolov <daniel@falco.ca>,
> >> Dave Taht via Starlink <starlink@lists.bufferbloat.net>
> >> Subject: Re: [Starlink] Re: Data centers are racing to space — and
> regulation
> >> can’t keep up
> >>
> >> The underlying theory here is if someone builds a model that can improve
> >> itself faster than humans, they win. Military, economy, future problems,
> >> etc. That could have a lot of real on-Earth impact. There's investments
> and
> >> races going on that support that theory.
> >>
> >> If the major limiting factor is how big and fast you can build power
> plants
> >> on earth, and assume the person with the most access to power wins, it
> >> starts to make more sense.
> >>
> >> However, there's also a giant list of technical assumptions that need
> to be
> >> true for those assumptions to fly (get it?). And those technical
> >> assumptions don't necessarily need to be true in order to cash the
> checks
> >> from people that either want to compete in that race or invest in
> someone
> >> that is.
> >>
> >> Some of the assumptions I've come to include:
> >> * Adding more power and data to models eventually gets you to the
> >> intelligence needed to hockey stick. (Versus solving this problem with a
> >> different approach, algorithms, or different kinds of data.)
> >> * The models, data, and underlying algorithms aren't easily replicated
> by
> >> others once they start exponentially increasing in ability. E.g. can
> >> someone like Deepseek just take the outputs of the first mover, and then
> >> not require the same power capacity and replicate a similar value. This
> >> would slow down the first mover velocity benefits.
> >> * AI eventually starts creating returns.
> >> * Launch costs go down significantly (x10?)
> >> * There's enough room in sun synchronous orbit to run at power scales
> not
> >> possible on earth without kicking off Kessler
> >> * A combination of very large solar panels, radiative cooling, fluid
> >> exchange between them, the computing, propulsion, and any necessary
> >> redundancy of these components is still economical.
> >> * Operational loss due to radiation, micro asteroids, and general
> >> component failure is tolerable.
> >> * Components like GPUs and RAM and underlying bus structures can be
> built
> >> to be more radiation tolerant.
> >> * Burning up new orders of magnitude of amounts of elements in the
> >> atmosphere can be managed (aluminum, silicon, etc.)
> >> * Or, there is some amount of in-orbit recycling and manufacturing
> without
> >> returning material back to Earth.
> >> * Bandwidth can be built for 1) intra cluster within a satellite, 2)
> cross
> cluster via OISL (optical inter-satellite links), 3) back to Earth using RF or lasers.
> >> * Regulatory bodies agree with the risk versus reward and approve this
> >> kind of plan.
> >> * The smarter-than-human AI doesn't decide to destroy the human race in
> a
> >> move of self preservation because the AI companies didn't have time for
> >> boundaries.
> >>
> >> I think it's a neat thought experiment, even if it's a little
> terrifying in
> >> scale and impact if it's remotely possible.
> >>
> >> -nick
> >>
> >> On Wed, Feb 25, 2026, 6:33 PM David Lang via Starlink <
> >> starlink@lists.bufferbloat.net> wrote:
> >>
> >>> Daniel AJ Sokolov wrote:
> >>>
> >>>> Block spots in orbit
> >>>
> >>> at the scale that he operates, everyone else combined is in the noise.
> >>> Starlink
> >>> is already several times the number of other satellites in orbit
> combined.
> >>>
> >>> besides, in the long run, he's talking about launching from the moon
> into
> >>> solar
> >>> orbit, not earth orbit, but even if he was just talking about launching
> >>> into
> >>> earth orbit near the moon's orbit, it's not like there are very many
> >>> satellites
> >>> there to contend with.
> >>>
> >>>> From a technology point of view, this is bonkers.
> >>>
> >>> if you only look at technical details, you may be right, but if you add
> >>> the
> >>> regulatory burden and delays in building traditional datacenters, that
> may
> >>> be
> >>> enough to change the math.
> >>>
> >>> Now, if we could ease the regulations so that it's easier to build
> power
> >>> plants
> >>> and hook up to the grid (or get small next-gen nuclear power plants
> >>> operational
> >>> so they can be dropped at the datacenters), that could change the math
> >>> back.
> >>>
> >>> David Lang
> >>> _______________________________________________
> >>> Starlink mailing list -- starlink@lists.bufferbloat.net
> >>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
> >>>
> >>
--
Please send any postal/overnight deliveries to:
Vint Cerf
Google, LLC
1900 Reston Metro Plaza, 16th Floor
Reston, VA 20190
+1 (571) 213 1346
until further notice