* [Starlink] Data centers are racing to space — and regulation can’t keep up
@ 2026-02-25 14:05 Hesham ElBakoury
2026-02-25 14:30 ` [Starlink] " David Collier-Brown
2026-02-26 4:49 ` David Lang
0 siblings, 2 replies; 29+ messages in thread
From: Hesham ElBakoury @ 2026-02-25 14:05 UTC (permalink / raw)
To: 5grm-satellite, Dave Taht via Starlink
https://restofworld.org/2026/orbital-data-centers-ai-sovereignty/?utm_source=tldrnewsletter
Hesham
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-25 14:05 [Starlink] Data centers are racing to space — and regulation can’t keep up Hesham ElBakoury
@ 2026-02-25 14:30 ` David Collier-Brown
2026-02-25 14:32 ` [Starlink] Re: Data centers are racing to space ??? and regulation can???t " Gert Doering
` (4 more replies)
2026-02-26 4:49 ` David Lang
1 sibling, 5 replies; 29+ messages in thread
From: David Collier-Brown @ 2026-02-25 14:30 UTC (permalink / raw)
To: starlink
I looked at the radiative cooling problem, and immediately thought ...
"these things can't possibly work. What is the hidden agenda here?"
Anyone know of alternative reasons for this project? It's from Mr Musk,
so weird is quite possible.
--dave
On 2/25/26 09:05, Hesham ElBakoury via Starlink wrote:
> https://restofworld.org/2026/orbital-data-centers-ai-sovereignty/?utm_source=tldrnewsletter
>
>
> Hesham
> _______________________________________________
> Starlink mailing list -- starlink@lists.bufferbloat.net
> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
--
David Collier-Brown, | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
davecb@spamcop.net | -- Mark Twain
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space ??? and regulation can???t keep up
2026-02-25 14:30 ` [Starlink] " David Collier-Brown
@ 2026-02-25 14:32 ` Gert Doering
2026-02-25 14:42 ` [Starlink] Re: Data centers are racing to space — and regulation can’t " Hesham ElBakoury
` (3 subsequent siblings)
4 siblings, 0 replies; 29+ messages in thread
From: Gert Doering @ 2026-02-25 14:32 UTC (permalink / raw)
To: David Collier-Brown; +Cc: starlink
Hi,
On Wed, Feb 25, 2026 at 09:30:41AM -0500, David Collier-Brown via Starlink wrote:
> Anyone know of alternative reasons for this project? It's from Mr Musk, so
> weird is quite possible.
Get his questionable content out of reach of national governments that
would rather not have it available in their countries...
Gert Doering
-- NetMaster
--
have you enabled IPv6 on something today...?
SpaceNet AG Vorstand: Sebastian v. Bomhard,
Karin Schuler, Sebastian Cler
Joseph-Dollinger-Bogen 14 Aufsichtsratsvors.: Dr. Frank Thiäner
D-80807 Muenchen HRB: 136055 (AG Muenchen)
Tel: +49 (0)89/32356-444 USt-IdNr.: DE813185279
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-25 14:30 ` [Starlink] " David Collier-Brown
2026-02-25 14:32 ` [Starlink] Re: Data centers are racing to space ??? and regulation can???t " Gert Doering
@ 2026-02-25 14:42 ` Hesham ElBakoury
2026-02-26 4:28 ` J Pan
2026-02-25 14:50 ` Daniel AJ Sokolov
` (2 subsequent siblings)
4 siblings, 1 reply; 29+ messages in thread
From: Hesham ElBakoury @ 2026-02-25 14:42 UTC (permalink / raw)
To: David Collier-Brown; +Cc: Dave Taht via Starlink
This article says it is horrible idea:
https://taranis.ie/datacenters-in-space-are-a-terrible-horrible-no-good-idea/
Hesham
On Wed, Feb 25, 2026, 6:30 AM David Collier-Brown via Starlink <
starlink@lists.bufferbloat.net> wrote:
> I looked at the radiative cooling problem, and immediately thought ...
> "these things can't possibly work. What is the hidden agenda here?"
>
> Anyone know of alternative reasons for this project? It's from Mr Musk,
> so weird is quite possible.
>
> --dave
>
> On 2/25/26 09:05, Hesham ElBakoury via Starlink wrote:
> >
> https://restofworld.org/2026/orbital-data-centers-ai-sovereignty/?utm_source=tldrnewsletter
> >
> >
> > Hesham
> > _______________________________________________
> > Starlink mailing list -- starlink@lists.bufferbloat.net
> > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>
> --
> David Collier-Brown, | Always do right. This will gratify
> System Programmer and Author | some people and astonish the rest
> davecb@spamcop.net | -- Mark Twain
>
> _______________________________________________
> Starlink mailing list -- starlink@lists.bufferbloat.net
> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-25 14:30 ` [Starlink] " David Collier-Brown
2026-02-25 14:32 ` [Starlink] Re: Data centers are racing to space ??? and regulation can???t " Gert Doering
2026-02-25 14:42 ` [Starlink] Re: Data centers are racing to space — and regulation can’t " Hesham ElBakoury
@ 2026-02-25 14:50 ` Daniel AJ Sokolov
2026-02-26 1:33 ` David Lang
2026-02-25 20:26 ` Brandon Butterworth
2026-02-26 1:28 ` David Lang
4 siblings, 1 reply; 29+ messages in thread
From: Daniel AJ Sokolov @ 2026-02-25 14:50 UTC (permalink / raw)
To: starlink
On 2/25/26 at 15:30, David Collier-Brown via Starlink wrote:
> I looked at the radiative cooling problem, and immediately thought ...
> "these things can't possibly work. What is the hidden agenda here?"
Block spots in orbit
Raise more money at the IPO
PR
From a technology point of view, this is bonkers.
Cheers
Daniel
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-25 14:30 ` [Starlink] " David Collier-Brown
` (2 preceding siblings ...)
2026-02-25 14:50 ` Daniel AJ Sokolov
@ 2026-02-25 20:26 ` Brandon Butterworth
2026-02-26 1:28 ` David Lang
4 siblings, 0 replies; 29+ messages in thread
From: Brandon Butterworth @ 2026-02-25 20:26 UTC (permalink / raw)
To: David Collier-Brown, starlink
On 25/02/2026 14:30:41, "David Collier-Brown via Starlink"
<starlink@lists.bufferbloat.net> wrote:
>Anyone know of alternative reasons for this project? It's from Mr Musk, so weird is quite possible.
Figured he could speed run and end run Neuromancer at the
the same time
brandon
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-25 14:30 ` [Starlink] " David Collier-Brown
` (3 preceding siblings ...)
2026-02-25 20:26 ` Brandon Butterworth
@ 2026-02-26 1:28 ` David Lang
4 siblings, 0 replies; 29+ messages in thread
From: David Lang @ 2026-02-26 1:28 UTC (permalink / raw)
To: David Collier-Brown; +Cc: starlink
David Collier-Brown via Starlink wrote:
> I looked at the radiative cooling problem, and immediately thought ... "these
> things can't possibly work. What is the hidden agenda here?"
That was my initial thinking, but Elon made a post about the difficulty in
getting grid hookups (regulation hoops and delays) that greatly increase the
costs and time needed to build a datacenter.
Elon saw this before anyone else and bought up pretty much all the gas turbine
generators and transformers available. The manufacturers are now quoting lead
times of several years to get new ones, but are resisting increasing their
production capacity (they see this as a bubble and aren't willing to spend money
on bigger production facilities that will be extra capacity after the bubble
ends)
Since he initially talked about the problems in hooking up to the grid, I've
seen a lot of information from other sources confirming the problem.
David Lang
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-25 14:50 ` Daniel AJ Sokolov
@ 2026-02-26 1:33 ` David Lang
2026-02-26 2:38 ` Nick Matthews
0 siblings, 1 reply; 29+ messages in thread
From: David Lang @ 2026-02-26 1:33 UTC (permalink / raw)
To: Daniel AJ Sokolov; +Cc: starlink
Daniel AJ Sokolov wrote:
> Block spots in orbit
at the scale that he operates, everyone else combined in in the noise. Starlink
is already several times the number of other satellites in orbit combined.
besides, in the long run, he's talking about launching from the moon into solar
orbit, not earth orbit, but even if he was just talking about launching into
earth orbit near the moon's orbit, it's not like there are very many satellites
there to contend with.
> From a technology point of view, this is bonkers.
if you only look at technical details, you may be right, but if you add the
regulatory burden and delays in building traditional datacenters, that may be
enough to change the math.
Now, if we could ease the regulations so that it's easier to build power plants
and hook up to the grid (or get small next-gen nuclear power plants operational
so they can be dropped at the datacenters), that could change the math back.
David Lang
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 1:33 ` David Lang
@ 2026-02-26 2:38 ` Nick Matthews
2026-02-26 4:39 ` David Lang
0 siblings, 1 reply; 29+ messages in thread
From: Nick Matthews @ 2026-02-26 2:38 UTC (permalink / raw)
To: David Lang; +Cc: Daniel AJ Sokolov, Dave Taht via Starlink
The underlying theory here is if someone builds a model that can improve
itself faster than humans, they win. Military, economy, future problems,
etc. That could have a lot of real on-Earth impact. There's investments and
races going on that support that theory.
If the major limiting factor is how big and fast you can build power plants
on earth, and assume the person with the most access to power wins, it
starts to make more sense.
However, there's also a giant list of technical assumptions that need to be
true for those assumptions to fly (get it?). And those technical
assumptions don't necessarily need to be true in order to cash the checks
from people that either want to compete in that race or invest in someone
that is.
Some of the assumptions I've come to include:
* Adding more power and data to models eventually gets you to the
intelligence needed to hockey stick. (Versus solving this problem with a
different approach, algorithms, or different kinds of data.)
* The models, data, and underlying algorithms aren't easily replicated by
others once they start exponentially increasing in ability. E.g. can
someone like Deepseek just take the outputs of the first mover, and then
not require the same power capacity and replicate a similar value. This
would slow down the first mover velocity benefits.
* AI eventually starts creating returns.
* Launch costs go down significantly (x10?)
* There's enough room in sun synchronous orbit to run at power scales not
possible on earth without kicking off Kessler
* A combination of very large solar panels, radiative cooling, fluid
exchange between them, the computing, propulsion, and any necessary
redundancy of these components is still economical.
* Operational loss due to radiation, micro asteroids, and general
component failure is tolerable.
* Components like GPUs and RAM and underlying bus structures can be built
to be more radiation tolerant.
* Burning up new orders of magnitude of amounts of elements in the
atmosphere can be managed (aluminum, silicon, etc.)
* Or, there is some amount of in-orbit recycling and manufacturing without
returning material back to Earth.
* Bandwidth can be built for 1) intra cluster within a satellite, 2) cross
cluster via OISl, 3) Back to Earth using RF or lasers.
* Regulatory bodies agree with the risk versus reward and approve this
kind of plan.
* The smarter-than-human AI doesn't decide to destroy the human race in a
move of self preservation because the AI companies didn't have time for
boundaries.
I think it's a neat thought experiment, even if it's a little terrifying in
scale and impact if it's remotely possible.
-nick
On Wed, Feb 25, 2026, 6:33 PM David Lang via Starlink <
starlink@lists.bufferbloat.net> wrote:
> Daniel AJ Sokolov wrote:
>
> > Block spots in orbit
>
> at the scale that he operates, everyone else combined in in the noise.
> Starlink
> is already several times the number of other satellites in orbit combined.
>
> besides, in the long run, he's talking about launching from the moon into
> solar
> orbit, not earth orbit, but even if he was just talking about launching
> into
> earth orbit near the moon's orbit, it's not like there are very many
> satellites
> there to contend with.
>
> > From a technology point of view, this is bonkers.
>
> if you only look at technical details, you may be right, but if you add
> the
> regulatory burden and delays in building traditional datacenters, that may
> be
> enough to change the math.
>
> Now, if we could ease the regulations so that it's easier to build power
> plants
> and hook up to the grid (or get small next-gen nuclear power plants
> operational
> so they can be dropped at the datacenters), that could change the math
> back.
>
> David Lang
> _______________________________________________
> Starlink mailing list -- starlink@lists.bufferbloat.net
> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-25 14:42 ` [Starlink] Re: Data centers are racing to space — and regulation can’t " Hesham ElBakoury
@ 2026-02-26 4:28 ` J Pan
[not found] ` <CAFvDQ9p68AFJ5cQTpyx=HkA2Cf6r1m6F3ssaJh-OJK4kqK=PDQ@mail.gmail.com>
0 siblings, 1 reply; 29+ messages in thread
From: J Pan @ 2026-02-26 4:28 UTC (permalink / raw)
To: Hesham ElBakoury; +Cc: David Collier-Brown, Dave Taht via Starlink
nice discussion and an insightful article too. also a position slide
for a panel on the connected space use cases at the recent dagstuhl
seminar---some food for thought as well. cheers. -j
--
J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA, Web.UVic.CA/~pan
On Wed, Feb 25, 2026 at 6:43 AM Hesham ElBakoury via Starlink
<starlink@lists.bufferbloat.net> wrote:
>
> This article says it is horrible idea:
> https://taranis.ie/datacenters-in-space-are-a-terrible-horrible-no-good-idea/
>
> Hesham
>
> On Wed, Feb 25, 2026, 6:30 AM David Collier-Brown via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
> > I looked at the radiative cooling problem, and immediately thought ...
> > "these things can't possibly work. What is the hidden agenda here?"
> >
> > Anyone know of alternative reasons for this project? It's from Mr Musk,
> > so weird is quite possible.
> >
> > --dave
> >
> > On 2/25/26 09:05, Hesham ElBakoury via Starlink wrote:
> > >
> > https://restofworld.org/2026/orbital-data-centers-ai-sovereignty/?utm_source=tldrnewsletter
> > >
> > >
> > > Hesham
> > > _______________________________________________
> > > Starlink mailing list -- starlink@lists.bufferbloat.net
> > > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
> >
> > --
> > David Collier-Brown, | Always do right. This will gratify
> > System Programmer and Author | some people and astonish the rest
> > davecb@spamcop.net | -- Mark Twain
> >
> > _______________________________________________
> > Starlink mailing list -- starlink@lists.bufferbloat.net
> > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
> >
> _______________________________________________
> Starlink mailing list -- starlink@lists.bufferbloat.net
> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 2:38 ` Nick Matthews
@ 2026-02-26 4:39 ` David Lang
2026-02-26 11:54 ` Mark Handley
0 siblings, 1 reply; 29+ messages in thread
From: David Lang @ 2026-02-26 4:39 UTC (permalink / raw)
To: Nick Matthews; +Cc: David Lang, Daniel AJ Sokolov, Dave Taht via Starlink
a couple comments in response (specifically applying to SpaceX
they are working to reduce launch costs between 10x and 100x
they are not just looking at sun synchronous orbit, but also at launching from
the moon into a moon-size earth orbit or solar orbit
re: chip vulnerability to radiation
chips have gotten MUCH smaller over the years, which in part makes it more
likely for a cell that's hit to flip, but also means that (for a given
capability) it's much smaller, so is far less likely to be hit.
probability based systems don't require every calculation to be perfect
"AI" systems need to be validated anyway (since their behavior can't be
predicted), so if there are too many errors, it just will fail validation
with enough processing capacity, you can re-run the calculations and compare
resuts.
One thing I haven't seen people talk about is that space-based systems are NOT
going to be massive, coherent clusters the way current AI training clusters are.
They will be many smaller clusters with relatively low bandwidth/high latency
communications between them (you can't send data faster than the speed of
light). The first posts about space datacenters were dense, massive things
(comparable in size to ground based systems) with solar panels and radiators
measured in square miles. Elon and SpaceX are talking about many small
satellites in the 100Kw range, similar in size to the starlink satellites that
Starship can deploy.
I fully expect that new training algorithms will be found that will drastically
improve the efficiency, but I also expect that when they are found, those
companies with lots of hardware and expertise in running it will be able to make
better use of the new algorithms, if only to train more models doing different
things at the same time. It still favors those companies that get ahead (and
don't collapse in the process)
every bubble over-builds infrastructure, as a lot of people who lose their
shirts jump on board the new fad without being able to evaluate the companies.
But those companies that fail generally get bought out by others, cheap, and the
infrastructure that is built gets used by someone else with a more realistic
business model. It may take years (see the massive overbuilding of fiber in some
areas), but it will eventually be used.
I think there is disagreement on if AI is going to 'hockey stick' or not, but
even if it doesn't, thee are a lot of good uses for the pattern matching
capability (just not at today's prices)
David Lang
On Wed, 25 Feb 2026, Nick Matthews wrote:
> Date: Wed, 25 Feb 2026 19:38:52 -0700
> From: Nick Matthews <matthnick@gmail.com>
> To: David Lang <david@lang.hm>
> Cc: Daniel AJ Sokolov <daniel@falco.ca>,
> Dave Taht via Starlink <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] Re: Data centers are racing to space — and regulation
> can’t keep up
>
> The underlying theory here is if someone builds a model that can improve
> itself faster than humans, they win. Military, economy, future problems,
> etc. That could have a lot of real on-Earth impact. There's investments and
> races going on that support that theory.
>
> If the major limiting factor is how big and fast you can build power plants
> on earth, and assume the person with the most access to power wins, it
> starts to make more sense.
>
> However, there's also a giant list of technical assumptions that need to be
> true for those assumptions to fly (get it?). And those technical
> assumptions don't necessarily need to be true in order to cash the checks
> from people that either want to compete in that race or invest in someone
> that is.
>
> Some of the assumptions I've come to include:
> * Adding more power and data to models eventually gets you to the
> intelligence needed to hockey stick. (Versus solving this problem with a
> different approach, algorithms, or different kinds of data.)
> * The models, data, and underlying algorithms aren't easily replicated by
> others once they start exponentially increasing in ability. E.g. can
> someone like Deepseek just take the outputs of the first mover, and then
> not require the same power capacity and replicate a similar value. This
> would slow down the first mover velocity benefits.
> * AI eventually starts creating returns.
> * Launch costs go down significantly (x10?)
> * There's enough room in sun synchronous orbit to run at power scales not
> possible on earth without kicking off Kessler
> * A combination of very large solar panels, radiative cooling, fluid
> exchange between them, the computing, propulsion, and any necessary
> redundancy of these components is still economical.
> * Operational loss due to radiation, micro asteroids, and general
> component failure is tolerable.
> * Components like GPUs and RAM and underlying bus structures can be built
> to be more radiation tolerant.
> * Burning up new orders of magnitude of amounts of elements in the
> atmosphere can be managed (aluminum, silicon, etc.)
> * Or, there is some amount of in-orbit recycling and manufacturing without
> returning material back to Earth.
> * Bandwidth can be built for 1) intra cluster within a satellite, 2) cross
> cluster via OISl, 3) Back to Earth using RF or lasers.
> * Regulatory bodies agree with the risk versus reward and approve this
> kind of plan.
> * The smarter-than-human AI doesn't decide to destroy the human race in a
> move of self preservation because the AI companies didn't have time for
> boundaries.
>
> I think it's a neat thought experiment, even if it's a little terrifying in
> scale and impact if it's remotely possible.
>
> -nick
>
> On Wed, Feb 25, 2026, 6:33 PM David Lang via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
>> Daniel AJ Sokolov wrote:
>>
>>> Block spots in orbit
>>
>> at the scale that he operates, everyone else combined in in the noise.
>> Starlink
>> is already several times the number of other satellites in orbit combined.
>>
>> besides, in the long run, he's talking about launching from the moon into
>> solar
>> orbit, not earth orbit, but even if he was just talking about launching
>> into
>> earth orbit near the moon's orbit, it's not like there are very many
>> satellites
>> there to contend with.
>>
>>> From a technology point of view, this is bonkers.
>>
>> if you only look at technical details, you may be right, but if you add
>> the
>> regulatory burden and delays in building traditional datacenters, that may
>> be
>> enough to change the math.
>>
>> Now, if we could ease the regulations so that it's easier to build power
>> plants
>> and hook up to the grid (or get small next-gen nuclear power plants
>> operational
>> so they can be dropped at the datacenters), that could change the math
>> back.
>>
>> David Lang
>> _______________________________________________
>> Starlink mailing list -- starlink@lists.bufferbloat.net
>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>>
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-25 14:05 [Starlink] Data centers are racing to space — and regulation can’t keep up Hesham ElBakoury
2026-02-25 14:30 ` [Starlink] " David Collier-Brown
@ 2026-02-26 4:49 ` David Lang
1 sibling, 0 replies; 29+ messages in thread
From: David Lang @ 2026-02-26 4:49 UTC (permalink / raw)
To: Hesham ElBakoury; +Cc: 5grm-satellite, Dave Taht via Starlink
Hesham ElBakoury wrote:
> https://restofworld.org/2026/orbital-data-centers-ai-sovereignty/?utm_source=tldrnewsletter
sorry, not sympathetic to countries that try to control the Internet not being
able to participate (especially those who can't provide the power on earth
anyway)
But it would probably be cheaper for them to buy SpaceX stellites (owning, not
renting) and pay SpaceX to manage them than to try and build out their own
datacenters (on earth or in space)
The vast majority of countries that have 'you can only use computers in our
country to process data from our country' are doing so to enforce censorship,
which gets even less sympathy from me.
A country does want to ensure they are not dependent on a significant rival for
National Security reasons, but as long as there are multiple competing countries
to buy capcity from.
David Lang
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
[not found] ` <CAFvDQ9p68AFJ5cQTpyx=HkA2Cf6r1m6F3ssaJh-OJK4kqK=PDQ@mail.gmail.com>
@ 2026-02-26 5:54 ` J Pan
2026-02-26 6:01 ` Hesham ElBakoury
0 siblings, 1 reply; 29+ messages in thread
From: J Pan @ 2026-02-26 5:54 UTC (permalink / raw)
To: Hesham ElBakoury; +Cc: Dave Taht via Starlink, Nitinder Mohan
Hi Hesham: thanks for the interest and we are still working on
it---will be announced here i believe when it is available. cheers.
-j
--
J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA, Web.UVic.CA/~pan
On Wed, Feb 25, 2026 at 8:50 PM Hesham ElBakoury <helbakoury@gmail.com> wrote:
>
> Hi J,
> Please send me the report of this dagstuhl seminar
>
> Thanks
> Hesham
>
> On Wed, Feb 25, 2026, 8:29 PM J Pan <Pan@uvic.ca> wrote:
>>
>> nice discussion and an insightful article too. also a position slide
>> for a panel on the connected space use cases at the recent dagstuhl
>> seminar---some food for thought as well. cheers. -j
>> --
>> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA, Web.UVic.CA/~pan
>>
>> On Wed, Feb 25, 2026 at 6:43 AM Hesham ElBakoury via Starlink
>> <starlink@lists.bufferbloat.net> wrote:
>> >
>> > This article says it is horrible idea:
>> > https://taranis.ie/datacenters-in-space-are-a-terrible-horrible-no-good-idea/
>> >
>> > Hesham
>> >
>> > On Wed, Feb 25, 2026, 6:30 AM David Collier-Brown via Starlink <
>> > starlink@lists.bufferbloat.net> wrote:
>> >
>> > > I looked at the radiative cooling problem, and immediately thought ...
>> > > "these things can't possibly work. What is the hidden agenda here?"
>> > >
>> > > Anyone know of alternative reasons for this project? It's from Mr Musk,
>> > > so weird is quite possible.
>> > >
>> > > --dave
>> > >
>> > > On 2/25/26 09:05, Hesham ElBakoury via Starlink wrote:
>> > > >
>> > > https://restofworld.org/2026/orbital-data-centers-ai-sovereignty/?utm_source=tldrnewsletter
>> > > >
>> > > >
>> > > > Hesham
>> > > > _______________________________________________
>> > > > Starlink mailing list -- starlink@lists.bufferbloat.net
>> > > > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>> > >
>> > > --
>> > > David Collier-Brown, | Always do right. This will gratify
>> > > System Programmer and Author | some people and astonish the rest
>> > > davecb@spamcop.net | -- Mark Twain
>> > >
>> > > _______________________________________________
>> > > Starlink mailing list -- starlink@lists.bufferbloat.net
>> > > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>> > >
>> > _______________________________________________
>> > Starlink mailing list -- starlink@lists.bufferbloat.net
>> > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 5:54 ` J Pan
@ 2026-02-26 6:01 ` Hesham ElBakoury
0 siblings, 0 replies; 29+ messages in thread
From: Hesham ElBakoury @ 2026-02-26 6:01 UTC (permalink / raw)
To: J Pan; +Cc: Dave Taht via Starlink, Nitinder Mohan
Great. Thanks J.
Please let us know when the report will be available.
Hesham
On Wed, Feb 25, 2026, 9:54 PM J Pan <Pan@uvic.ca> wrote:
> Hi Hesham: thanks for the interest and we are still working on
> it---will be announced here i believe when it is available. cheers.
> -j
> --
> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA,
> Web.UVic.CA/~pan
> On Wed, Feb 25, 2026 at 8:50 PM Hesham ElBakoury <helbakoury@gmail.com>
> wrote:
> >
> > Hi J,
> > Please send me the report of this dagstuhl seminar
> >
> > Thanks
> > Hesham
> >
> > On Wed, Feb 25, 2026, 8:29 PM J Pan <Pan@uvic.ca> wrote:
> >>
> >> nice discussion and an insightful article too. also a position slide
> >> for a panel on the connected space use cases at the recent dagstuhl
> >> seminar---some food for thought as well. cheers. -j
> >> --
> >> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA,
> Web.UVic.CA/~pan
> >>
> >> On Wed, Feb 25, 2026 at 6:43 AM Hesham ElBakoury via Starlink
> >> <starlink@lists.bufferbloat.net> wrote:
> >> >
> >> > This article says it is horrible idea:
> >> >
> https://taranis.ie/datacenters-in-space-are-a-terrible-horrible-no-good-idea/
> >> >
> >> > Hesham
> >> >
> >> > On Wed, Feb 25, 2026, 6:30 AM David Collier-Brown via Starlink <
> >> > starlink@lists.bufferbloat.net> wrote:
> >> >
> >> > > I looked at the radiative cooling problem, and immediately thought
> ...
> >> > > "these things can't possibly work. What is the hidden agenda here?"
> >> > >
> >> > > Anyone know of alternative reasons for this project? It's from Mr
> Musk,
> >> > > so weird is quite possible.
> >> > >
> >> > > --dave
> >> > >
> >> > > On 2/25/26 09:05, Hesham ElBakoury via Starlink wrote:
> >> > > >
> >> > >
> https://restofworld.org/2026/orbital-data-centers-ai-sovereignty/?utm_source=tldrnewsletter
> >> > > >
> >> > > >
> >> > > > Hesham
> >> > > > _______________________________________________
> >> > > > Starlink mailing list -- starlink@lists.bufferbloat.net
> >> > > > To unsubscribe send an email to
> starlink-leave@lists.bufferbloat.net
> >> > >
> >> > > --
> >> > > David Collier-Brown, | Always do right. This will gratify
> >> > > System Programmer and Author | some people and astonish the rest
> >> > > davecb@spamcop.net | -- Mark Twain
> >> > >
> >> > > _______________________________________________
> >> > > Starlink mailing list -- starlink@lists.bufferbloat.net
> >> > > To unsubscribe send an email to
> starlink-leave@lists.bufferbloat.net
> >> > >
> >> > _______________________________________________
> >> > Starlink mailing list -- starlink@lists.bufferbloat.net
> >> > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 4:39 ` David Lang
@ 2026-02-26 11:54 ` Mark Handley
2026-02-26 13:36 ` Vint Cerf
2026-02-26 18:01 ` David Lang
0 siblings, 2 replies; 29+ messages in thread
From: Mark Handley @ 2026-02-26 11:54 UTC (permalink / raw)
To: David Lang, Nick Matthews; +Cc: Daniel AJ Sokolov, Dave Taht via Starlink
AI datacenters effectively split into training and inference, and it's worth optimizing for one or the other. For training, you want as much compute in one low latency cluster as possible. A single GB200 rack (72 GPUs) is currently around 160kW, and a leading edge training cluster is now well north of 100,000 GPUs. A single VR200 rack will be ~250kW later this year. For inference, you typically neeed between a few and a few hundred GPUs (or equivalents - TPUs, Celebras, etc) interconnected.
It's easy to see inference clusters, especially wafer-scale like Celebras, being capable of being put in orbit. But the problem there is low(ish) latency to customers is also a requirement, so that constrains the orbits you could use. And it's actually not hard to get terrestrial power if you can scatter large numbers of smaller inference clusters worldwide, which is what we do.
It's really hard to see training clusters in orbit, not only for cooling reasons, but also because they have very high failure rates and require a lot of human maintenance. In our current supercomputers we're looking at more than a million optical links in one building, so there is a continuous rate of link failure and replacement. We are continuously replacing switches and GPU nodes. Now you can design for resilience, and we are already running a new network design that does this. But living on the leading edge of what's possible in compute and having low failure rates tend to be mutually incompatible. We are working with all our suppliers to reduce failure rates in the next-but-one generation by design - we'd love that because a non-trivial cause of failure is a technician fixing one fault and causing another. So there's a lot of hope to improve things, but there's nothing coming down the pipeline that would allow large training clusters to have leading edge performance and simultaneously run unmanned.
Mark (currently doing supercomputer networks at OpenAI)
On Thu, 26 Feb 2026, at 4:39 AM, David Lang via Starlink wrote:
> a couple comments in response (specifically applying to SpaceX
>
> they are working to reduce launch costs between 10x and 100x
>
> they are not just looking at sun synchronous orbit, but also at launching from
> the moon into a moon-size earth orbit or solar orbit
>
> re: chip vulnerability to radiation
>
> chips have gotten MUCH smaller over the years, which in part makes it more
> likely for a cell that's hit to flip, but also means that (for a given
> capability) it's much smaller, so is far less likely to be hit.
>
> probability based systems don't require every calculation to be perfect
>
> "AI" systems need to be validated anyway (since their behavior can't be
> predicted), so if there are too many errors, it just will fail validation
>
> with enough processing capacity, you can re-run the calculations and compare
> resuts.
>
>
> One thing I haven't seen people talk about is that space-based systems
> are NOT
> going to be massive, coherent clusters the way current AI training
> clusters are.
> They will be many smaller clusters with relatively low bandwidth/high
> latency
> communications between them (you can't send data faster than the speed
> of
> light). The first posts about space datacenters were dense, massive
> things
> (comparable in size to ground based systems) with solar panels and
> radiators
> measured in square miles. Elon and SpaceX are talking about many small
> satellites in the 100Kw range, similar in size to the starlink
> satellites that
> Starship can deploy.
>
>
>
> I fully expect that new training algorithms will be found that will
> drastically
> improve the efficiency, but I also expect that when they are found,
> those
> companies with lots of hardware and expertise in running it will be
> able to make
> better use of the new algorithms, if only to train more models doing
> different
> things at the same time. It still favors those companies that get ahead
> (and
> don't collapse in the process)
>
> every bubble over-builds infrastructure, as a lot of people who lose
> their
> shirts jump on board the new fad without being able to evaluate the
> companies.
> But those companies that fail generally get bought out by others,
> cheap, and the
> infrastructure that is built gets used by someone else with a more
> realistic
> business model. It may take years (see the massive overbuilding of
> fiber in some
> areas), but it will eventually be used.
>
> I think there is disagreement on if AI is going to 'hockey stick' or not, but
> even if it doesn't, thee are a lot of good uses for the pattern matching
> capability (just not at today's prices)
>
> David Lang
>
>
> On Wed, 25 Feb 2026, Nick Matthews wrote:
>
>> Date: Wed, 25 Feb 2026 19:38:52 -0700
>> From: Nick Matthews <matthnick@gmail.com>
>> To: David Lang <david@lang.hm>
>> Cc: Daniel AJ Sokolov <daniel@falco.ca>,
>> Dave Taht via Starlink <starlink@lists.bufferbloat.net>
>> Subject: Re: [Starlink] Re: Data centers are racing to space — and regulation
>> can’t keep up
>>
>> The underlying theory here is if someone builds a model that can improve
>> itself faster than humans, they win. Military, economy, future problems,
>> etc. That could have a lot of real on-Earth impact. There's investments and
>> races going on that support that theory.
>>
>> If the major limiting factor is how big and fast you can build power plants
>> on earth, and assume the person with the most access to power wins, it
>> starts to make more sense.
>>
>> However, there's also a giant list of technical assumptions that need to be
>> true for those assumptions to fly (get it?). And those technical
>> assumptions don't necessarily need to be true in order to cash the checks
>> from people that either want to compete in that race or invest in someone
>> that is.
>>
>> Some of the assumptions I've come to include:
>> * Adding more power and data to models eventually gets you to the
>> intelligence needed to hockey stick. (Versus solving this problem with a
>> different approach, algorithms, or different kinds of data.)
>> * The models, data, and underlying algorithms aren't easily replicated by
>> others once they start exponentially increasing in ability. E.g. can
>> someone like Deepseek just take the outputs of the first mover, and then
>> not require the same power capacity and replicate a similar value. This
>> would slow down the first mover velocity benefits.
>> * AI eventually starts creating returns.
>> * Launch costs go down significantly (x10?)
>> * There's enough room in sun synchronous orbit to run at power scales not
>> possible on earth without kicking off Kessler
>> * A combination of very large solar panels, radiative cooling, fluid
>> exchange between them, the computing, propulsion, and any necessary
>> redundancy of these components is still economical.
>> * Operational loss due to radiation, micro asteroids, and general
>> component failure is tolerable.
>> * Components like GPUs and RAM and underlying bus structures can be built
>> to be more radiation tolerant.
>> * Burning up new orders of magnitude of amounts of elements in the
>> atmosphere can be managed (aluminum, silicon, etc.)
>> * Or, there is some amount of in-orbit recycling and manufacturing without
>> returning material back to Earth.
>> * Bandwidth can be built for 1) intra cluster within a satellite, 2) cross
>> cluster via OISl, 3) Back to Earth using RF or lasers.
>> * Regulatory bodies agree with the risk versus reward and approve this
>> kind of plan.
>> * The smarter-than-human AI doesn't decide to destroy the human race in a
>> move of self preservation because the AI companies didn't have time for
>> boundaries.
>>
>> I think it's a neat thought experiment, even if it's a little terrifying in
>> scale and impact if it's remotely possible.
>>
>> -nick
>>
>> On Wed, Feb 25, 2026, 6:33 PM David Lang via Starlink <
>> starlink@lists.bufferbloat.net> wrote:
>>
>>> Daniel AJ Sokolov wrote:
>>>
>>>> Block spots in orbit
>>>
>>> at the scale that he operates, everyone else combined in in the noise.
>>> Starlink
>>> is already several times the number of other satellites in orbit combined.
>>>
>>> besides, in the long run, he's talking about launching from the moon into
>>> solar
>>> orbit, not earth orbit, but even if he was just talking about launching
>>> into
>>> earth orbit near the moon's orbit, it's not like there are very many
>>> satellites
>>> there to contend with.
>>>
>>>> From a technology point of view, this is bonkers.
>>>
>>> if you only look at technical details, you may be right, but if you add
>>> the
>>> regulatory burden and delays in building traditional datacenters, that may
>>> be
>>> enough to change the math.
>>>
>>> Now, if we could ease the regulations so that it's easier to build power
>>> plants
>>> and hook up to the grid (or get small next-gen nuclear power plants
>>> operational
>>> so they can be dropped at the datacenters), that could change the math
>>> back.
>>>
>>> David Lang
>>> _______________________________________________
>>> Starlink mailing list -- starlink@lists.bufferbloat.net
>>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>>>
>>
> _______________________________________________
> Starlink mailing list -- starlink@lists.bufferbloat.net
> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 11:54 ` Mark Handley
@ 2026-02-26 13:36 ` Vint Cerf
2026-02-26 13:56 ` Nitinder Mohan
2026-02-26 14:14 ` [Starlink] Re: Data centers are racing to space — and regulation can’t keep up Mark Handley
2026-02-26 18:01 ` David Lang
1 sibling, 2 replies; 29+ messages in thread
From: Vint Cerf @ 2026-02-26 13:36 UTC (permalink / raw)
To: Mark Handley
Cc: David Lang, Nick Matthews, Daniel AJ Sokolov,
Dave Taht via Starlink
maybe this has already been addressed but things in orbit disappear over
the horizon. If you rely on store/forward relay to keep in touch with an
inferencing engine while in orbit, you will experience variable latency. I
don't see how you could easily instantiate the same inferencing on the next
data center to come into view. Seems to me this is a very different
computing and communication environment than ground-based data centers on
the terrestrial Internet.
What am I missing?
v
On Thu, Feb 26, 2026 at 6:55 AM Mark Handley via Starlink <
starlink@lists.bufferbloat.net> wrote:
> AI datacenters effectively split into training and inference, and it's
> worth optimizing for one or the other. For training, you want as much
> compute in one low latency cluster as possible. A single GB200 rack (72
> GPUs) is currently around 160kW, and a leading edge training cluster is now
> well north of 100,000 GPUs. A single VR200 rack will be ~250kW later this
> year. For inference, you typically neeed between a few and a few hundred
> GPUs (or equivalents - TPUs, Celebras, etc) interconnected.
>
> It's easy to see inference clusters, especially wafer-scale like Celebras,
> being capable of being put in orbit. But the problem there is low(ish)
> latency to customers is also a requirement, so that constrains the orbits
> you could use. And it's actually not hard to get terrestrial power if you
> can scatter large numbers of smaller inference clusters worldwide, which is
> what we do.
>
> It's really hard to see training clusters in orbit, not only for cooling
> reasons, but also because they have very high failure rates and require a
> lot of human maintenance. In our current supercomputers we're looking at
> more than a million optical links in one building, so there is a continuous
> rate of link failure and replacement. We are continuously replacing
> switches and GPU nodes. Now you can design for resilience, and we are
> already running a new network design that does this. But living on the
> leading edge of what's possible in compute and having low failure rates
> tend to be mutually incompatible. We are working with all our suppliers to
> reduce failure rates in the next-but-one generation by design - we'd love
> that because a non-trivial cause of failure is a technician fixing one
> fault and causing another. So there's a lot of hope to improve things, but
> there's nothing coming down the pipeline that would allow large training
> clusters to have leading edge performance and simultaneously run unmanned.
>
> Mark (currently doing supercomputer networks at OpenAI)
>
> On Thu, 26 Feb 2026, at 4:39 AM, David Lang via Starlink wrote:
> > a couple comments in response (specifically applying to SpaceX
> >
> > they are working to reduce launch costs between 10x and 100x
> >
> > they are not just looking at sun synchronous orbit, but also at
> launching from
> > the moon into a moon-size earth orbit or solar orbit
> >
> > re: chip vulnerability to radiation
> >
> > chips have gotten MUCH smaller over the years, which in part makes it
> more
> > likely for a cell that's hit to flip, but also means that (for a given
> > capability) it's much smaller, so is far less likely to be hit.
> >
> > probability based systems don't require every calculation to be perfect
> >
> > "AI" systems need to be validated anyway (since their behavior can't be
> > predicted), so if there are too many errors, it just will fail validation
> >
> > with enough processing capacity, you can re-run the calculations and
> compare
> > resuts.
> >
> >
> > One thing I haven't seen people talk about is that space-based systems
> > are NOT
> > going to be massive, coherent clusters the way current AI training
> > clusters are.
> > They will be many smaller clusters with relatively low bandwidth/high
> > latency
> > communications between them (you can't send data faster than the speed
> > of
> > light). The first posts about space datacenters were dense, massive
> > things
> > (comparable in size to ground based systems) with solar panels and
> > radiators
> > measured in square miles. Elon and SpaceX are talking about many small
> > satellites in the 100Kw range, similar in size to the starlink
> > satellites that
> > Starship can deploy.
> >
> >
> >
> > I fully expect that new training algorithms will be found that will
> > drastically
> > improve the efficiency, but I also expect that when they are found,
> > those
> > companies with lots of hardware and expertise in running it will be
> > able to make
> > better use of the new algorithms, if only to train more models doing
> > different
> > things at the same time. It still favors those companies that get ahead
> > (and
> > don't collapse in the process)
> >
> > every bubble over-builds infrastructure, as a lot of people who lose
> > their
> > shirts jump on board the new fad without being able to evaluate the
> > companies.
> > But those companies that fail generally get bought out by others,
> > cheap, and the
> > infrastructure that is built gets used by someone else with a more
> > realistic
> > business model. It may take years (see the massive overbuilding of
> > fiber in some
> > areas), but it will eventually be used.
> >
> > I think there is disagreement on if AI is going to 'hockey stick' or
> not, but
> > even if it doesn't, thee are a lot of good uses for the pattern matching
> > capability (just not at today's prices)
> >
> > David Lang
> >
> >
> > On Wed, 25 Feb 2026, Nick Matthews wrote:
> >
> >> Date: Wed, 25 Feb 2026 19:38:52 -0700
> >> From: Nick Matthews <matthnick@gmail.com>
> >> To: David Lang <david@lang.hm>
> >> Cc: Daniel AJ Sokolov <daniel@falco.ca>,
> >> Dave Taht via Starlink <starlink@lists.bufferbloat.net>
> >> Subject: Re: [Starlink] Re: Data centers are racing to space — and
> regulation
> >> can’t keep up
> >>
> >> The underlying theory here is if someone builds a model that can improve
> >> itself faster than humans, they win. Military, economy, future problems,
> >> etc. That could have a lot of real on-Earth impact. There's investments
> and
> >> races going on that support that theory.
> >>
> >> If the major limiting factor is how big and fast you can build power
> plants
> >> on earth, and assume the person with the most access to power wins, it
> >> starts to make more sense.
> >>
> >> However, there's also a giant list of technical assumptions that need
> to be
> >> true for those assumptions to fly (get it?). And those technical
> >> assumptions don't necessarily need to be true in order to cash the
> checks
> >> from people that either want to compete in that race or invest in
> someone
> >> that is.
> >>
> >> Some of the assumptions I've come to include:
> >> * Adding more power and data to models eventually gets you to the
> >> intelligence needed to hockey stick. (Versus solving this problem with a
> >> different approach, algorithms, or different kinds of data.)
> >> * The models, data, and underlying algorithms aren't easily replicated
> by
> >> others once they start exponentially increasing in ability. E.g. can
> >> someone like Deepseek just take the outputs of the first mover, and then
> >> not require the same power capacity and replicate a similar value. This
> >> would slow down the first mover velocity benefits.
> >> * AI eventually starts creating returns.
> >> * Launch costs go down significantly (x10?)
> >> * There's enough room in sun synchronous orbit to run at power scales
> not
> >> possible on earth without kicking off Kessler
> >> * A combination of very large solar panels, radiative cooling, fluid
> >> exchange between them, the computing, propulsion, and any necessary
> >> redundancy of these components is still economical.
> >> * Operational loss due to radiation, micro asteroids, and general
> >> component failure is tolerable.
> >> * Components like GPUs and RAM and underlying bus structures can be
> built
> >> to be more radiation tolerant.
> >> * Burning up new orders of magnitude of amounts of elements in the
> >> atmosphere can be managed (aluminum, silicon, etc.)
> >> * Or, there is some amount of in-orbit recycling and manufacturing
> without
> >> returning material back to Earth.
> >> * Bandwidth can be built for 1) intra cluster within a satellite, 2)
> cross
> >> cluster via OISl, 3) Back to Earth using RF or lasers.
> >> * Regulatory bodies agree with the risk versus reward and approve this
> >> kind of plan.
> >> * The smarter-than-human AI doesn't decide to destroy the human race in
> a
> >> move of self preservation because the AI companies didn't have time for
> >> boundaries.
> >>
> >> I think it's a neat thought experiment, even if it's a little
> terrifying in
> >> scale and impact if it's remotely possible.
> >>
> >> -nick
> >>
> >> On Wed, Feb 25, 2026, 6:33 PM David Lang via Starlink <
> >> starlink@lists.bufferbloat.net> wrote:
> >>
> >>> Daniel AJ Sokolov wrote:
> >>>
> >>>> Block spots in orbit
> >>>
> >>> at the scale that he operates, everyone else combined in in the noise.
> >>> Starlink
> >>> is already several times the number of other satellites in orbit
> combined.
> >>>
> >>> besides, in the long run, he's talking about launching from the moon
> into
> >>> solar
> >>> orbit, not earth orbit, but even if he was just talking about launching
> >>> into
> >>> earth orbit near the moon's orbit, it's not like there are very many
> >>> satellites
> >>> there to contend with.
> >>>
> >>>> From a technology point of view, this is bonkers.
> >>>
> >>> if you only look at technical details, you may be right, but if you add
> >>> the
> >>> regulatory burden and delays in building traditional datacenters, that
> may
> >>> be
> >>> enough to change the math.
> >>>
> >>> Now, if we could ease the regulations so that it's easier to build
> power
> >>> plants
> >>> and hook up to the grid (or get small next-gen nuclear power plants
> >>> operational
> >>> so they can be dropped at the datacenters), that could change the math
> >>> back.
> >>>
> >>> David Lang
> >>> _______________________________________________
> >>> Starlink mailing list -- starlink@lists.bufferbloat.net
> >>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
> >>>
> >>
> > _______________________________________________
> > Starlink mailing list -- starlink@lists.bufferbloat.net
> > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
> _______________________________________________
> Starlink mailing list -- starlink@lists.bufferbloat.net
> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>
--
Please send any postal/overnight deliveries to:
Vint Cerf
Google, LLC
1900 Reston Metro Plaza, 16th Floor
Reston, VA 20190
+1 (571) 213 1346
until further notice
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 13:36 ` Vint Cerf
@ 2026-02-26 13:56 ` Nitinder Mohan
2026-02-26 21:36 ` Ulrich Speidel
2026-02-26 14:14 ` [Starlink] Re: Data centers are racing to space — and regulation can’t keep up Mark Handley
1 sibling, 1 reply; 29+ messages in thread
From: Nitinder Mohan @ 2026-02-26 13:56 UTC (permalink / raw)
To: Vint Cerf, Mark Handley
Cc: David Lang, Nick Matthews, Daniel AJ Sokolov,
Dave Taht via Starlink
This has been a great discussion so far! Thanks all!
We recently held a Dagstuhl seminar (26062, "Connected Space")<https://www.dagstuhl.de/en/seminars/seminar-calendar/seminar-details/26062> bringing together researchers from academia, industry, and space agencies to work through exactly these questions. We are still preparing the report and I will summarize the key findings in TheNetworkingChannel panel<https://networkingchannel.eu/connected-space-challenges-and-opportunities-in-satellite-computing-and-networking/> <https://networkingchannel.eu/connected-space-challenges-and-opportunities-in-satellite-computing-and-networking/> in a few weeks (register to catch that). In the meantime, let me share a few insights on the discussions here.
1.
The downlink bottleneck is the real motivation for space computing, not replacing ground data centers. Satellites with high-fidelity sensors collect on the order of terabytes per orbit but can transmit only tens of gigabytes per ground pass. For deep space missions the situation is far worse, with Mars orbiters returning roughly 1 percent of captured data. This data disparity makes onboard computation an engineering necessity for filtering and prioritizing what gets sent down. In LEO, the benefits are limited but we converged that it can be beneficial to do AI-in-space for space-generated data (see point 2 and 5).
2.
Orbital data centers serving Earth-based users face fundamental physics constraints, exactly as this thread has identified. Heat dissipation is the hardest problem. Satellites can only radiate heat, and available radiator surface area is strictly limited. Unlike ground facilities with active cooling, there is a hard thermodynamic ceiling on how much computation any individual satellite can sustain. The seminar reached strong consensus that the "data center in space" concept for general Earth-centric workloads is not validated, and the sustainability math does not currently work out. Published analysis presented at the seminar showed that the CO2 footprint of launching computing hardware to orbit via current-generation rockets far exceeds that of operating equivalent terrestrial facilities.
3.
The inference vs. training distinction raised in this thread maps precisely to what we found. Training clusters are essentially ruled out for space. They require massive low-latency interconnects, continuous human maintenance for hardware failures, and power densities incompatible with orbital constraints. Inference is more plausible in principle given smaller cluster sizes, but the latency and handover problems raised in this thread are real and unsolved. As objects in orbit disappear over the horizon, maintaining session continuity with an inference engine requires either store-and-forward (introducing variable, potentially large latency) or instantiating equivalent state on the next satellite coming into view, which is an open distributed systems problem with no proven solution at scale.
4.
The appropriate model is distributed, not centralized. Rather than attempting to replicate terrestrial-scale data centers in orbit, the seminar converged on distributed computing across constellations as the viable paradigm. Many smaller satellites each performing focused preprocessing, filtering, and classification, then coordinating results. This distributes thermal loads and power requirements while matching the physical reality of orbital mechanics.
5.
Lightweight, purpose-built AI is what works in space, not LLMs. The seminar found clear consensus that large language models and heavy transformer architectures are inappropriate for orbital deployment given power, thermal, and radiation constraints. What does work are custom convolutional neural networks optimized for specific tasks (cloud detection, anomaly identification, object tracking) that can run within tight power and time budgets. There is one can use the LEO- or MEO- based AI data centers for processing remote sensing generated data because they are very large hyperspectral images and these satellites have limited point contact and transfer times with the Earth. Of course, this brings up the question of inter-constellation connectivity, which itself is an interesting research direction.
6.
The COTS hardware shift is real but comes with tradeoffs. The space industry is moving away from radiation-hardened legacy processors toward commercial off-the-shelf components with appropriate fault tolerance and shielding. This dramatically improves available compute performance. But as noted in this thread, radiation effects on modern small-geometry chips are a genuine concern, and the approach works only when you have enough redundancy across a constellation to tolerate individual failures.
7.
The points about regulation and launch costs cutting both ways are well taken. The seminar also spent significant time on policy and sustainability. Concerns were raised about the prospect of massive constellations deployed for AI purposes, including debris risks, atmospheric effects from re-entry, and whether the space industry is repeating historical patterns of overbuilding driven by competition rather than validated demand. The sustainability question remains genuinely open and needs rigorous full-lifecycle accounting that does not yet exist.
For anyone interested, I wrote a short summary of the seminar findings here: https://spearlab.nl/news/2026-02-10-dagstuhl-connected-space-seminar
Thanks and Regards,
Nitinder Mohan
Assistant Professor
Head of SPEAR Lab, Networked Systems Group
TU Delft, Netherlands
Personal website: https://www.nitindermohan.com/
Lab website: https://spearlab.nl/
From: Vint Cerf via Starlink <starlink@lists.bufferbloat.net>
Date: Thursday, 26 February 2026 at 14:37
To: Mark Handley <mark@handley.org.uk>
Cc: David Lang <david@lang.hm>, Nick Matthews <matthnick@gmail.com>, Daniel AJ Sokolov <daniel@falco.ca>, Dave Taht via Starlink <starlink@lists.bufferbloat.net>
Subject: [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
maybe this has already been addressed but things in orbit disappear over
the horizon. If you rely on store/forward relay to keep in touch with an
inferencing engine while in orbit, you will experience variable latency. I
don't see how you could easily instantiate the same inferencing on the next
data center to come into view. Seems to me this is a very different
computing and communication environment than ground-based data centers on
the terrestrial Internet.
What am I missing?
v
On Thu, Feb 26, 2026 at 6:55 AM Mark Handley via Starlink <
starlink@lists.bufferbloat.net> wrote:
> AI datacenters effectively split into training and inference, and it's
> worth optimizing for one or the other. For training, you want as much
> compute in one low latency cluster as possible. A single GB200 rack (72
> GPUs) is currently around 160kW, and a leading edge training cluster is now
> well north of 100,000 GPUs. A single VR200 rack will be ~250kW later this
> year. For inference, you typically neeed between a few and a few hundred
> GPUs (or equivalents - TPUs, Celebras, etc) interconnected.
>
> It's easy to see inference clusters, especially wafer-scale like Celebras,
> being capable of being put in orbit. But the problem there is low(ish)
> latency to customers is also a requirement, so that constrains the orbits
> you could use. And it's actually not hard to get terrestrial power if you
> can scatter large numbers of smaller inference clusters worldwide, which is
> what we do.
>
> It's really hard to see training clusters in orbit, not only for cooling
> reasons, but also because they have very high failure rates and require a
> lot of human maintenance. In our current supercomputers we're looking at
> more than a million optical links in one building, so there is a continuous
> rate of link failure and replacement. We are continuously replacing
> switches and GPU nodes. Now you can design for resilience, and we are
> already running a new network design that does this. But living on the
> leading edge of what's possible in compute and having low failure rates
> tend to be mutually incompatible. We are working with all our suppliers to
> reduce failure rates in the next-but-one generation by design - we'd love
> that because a non-trivial cause of failure is a technician fixing one
> fault and causing another. So there's a lot of hope to improve things, but
> there's nothing coming down the pipeline that would allow large training
> clusters to have leading edge performance and simultaneously run unmanned.
>
> Mark (currently doing supercomputer networks at OpenAI)
>
> On Thu, 26 Feb 2026, at 4:39 AM, David Lang via Starlink wrote:
> > a couple comments in response (specifically applying to SpaceX
> >
> > they are working to reduce launch costs between 10x and 100x
> >
> > they are not just looking at sun synchronous orbit, but also at
> launching from
> > the moon into a moon-size earth orbit or solar orbit
> >
> > re: chip vulnerability to radiation
> >
> > chips have gotten MUCH smaller over the years, which in part makes it
> more
> > likely for a cell that's hit to flip, but also means that (for a given
> > capability) it's much smaller, so is far less likely to be hit.
> >
> > probability based systems don't require every calculation to be perfect
> >
> > "AI" systems need to be validated anyway (since their behavior can't be
> > predicted), so if there are too many errors, it just will fail validation
> >
> > with enough processing capacity, you can re-run the calculations and
> compare
> > resuts.
> >
> >
> > One thing I haven't seen people talk about is that space-based systems
> > are NOT
> > going to be massive, coherent clusters the way current AI training
> > clusters are.
> > They will be many smaller clusters with relatively low bandwidth/high
> > latency
> > communications between them (you can't send data faster than the speed
> > of
> > light). The first posts about space datacenters were dense, massive
> > things
> > (comparable in size to ground based systems) with solar panels and
> > radiators
> > measured in square miles. Elon and SpaceX are talking about many small
> > satellites in the 100Kw range, similar in size to the starlink
> > satellites that
> > Starship can deploy.
> >
> >
> >
> > I fully expect that new training algorithms will be found that will
> > drastically
> > improve the efficiency, but I also expect that when they are found,
> > those
> > companies with lots of hardware and expertise in running it will be
> > able to make
> > better use of the new algorithms, if only to train more models doing
> > different
> > things at the same time. It still favors those companies that get ahead
> > (and
> > don't collapse in the process)
> >
> > every bubble over-builds infrastructure, as a lot of people who lose
> > their
> > shirts jump on board the new fad without being able to evaluate the
> > companies.
> > But those companies that fail generally get bought out by others,
> > cheap, and the
> > infrastructure that is built gets used by someone else with a more
> > realistic
> > business model. It may take years (see the massive overbuilding of
> > fiber in some
> > areas), but it will eventually be used.
> >
> > I think there is disagreement on if AI is going to 'hockey stick' or
> not, but
> > even if it doesn't, thee are a lot of good uses for the pattern matching
> > capability (just not at today's prices)
> >
> > David Lang
> >
> >
> > On Wed, 25 Feb 2026, Nick Matthews wrote:
> >
> >> Date: Wed, 25 Feb 2026 19:38:52 -0700
> >> From: Nick Matthews <matthnick@gmail.com>
> >> To: David Lang <david@lang.hm>
> >> Cc: Daniel AJ Sokolov <daniel@falco.ca>,
> >> Dave Taht via Starlink <starlink@lists.bufferbloat.net>
> >> Subject: Re: [Starlink] Re: Data centers are racing to space — and
> regulation
> >> can’t keep up
> >>
> >> The underlying theory here is if someone builds a model that can improve
> >> itself faster than humans, they win. Military, economy, future problems,
> >> etc. That could have a lot of real on-Earth impact. There's investments
> and
> >> races going on that support that theory.
> >>
> >> If the major limiting factor is how big and fast you can build power
> plants
> >> on earth, and assume the person with the most access to power wins, it
> >> starts to make more sense.
> >>
> >> However, there's also a giant list of technical assumptions that need
> to be
> >> true for those assumptions to fly (get it?). And those technical
> >> assumptions don't necessarily need to be true in order to cash the
> checks
> >> from people that either want to compete in that race or invest in
> someone
> >> that is.
> >>
> >> Some of the assumptions I've come to include:
> >> * Adding more power and data to models eventually gets you to the
> >> intelligence needed to hockey stick. (Versus solving this problem with a
> >> different approach, algorithms, or different kinds of data.)
> >> * The models, data, and underlying algorithms aren't easily replicated
> by
> >> others once they start exponentially increasing in ability. E.g. can
> >> someone like Deepseek just take the outputs of the first mover, and then
> >> not require the same power capacity and replicate a similar value. This
> >> would slow down the first mover velocity benefits.
> >> * AI eventually starts creating returns.
> >> * Launch costs go down significantly (x10?)
> >> * There's enough room in sun synchronous orbit to run at power scales
> not
> >> possible on earth without kicking off Kessler
> >> * A combination of very large solar panels, radiative cooling, fluid
> >> exchange between them, the computing, propulsion, and any necessary
> >> redundancy of these components is still economical.
> >> * Operational loss due to radiation, micro asteroids, and general
> >> component failure is tolerable.
> >> * Components like GPUs and RAM and underlying bus structures can be
> built
> >> to be more radiation tolerant.
> >> * Burning up new orders of magnitude of amounts of elements in the
> >> atmosphere can be managed (aluminum, silicon, etc.)
> >> * Or, there is some amount of in-orbit recycling and manufacturing
> without
> >> returning material back to Earth.
> >> * Bandwidth can be built for 1) intra cluster within a satellite, 2)
> cross
> >> cluster via OISl, 3) Back to Earth using RF or lasers.
> >> * Regulatory bodies agree with the risk versus reward and approve this
> >> kind of plan.
> >> * The smarter-than-human AI doesn't decide to destroy the human race in
> a
> >> move of self preservation because the AI companies didn't have time for
> >> boundaries.
> >>
> >> I think it's a neat thought experiment, even if it's a little
> terrifying in
> >> scale and impact if it's remotely possible.
> >>
> >> -nick
> >>
> >> On Wed, Feb 25, 2026, 6:33 PM David Lang via Starlink <
> >> starlink@lists.bufferbloat.net> wrote:
> >>
> >>> Daniel AJ Sokolov wrote:
> >>>
> >>>> Block spots in orbit
> >>>
> >>> at the scale that he operates, everyone else combined in in the noise.
> >>> Starlink
> >>> is already several times the number of other satellites in orbit
> combined.
> >>>
> >>> besides, in the long run, he's talking about launching from the moon
> into
> >>> solar
> >>> orbit, not earth orbit, but even if he was just talking about launching
> >>> into
> >>> earth orbit near the moon's orbit, it's not like there are very many
> >>> satellites
> >>> there to contend with.
> >>>
> >>>> From a technology point of view, this is bonkers.
> >>>
> >>> if you only look at technical details, you may be right, but if you add
> >>> the
> >>> regulatory burden and delays in building traditional datacenters, that
> may
> >>> be
> >>> enough to change the math.
> >>>
> >>> Now, if we could ease the regulations so that it's easier to build
> power
> >>> plants
> >>> and hook up to the grid (or get small next-gen nuclear power plants
> >>> operational
> >>> so they can be dropped at the datacenters), that could change the math
> >>> back.
> >>>
> >>> David Lang
> >>> _______________________________________________
> >>> Starlink mailing list -- starlink@lists.bufferbloat.net
> >>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
> >>>
> >>
> > _______________________________________________
> > Starlink mailing list -- starlink@lists.bufferbloat.net
> > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
> _______________________________________________
> Starlink mailing list -- starlink@lists.bufferbloat.net
> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>
--
Please send any postal/overnight deliveries to:
Vint Cerf
Google, LLC
1900 Reston Metro Plaza, 16th Floor
Reston, VA 20190
+1 (571) 213 1346
until further notice
_______________________________________________
Starlink mailing list -- starlink@lists.bufferbloat.net
To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 13:36 ` Vint Cerf
2026-02-26 13:56 ` Nitinder Mohan
@ 2026-02-26 14:14 ` Mark Handley
1 sibling, 0 replies; 29+ messages in thread
From: Mark Handley @ 2026-02-26 14:14 UTC (permalink / raw)
To: Vint Cerf
Cc: David Lang, Nick Matthews, Daniel AJ Sokolov,
Dave Taht via Starlink
Hi Vint,
I don't think you're missing anything - it seems hard to me. Now the latency requirements are probably not all that strict, so I would guess you can keep a user's session on one inference platform for a number of minutes, routing the traffic multihop to that platform if needed as it moves away. At some point you'll need to migrate the session cache to another closer platform. SpaceX probably have enough ISL capacity to do this, but you're likely to get a handoff glitch if you time it wrong. Likely easier for text sessions than interactive voice. Doesn't seem impossible to engineer - SpaceX have plenty of experience in this area - but definitely some downsides beyond just cooling.
Mark
On Thu, 26 Feb 2026, at 1:36 PM, Vint Cerf wrote:
> maybe this has already been addressed but things in orbit disappear over the horizon. If you rely on store/forward relay to keep in touch with an inferencing engine while in orbit, you will experience variable latency. I don't see how you could easily instantiate the same inferencing on the next data center to come into view. Seems to me this is a very different computing and communication environment than ground-based data centers on the terrestrial Internet.
>
> What am I missing?
>
> v
>
>
> On Thu, Feb 26, 2026 at 6:55 AM Mark Handley via Starlink <starlink@lists.bufferbloat.net> wrote:
>> AI datacenters effectively split into training and inference, and it's worth optimizing for one or the other. For training, you want as much compute in one low latency cluster as possible. A single GB200 rack (72 GPUs) is currently around 160kW, and a leading edge training cluster is now well north of 100,000 GPUs. A single VR200 rack will be ~250kW later this year. For inference, you typically neeed between a few and a few hundred GPUs (or equivalents - TPUs, Celebras, etc) interconnected.
>>
>> It's easy to see inference clusters, especially wafer-scale like Celebras, being capable of being put in orbit. But the problem there is low(ish) latency to customers is also a requirement, so that constrains the orbits you could use. And it's actually not hard to get terrestrial power if you can scatter large numbers of smaller inference clusters worldwide, which is what we do.
>>
>> It's really hard to see training clusters in orbit, not only for cooling reasons, but also because they have very high failure rates and require a lot of human maintenance. In our current supercomputers we're looking at more than a million optical links in one building, so there is a continuous rate of link failure and replacement. We are continuously replacing switches and GPU nodes. Now you can design for resilience, and we are already running a new network design that does this. But living on the leading edge of what's possible in compute and having low failure rates tend to be mutually incompatible. We are working with all our suppliers to reduce failure rates in the next-but-one generation by design - we'd love that because a non-trivial cause of failure is a technician fixing one fault and causing another. So there's a lot of hope to improve things, but there's nothing coming down the pipeline that would allow large training clusters to have leading edge performance and simultaneously run unmanned.
>>
>> Mark (currently doing supercomputer networks at OpenAI)
>>
>> On Thu, 26 Feb 2026, at 4:39 AM, David Lang via Starlink wrote:
>> > a couple comments in response (specifically applying to SpaceX
>> >
>> > they are working to reduce launch costs between 10x and 100x
>> >
>> > they are not just looking at sun synchronous orbit, but also at launching from
>> > the moon into a moon-size earth orbit or solar orbit
>> >
>> > re: chip vulnerability to radiation
>> >
>> > chips have gotten MUCH smaller over the years, which in part makes it more
>> > likely for a cell that's hit to flip, but also means that (for a given
>> > capability) it's much smaller, so is far less likely to be hit.
>> >
>> > probability based systems don't require every calculation to be perfect
>> >
>> > "AI" systems need to be validated anyway (since their behavior can't be
>> > predicted), so if there are too many errors, it just will fail validation
>> >
>> > with enough processing capacity, you can re-run the calculations and compare
>> > resuts.
>> >
>> >
>> > One thing I haven't seen people talk about is that space-based systems
>> > are NOT
>> > going to be massive, coherent clusters the way current AI training
>> > clusters are.
>> > They will be many smaller clusters with relatively low bandwidth/high
>> > latency
>> > communications between them (you can't send data faster than the speed
>> > of
>> > light). The first posts about space datacenters were dense, massive
>> > things
>> > (comparable in size to ground based systems) with solar panels and
>> > radiators
>> > measured in square miles. Elon and SpaceX are talking about many small
>> > satellites in the 100Kw range, similar in size to the starlink
>> > satellites that
>> > Starship can deploy.
>> >
>> >
>> >
>> > I fully expect that new training algorithms will be found that will
>> > drastically
>> > improve the efficiency, but I also expect that when they are found,
>> > those
>> > companies with lots of hardware and expertise in running it will be
>> > able to make
>> > better use of the new algorithms, if only to train more models doing
>> > different
>> > things at the same time. It still favors those companies that get ahead
>> > (and
>> > don't collapse in the process)
>> >
>> > every bubble over-builds infrastructure, as a lot of people who lose
>> > their
>> > shirts jump on board the new fad without being able to evaluate the
>> > companies.
>> > But those companies that fail generally get bought out by others,
>> > cheap, and the
>> > infrastructure that is built gets used by someone else with a more
>> > realistic
>> > business model. It may take years (see the massive overbuilding of
>> > fiber in some
>> > areas), but it will eventually be used.
>> >
>> > I think there is disagreement on if AI is going to 'hockey stick' or not, but
>> > even if it doesn't, thee are a lot of good uses for the pattern matching
>> > capability (just not at today's prices)
>> >
>> > David Lang
>> >
>> >
>> > On Wed, 25 Feb 2026, Nick Matthews wrote:
>> >
>> >> Date: Wed, 25 Feb 2026 19:38:52 -0700
>> >> From: Nick Matthews <matthnick@gmail.com>
>> >> To: David Lang <david@lang.hm>
>> >> Cc: Daniel AJ Sokolov <daniel@falco.ca>,
>> >> Dave Taht via Starlink <starlink@lists.bufferbloat.net>
>> >> Subject: Re: [Starlink] Re: Data centers are racing to space — and regulation
>> >> can’t keep up
>> >>
>> >> The underlying theory here is if someone builds a model that can improve
>> >> itself faster than humans, they win. Military, economy, future problems,
>> >> etc. That could have a lot of real on-Earth impact. There's investments and
>> >> races going on that support that theory.
>> >>
>> >> If the major limiting factor is how big and fast you can build power plants
>> >> on earth, and assume the person with the most access to power wins, it
>> >> starts to make more sense.
>> >>
>> >> However, there's also a giant list of technical assumptions that need to be
>> >> true for those assumptions to fly (get it?). And those technical
>> >> assumptions don't necessarily need to be true in order to cash the checks
>> >> from people that either want to compete in that race or invest in someone
>> >> that is.
>> >>
>> >> Some of the assumptions I've come to include:
>> >> * Adding more power and data to models eventually gets you to the
>> >> intelligence needed to hockey stick. (Versus solving this problem with a
>> >> different approach, algorithms, or different kinds of data.)
>> >> * The models, data, and underlying algorithms aren't easily replicated by
>> >> others once they start exponentially increasing in ability. E.g. can
>> >> someone like Deepseek just take the outputs of the first mover, and then
>> >> not require the same power capacity and replicate a similar value. This
>> >> would slow down the first mover velocity benefits.
>> >> * AI eventually starts creating returns.
>> >> * Launch costs go down significantly (x10?)
>> >> * There's enough room in sun synchronous orbit to run at power scales not
>> >> possible on earth without kicking off Kessler
>> >> * A combination of very large solar panels, radiative cooling, fluid
>> >> exchange between them, the computing, propulsion, and any necessary
>> >> redundancy of these components is still economical.
>> >> * Operational loss due to radiation, micro asteroids, and general
>> >> component failure is tolerable.
>> >> * Components like GPUs and RAM and underlying bus structures can be built
>> >> to be more radiation tolerant.
>> >> * Burning up new orders of magnitude of amounts of elements in the
>> >> atmosphere can be managed (aluminum, silicon, etc.)
>> >> * Or, there is some amount of in-orbit recycling and manufacturing without
>> >> returning material back to Earth.
>> >> * Bandwidth can be built for 1) intra cluster within a satellite, 2) cross
>> >> cluster via OISl, 3) Back to Earth using RF or lasers.
>> >> * Regulatory bodies agree with the risk versus reward and approve this
>> >> kind of plan.
>> >> * The smarter-than-human AI doesn't decide to destroy the human race in a
>> >> move of self preservation because the AI companies didn't have time for
>> >> boundaries.
>> >>
>> >> I think it's a neat thought experiment, even if it's a little terrifying in
>> >> scale and impact if it's remotely possible.
>> >>
>> >> -nick
>> >>
>> >> On Wed, Feb 25, 2026, 6:33 PM David Lang via Starlink <
>> >> starlink@lists.bufferbloat.net> wrote:
>> >>
>> >>> Daniel AJ Sokolov wrote:
>> >>>
>> >>>> Block spots in orbit
>> >>>
>> >>> at the scale that he operates, everyone else combined in in the noise.
>> >>> Starlink
>> >>> is already several times the number of other satellites in orbit combined.
>> >>>
>> >>> besides, in the long run, he's talking about launching from the moon into
>> >>> solar
>> >>> orbit, not earth orbit, but even if he was just talking about launching
>> >>> into
>> >>> earth orbit near the moon's orbit, it's not like there are very many
>> >>> satellites
>> >>> there to contend with.
>> >>>
>> >>>> From a technology point of view, this is bonkers.
>> >>>
>> >>> if you only look at technical details, you may be right, but if you add
>> >>> the
>> >>> regulatory burden and delays in building traditional datacenters, that may
>> >>> be
>> >>> enough to change the math.
>> >>>
>> >>> Now, if we could ease the regulations so that it's easier to build power
>> >>> plants
>> >>> and hook up to the grid (or get small next-gen nuclear power plants
>> >>> operational
>> >>> so they can be dropped at the datacenters), that could change the math
>> >>> back.
>> >>>
>> >>> David Lang
>> >>> _______________________________________________
>> >>> Starlink mailing list -- starlink@lists.bufferbloat.net
>> >>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>> >>>
>> >>
>> > _______________________________________________
>> > Starlink mailing list -- starlink@lists.bufferbloat.net
>> > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>> _______________________________________________
>> Starlink mailing list -- starlink@lists.bufferbloat.net
>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>
>
> --
> Please send any postal/overnight deliveries to:
> Vint Cerf
> Google, LLC
> 1900 Reston Metro Plaza, 16th Floor
> Reston, VA 20190
> +1 (571) 213 1346
>
>
> until further notice
>
>
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 11:54 ` Mark Handley
2026-02-26 13:36 ` Vint Cerf
@ 2026-02-26 18:01 ` David Lang
1 sibling, 0 replies; 29+ messages in thread
From: David Lang @ 2026-02-26 18:01 UTC (permalink / raw)
To: Mark Handley
Cc: David Lang, Nick Matthews, Daniel AJ Sokolov,
Dave Taht via Starlink
Yep, that's what I was alluding to in my earlier post.
However, Elon Musk also has experience in building large, coherent training
clusters, plus the Dojo project, and as he has been talking about future
training clusters, he has been saying things that are against conventional
wisdom.
like using the AI5 processors for training as well as inference
like splitting things up across many satellites rather than being a single
coherent cluster (each satellite being a fraction of a rack-equivalent)
I'm not a fanboy who says he's always right (Dojo showed that if nothing else)
but betting against him getting something done (just not on his stated
timetable) has historically not been great odds. :-)
David Lang
Mark Handley wrote:
> AI datacenters effectively split into training and inference, and it's worth
> optimizing for one or the other. For training, you want as much compute in
> one low latency cluster as possible. A single GB200 rack (72 GPUs) is
> currently around 160kW, and a leading edge training cluster is now well north
> of 100,000 GPUs. A single VR200 rack will be ~250kW later this year. For
> inference, you typically neeed between a few and a few hundred GPUs (or
> equivalents - TPUs, Celebras, etc) interconnected.
>
> It's easy to see inference clusters, especially wafer-scale like Celebras,
> being capable of being put in orbit. But the problem there is low(ish)
> latency to customers is also a requirement, so that constrains the orbits you
> could use. And it's actually not hard to get terrestrial power if you can
> scatter large numbers of smaller inference clusters worldwide, which is what
> we do.
>
> It's really hard to see training clusters in orbit, not only for cooling
> reasons, but also because they have very high failure rates and require a lot
> of human maintenance. In our current supercomputers we're looking at more
> than a million optical links in one building, so there is a continuous rate of
> link failure and replacement. We are continuously replacing switches and GPU
> nodes. Now you can design for resilience, and we are already running a new
> network design that does this. But living on the leading edge of what's
> possible in compute and having low failure rates tend to be mutually
> incompatible. We are working with all our suppliers to reduce failure rates
> in the next-but-one generation by design - we'd love that because a
> non-trivial cause of failure is a technician fixing one fault and causing
> another. So there's a lot of hope to improve things, but there's nothing
> coming down the pipeline that would allow large training clusters to have
> leading edge performance and simultaneously run unmanned.
>
> Mark (currently doing supercomputer networks at OpenAI)
>
> On Thu, 26 Feb 2026, at 4:39 AM, David Lang via Starlink wrote:
>> a couple comments in response (specifically applying to SpaceX
>>
>> they are working to reduce launch costs between 10x and 100x
>>
>> they are not just looking at sun synchronous orbit, but also at launching from
>> the moon into a moon-size earth orbit or solar orbit
>>
>> re: chip vulnerability to radiation
>>
>> chips have gotten MUCH smaller over the years, which in part makes it more
>> likely for a cell that's hit to flip, but also means that (for a given
>> capability) it's much smaller, so is far less likely to be hit.
>>
>> probability based systems don't require every calculation to be perfect
>>
>> "AI" systems need to be validated anyway (since their behavior can't be
>> predicted), so if there are too many errors, it just will fail validation
>>
>> with enough processing capacity, you can re-run the calculations and compare
>> resuts.
>>
>>
>> One thing I haven't seen people talk about is that space-based systems
>> are NOT
>> going to be massive, coherent clusters the way current AI training
>> clusters are.
>> They will be many smaller clusters with relatively low bandwidth/high
>> latency
>> communications between them (you can't send data faster than the speed
>> of
>> light). The first posts about space datacenters were dense, massive
>> things
>> (comparable in size to ground based systems) with solar panels and
>> radiators
>> measured in square miles. Elon and SpaceX are talking about many small
>> satellites in the 100Kw range, similar in size to the starlink
>> satellites that
>> Starship can deploy.
>>
>>
>>
>> I fully expect that new training algorithms will be found that will
>> drastically
>> improve the efficiency, but I also expect that when they are found,
>> those
>> companies with lots of hardware and expertise in running it will be
>> able to make
>> better use of the new algorithms, if only to train more models doing
>> different
>> things at the same time. It still favors those companies that get ahead
>> (and
>> don't collapse in the process)
>>
>> every bubble over-builds infrastructure, as a lot of people who lose
>> their
>> shirts jump on board the new fad without being able to evaluate the
>> companies.
>> But those companies that fail generally get bought out by others,
>> cheap, and the
>> infrastructure that is built gets used by someone else with a more
>> realistic
>> business model. It may take years (see the massive overbuilding of
>> fiber in some
>> areas), but it will eventually be used.
>>
>> I think there is disagreement on if AI is going to 'hockey stick' or not, but
>> even if it doesn't, thee are a lot of good uses for the pattern matching
>> capability (just not at today's prices)
>>
>> David Lang
>>
>>
>> On Wed, 25 Feb 2026, Nick Matthews wrote:
>>
>>> Date: Wed, 25 Feb 2026 19:38:52 -0700
>>> From: Nick Matthews <matthnick@gmail.com>
>>> To: David Lang <david@lang.hm>
>>> Cc: Daniel AJ Sokolov <daniel@falco.ca>,
>>> Dave Taht via Starlink <starlink@lists.bufferbloat.net>
>>> Subject: Re: [Starlink] Re: Data centers are racing to space — and regulation
>>> can’t keep up
>>>
>>> The underlying theory here is if someone builds a model that can improve
>>> itself faster than humans, they win. Military, economy, future problems,
>>> etc. That could have a lot of real on-Earth impact. There's investments and
>>> races going on that support that theory.
>>>
>>> If the major limiting factor is how big and fast you can build power plants
>>> on earth, and assume the person with the most access to power wins, it
>>> starts to make more sense.
>>>
>>> However, there's also a giant list of technical assumptions that need to be
>>> true for those assumptions to fly (get it?). And those technical
>>> assumptions don't necessarily need to be true in order to cash the checks
>>> from people that either want to compete in that race or invest in someone
>>> that is.
>>>
>>> Some of the assumptions I've come to include:
>>> * Adding more power and data to models eventually gets you to the
>>> intelligence needed to hockey stick. (Versus solving this problem with a
>>> different approach, algorithms, or different kinds of data.)
>>> * The models, data, and underlying algorithms aren't easily replicated by
>>> others once they start exponentially increasing in ability. E.g. can
>>> someone like Deepseek just take the outputs of the first mover, and then
>>> not require the same power capacity and replicate a similar value. This
>>> would slow down the first mover velocity benefits.
>>> * AI eventually starts creating returns.
>>> * Launch costs go down significantly (x10?)
>>> * There's enough room in sun synchronous orbit to run at power scales not
>>> possible on earth without kicking off Kessler
>>> * A combination of very large solar panels, radiative cooling, fluid
>>> exchange between them, the computing, propulsion, and any necessary
>>> redundancy of these components is still economical.
>>> * Operational loss due to radiation, micro asteroids, and general
>>> component failure is tolerable.
>>> * Components like GPUs and RAM and underlying bus structures can be built
>>> to be more radiation tolerant.
>>> * Burning up new orders of magnitude of amounts of elements in the
>>> atmosphere can be managed (aluminum, silicon, etc.)
>>> * Or, there is some amount of in-orbit recycling and manufacturing without
>>> returning material back to Earth.
>>> * Bandwidth can be built for 1) intra cluster within a satellite, 2) cross
>>> cluster via OISl, 3) Back to Earth using RF or lasers.
>>> * Regulatory bodies agree with the risk versus reward and approve this
>>> kind of plan.
>>> * The smarter-than-human AI doesn't decide to destroy the human race in a
>>> move of self preservation because the AI companies didn't have time for
>>> boundaries.
>>>
>>> I think it's a neat thought experiment, even if it's a little terrifying in
>>> scale and impact if it's remotely possible.
>>>
>>> -nick
>>>
>>> On Wed, Feb 25, 2026, 6:33 PM David Lang via Starlink <
>>> starlink@lists.bufferbloat.net> wrote:
>>>
>>>> Daniel AJ Sokolov wrote:
>>>>
>>>>> Block spots in orbit
>>>>
>>>> at the scale that he operates, everyone else combined in in the noise.
>>>> Starlink
>>>> is already several times the number of other satellites in orbit combined.
>>>>
>>>> besides, in the long run, he's talking about launching from the moon into
>>>> solar
>>>> orbit, not earth orbit, but even if he was just talking about launching
>>>> into
>>>> earth orbit near the moon's orbit, it's not like there are very many
>>>> satellites
>>>> there to contend with.
>>>>
>>>>> From a technology point of view, this is bonkers.
>>>>
>>>> if you only look at technical details, you may be right, but if you add
>>>> the
>>>> regulatory burden and delays in building traditional datacenters, that may
>>>> be
>>>> enough to change the math.
>>>>
>>>> Now, if we could ease the regulations so that it's easier to build power
>>>> plants
>>>> and hook up to the grid (or get small next-gen nuclear power plants
>>>> operational
>>>> so they can be dropped at the datacenters), that could change the math
>>>> back.
>>>>
>>>> David Lang
>>>> _______________________________________________
>>>> Starlink mailing list -- starlink@lists.bufferbloat.net
>>>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>>>>
>>>
>> _______________________________________________
>> Starlink mailing list -- starlink@lists.bufferbloat.net
>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 13:56 ` Nitinder Mohan
@ 2026-02-26 21:36 ` Ulrich Speidel
2026-02-26 23:02 ` Brandon Butterworth
0 siblings, 1 reply; 29+ messages in thread
From: Ulrich Speidel @ 2026-02-26 21:36 UTC (permalink / raw)
To: starlink
On 27/02/2026 2:56 am, Nitinder Mohan via Starlink wrote:
> 1.
> The downlink bottleneck is the real motivation for space computing, not replacing ground data centers.
Now how bad is that bottleneck really when you're operating a large
constellation, in which each satellite can downlink?
Currently, there is a significant bottleneck in terms of downlinking to
end users with Dishys. Why?
1) You only really have the Ku band. Lower down is crowded, further up
there's a problem with the atmosphere between your sats and the Dishy.
2) You have the need to keep end user devices cheap (read: small), which
limits gain and phased array directionality and selectivity.
But when we're talking data centers, we're not talking downlinking to
end users with Dishys. We're talking downlinking to other
infrastructure. Unlike an end user, where there's no choice in terms of
geographical location to downlink to, infrastructure is already
geographically diverse. SpaceX have gateways all over the place in Ka
and higher bands, so in principle they can downlink to wherever the
weather is in their favour - which likely it is pretty much all the time
somewhere in their empire. ISLs help the data get there.
If we go beyond classic TCP and use, say, linear network coding for
delivery via multiple downlink paths, then this could even look elegant.
So the only issue that then remains in this respect is latency / jitter.
> 2.
> Orbital data centers serving Earth-based users face fundamental physics constraints, exactly as this thread has identified. Heat dissipation is the hardest problem. Satellites can only radiate heat, and available radiator surface area is strictly limited. Unlike ground facilities with active cooling, there is a hard thermodynamic ceiling on how much computation any individual satellite can sustain. The seminar reached strong consensus that the "data center in space" concept for general Earth-centric workloads is not validated, and the sustainability math does not currently work out.
Now that assumes conventional computation. The reason why we get all
that heat in our computers is because their logic gates spend a lot of
their time in no-man's land between 0 and 1 bits: 0 bits might be
"switch open", i.e., voltage but no current, which means no power being
dissipated, while 1 bits might mean "switch closed", with current but no
voltage, so also no power being dissipated. But *while* they're
switching, there's both current and voltage, and hence power being
dissipated to the gates' environment.
Compute in ways that either reduce the time spent switching or that use
less power (e.g., adiabatic logic) and this becomes less of an issue.
A lot of the other issues persist, though.
--
****************************************************************
Dr. Ulrich Speidel
School of Computer Science
Room 303S.594 (City Campus)
The University of Auckland
u.speidel@auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/
****************************************************************
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 21:36 ` Ulrich Speidel
@ 2026-02-26 23:02 ` Brandon Butterworth
2026-02-26 23:16 ` Nitinder Mohan
0 siblings, 1 reply; 29+ messages in thread
From: Brandon Butterworth @ 2026-02-26 23:02 UTC (permalink / raw)
To: starlink
On 26/02/2026 21:36:51, "Ulrich Speidel via Starlink"
<starlink@lists.bufferbloat.net> wrote:
>But when we're talking data centers, we're not talking downlinking to end users with Dishys. We're talking downlinking to other infrastructure.
And likely the same DCs they are using for the existing downlinks,
loads of space, power and connectivity for a faster link. Lots of work
ongoing on making those links optical too.
Maybe even the same DCs that are the data source for the AI sats. An
opportunity to increase dishy downlink capacity while adding AI.
brandon
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 23:02 ` Brandon Butterworth
@ 2026-02-26 23:16 ` Nitinder Mohan
2026-02-26 23:44 ` Ulrich Speidel
0 siblings, 1 reply; 29+ messages in thread
From: Nitinder Mohan @ 2026-02-26 23:16 UTC (permalink / raw)
To: Brandon Butterworth, starlink@lists.bufferbloat.net
>But when we're talking data centers, we're not talking downlinking to end users with Dishys. We're talking downlinking to other infrastructure.
I dont fully agree with this. Unlike terrestrial DCs, which cannot connect to end-users primarily because they are installed “off-shore”, space-based DCs are likely just sats with more compute capacity and they should behave, operate and (transitively) connect with ground terminals with comm links. I don’t see any reason why they cannot connect directly with end-users (provided they both use Ku/other relevant band).
Comments for other points coming later 😊
Thanks and Regards,
Nitinder Mohan
Assistant Professor
Head of SPEAR Lab, Networked Systems Group
TU Delft, Netherlands
Personal website: https://www.nitindermohan.com/
Lab website: https://spearlab.nl/
From: Brandon Butterworth via Starlink <starlink@lists.bufferbloat.net>
Date: Friday, 27 February 2026 at 00:02
To: starlink@lists.bufferbloat.net <starlink@lists.bufferbloat.net>
Subject: [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
On 26/02/2026 21:36:51, "Ulrich Speidel via Starlink"
<starlink@lists.bufferbloat.net> wrote:
>But when we're talking data centers, we're not talking downlinking to end users with Dishys. We're talking downlinking to other infrastructure.
And likely the same DCs they are using for the existing downlinks,
loads of space, power and connectivity for a faster link. Lots of work
ongoing on making those links optical too.
Maybe even the same DCs that are the data source for the AI sats. An
opportunity to increase dishy downlink capacity while adding AI.
brandon
_______________________________________________
Starlink mailing list -- starlink@lists.bufferbloat.net
To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 23:16 ` Nitinder Mohan
@ 2026-02-26 23:44 ` Ulrich Speidel
2026-02-27 1:01 ` Joe Hamelin
0 siblings, 1 reply; 29+ messages in thread
From: Ulrich Speidel @ 2026-02-26 23:44 UTC (permalink / raw)
To: starlink
On 27/02/2026 12:16 pm, Nitinder Mohan via Starlink wrote:
>> But when we're talking data centers, we're not talking downlinking to end users with Dishys. We're talking downlinking to other infrastructure.
> I dont fully agree with this. Unlike terrestrial DCs, which cannot connect to end-users primarily because they are installed “off-shore”, space-based DCs are likely just sats with more compute capacity and they should behave, operate and (transitively) connect with ground terminals with comm links. I don’t see any reason why they cannot connect directly with end-users (provided they both use Ku/other relevant band).
I don't think we're disagreeing here. When you connect directly to
Dishys from DC, which of course you can, then of course you're subject
to the downlink bottleneck that you pointed out earlier. That bottleneck
arises because you are bound by the user's location and need to downlink
rain or shine, which nails you down in Ku band.
It's just that when you connect a space-based DC that's part of an ISL
mesh to distributed ground infrastructure, then you have a planet full
of locations to connect to, any of which is suitable as long as there is
good weather there. You have the means to link there with ISLs, and
vastly more spectrum to up- or downlink in, including unlicensed
optical. So there's heaps of choice. On the ground, you simply forward
the data via fibre to where it actually needs to go.
I guess another question here is time horizon. Are we talking next year,
in five, in ten or twenty years?
--
****************************************************************
Dr. Ulrich Speidel
School of Computer Science
Room 303S.594 (City Campus)
The University of Auckland
u.speidel@auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/
****************************************************************
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-26 23:44 ` Ulrich Speidel
@ 2026-02-27 1:01 ` Joe Hamelin
2026-02-27 1:47 ` David Lang
0 siblings, 1 reply; 29+ messages in thread
From: Joe Hamelin @ 2026-02-27 1:01 UTC (permalink / raw)
Cc: starlink
I'll just say that this puts to bed the argument that data centers create
long-term local jobs.
-Joe
--
Joe Hamelin, W7COM, Portland, OR, +1 360 474 7474
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Data centers are racing to space — and regulation can’t keep up
2026-02-27 1:01 ` Joe Hamelin
@ 2026-02-27 1:47 ` David Lang
2026-02-27 14:26 ` [Starlink] Why Data Centers In Space Won't Work [Yet] (A non-canonical list) Sascha Meinrath
0 siblings, 1 reply; 29+ messages in thread
From: David Lang @ 2026-02-27 1:47 UTC (permalink / raw)
To: joe, Joe Hamelin; +Cc: starlink
Joe Hamelin wrote:
> I'll just say that this puts to bed the argument that data centers create
> long-term local jobs.
I don't know about that (although datacenters don't produce that many jobs once
they are up and running) the whole thing about space datacenters isn't that they
are better, just that they are (claimed to be) cheaper/faster to build from
scratch, in large part due to the regulations slowing ground based datacenters.
once the ground based datacenter is built, it's going to have a much longer life
than a satellite, seeing upgrades and new technologies wheeled in that would
require new satellites to deploy.
David Lang
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Why Data Centers In Space Won't Work [Yet] (A non-canonical list).
2026-02-27 1:47 ` David Lang
@ 2026-02-27 14:26 ` Sascha Meinrath
2026-02-27 15:07 ` [Starlink] " David Lang
2026-02-27 15:15 ` Daniel AJ Sokolov
0 siblings, 2 replies; 29+ messages in thread
From: Sascha Meinrath @ 2026-02-27 14:26 UTC (permalink / raw)
Cc: starlink
Hi everyone,
Synthesizing from what we know thus far, I think it would be helpful to have a
topline bullet list of the major issues detrimentally impacting the viability of
data centers in space.
I remain highly skeptical of the concept, given today's technological realities,
and feel that there would be utility in having a quick reference of the major
shortcomings that need to be overcome.
Here's a starting point for some of the major limitations to data centers in
space (please add, though keep bullets pithy):
1. Thermal cooling/heat dissipation
2. Radiation hardening
3. Launch costs
4. Upgrade/maintenance costs
5. Kessler syndrome/ablation cascade risks (& collision avoidance)
6. Power generation/storage
7. Latency/bandwidth
8. Risk-adjusted ROI
9. ???
.
.
.
--Sascha
--
Sascha Meinrath
Director, X-Lab
Palmer Chair in Telecommunications
Penn State University
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Why Data Centers In Space Won't Work [Yet] (A non-canonical list).
2026-02-27 14:26 ` [Starlink] Why Data Centers In Space Won't Work [Yet] (A non-canonical list) Sascha Meinrath
@ 2026-02-27 15:07 ` David Lang
2026-02-27 15:15 ` Daniel AJ Sokolov
1 sibling, 0 replies; 29+ messages in thread
From: David Lang @ 2026-02-27 15:07 UTC (permalink / raw)
To: sascha; +Cc: starlink
one big satellite vs lots of small satellites (not a large coherent cluster like
is currently used for training)
David Lang
On Fri, 27 Feb 2026, Sascha Meinrath via Starlink wrote:
> Date: Fri, 27 Feb 2026 09:26:52 -0500
> From: Sascha Meinrath via Starlink <starlink@lists.bufferbloat.net>
> Reply-To: sascha@thexlab.org
> Cc: starlink@lists.bufferbloat.net
> Subject: [Starlink] Why Data Centers In Space Won't Work [Yet] (A
> non-canonical list).
>
> Hi everyone,
>
> Synthesizing from what we know thus far, I think it would be helpful to have
> a topline bullet list of the major issues detrimentally impacting the
> viability of data centers in space.
>
> I remain highly skeptical of the concept, given today's technological
> realities, and feel that there would be utility in having a quick reference
> of the major shortcomings that need to be overcome.
>
> Here's a starting point for some of the major limitations to data centers in
> space (please add, though keep bullets pithy):
>
> 1. Thermal cooling/heat dissipation
> 2. Radiation hardening
> 3. Launch costs
> 4. Upgrade/maintenance costs
> 5. Kessler syndrome/ablation cascade risks (& collision avoidance)
> 6. Power generation/storage
> 7. Latency/bandwidth
> 8. Risk-adjusted ROI
> 9. ???
> .
> .
> .
>
> --Sascha
>
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Why Data Centers In Space Won't Work [Yet] (A non-canonical list).
2026-02-27 14:26 ` [Starlink] Why Data Centers In Space Won't Work [Yet] (A non-canonical list) Sascha Meinrath
2026-02-27 15:07 ` [Starlink] " David Lang
@ 2026-02-27 15:15 ` Daniel AJ Sokolov
2026-02-27 15:22 ` Gert Doering
1 sibling, 1 reply; 29+ messages in thread
From: Daniel AJ Sokolov @ 2026-02-27 15:15 UTC (permalink / raw)
To: starlink
On 2/27/26 at 15:26, Sascha Meinrath via Starlink wrote:
> Hi everyone,
>
> Synthesizing from what we know thus far, I think it would be helpful to
> have a topline bullet list of the major issues detrimentally impacting
> the viability of data centers in space.
>
> I remain highly skeptical of the concept, given today's technological
> realities, and feel that there would be utility in having a quick
> reference of the major shortcomings that need to be overcome.
>
> Here's a starting point for some of the major limitations to data
> centers in space (please add, though keep bullets pithy):
>
> 1. Thermal cooling/heat dissipation
> 2. Radiation hardening
> 3. Launch costs
> 4. Upgrade/maintenance costs
> 5. Kessler syndrome/ablation cascade risks (& collision avoidance)
> 6. Power generation/storage
> 7. Latency/bandwidth
> 8. Risk-adjusted ROI
> 9. ???
Any satellite in LEO is in view for a few minutes only.
New security threats (jamming, physical attacks, can't replace
components that have security flaws)
There will also be resistance from concerned citizens, bussinesses, and
governments due to factors including, but not limited to:
Disposal (damage to the atmosphere of a million satellites burns up there)
Negative effects on astronomy
A million satellites will make it harder to launch rockets, I presume.
Unclear jurisdiction/limited enforcement capability
BR
Daniel AJ
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Starlink] Re: Why Data Centers In Space Won't Work [Yet] (A non-canonical list).
2026-02-27 15:15 ` Daniel AJ Sokolov
@ 2026-02-27 15:22 ` Gert Doering
0 siblings, 0 replies; 29+ messages in thread
From: Gert Doering @ 2026-02-27 15:22 UTC (permalink / raw)
To: Daniel AJ Sokolov; +Cc: starlink
Hi,
On Fri, Feb 27, 2026 at 04:15:03PM +0100, Daniel AJ Sokolov via Starlink wrote:
> Unclear jurisdiction/limited enforcement capability
Which sounds just like why Mr. M would want to do this.
Gert Doering
-- NetMaster
--
have you enabled IPv6 on something today...?
SpaceNet AG Vorstand: Sebastian v. Bomhard,
Karin Schuler, Sebastian Cler
Joseph-Dollinger-Bogen 14 Aufsichtsratsvors.: Dr. Frank Thiäner
D-80807 Muenchen HRB: 136055 (AG Muenchen)
Tel: +49 (0)89/32356-444 USt-IdNr.: DE813185279
^ permalink raw reply [flat|nested] 29+ messages in thread
end of thread, other threads:[~2026-02-27 15:23 UTC | newest]
Thread overview: 29+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-02-25 14:05 [Starlink] Data centers are racing to space — and regulation can’t keep up Hesham ElBakoury
2026-02-25 14:30 ` [Starlink] " David Collier-Brown
2026-02-25 14:32 ` [Starlink] Re: Data centers are racing to space ??? and regulation can???t " Gert Doering
2026-02-25 14:42 ` [Starlink] Re: Data centers are racing to space — and regulation can’t " Hesham ElBakoury
2026-02-26 4:28 ` J Pan
[not found] ` <CAFvDQ9p68AFJ5cQTpyx=HkA2Cf6r1m6F3ssaJh-OJK4kqK=PDQ@mail.gmail.com>
2026-02-26 5:54 ` J Pan
2026-02-26 6:01 ` Hesham ElBakoury
2026-02-25 14:50 ` Daniel AJ Sokolov
2026-02-26 1:33 ` David Lang
2026-02-26 2:38 ` Nick Matthews
2026-02-26 4:39 ` David Lang
2026-02-26 11:54 ` Mark Handley
2026-02-26 13:36 ` Vint Cerf
2026-02-26 13:56 ` Nitinder Mohan
2026-02-26 21:36 ` Ulrich Speidel
2026-02-26 23:02 ` Brandon Butterworth
2026-02-26 23:16 ` Nitinder Mohan
2026-02-26 23:44 ` Ulrich Speidel
2026-02-27 1:01 ` Joe Hamelin
2026-02-27 1:47 ` David Lang
2026-02-27 14:26 ` [Starlink] Why Data Centers In Space Won't Work [Yet] (A non-canonical list) Sascha Meinrath
2026-02-27 15:07 ` [Starlink] " David Lang
2026-02-27 15:15 ` Daniel AJ Sokolov
2026-02-27 15:22 ` Gert Doering
2026-02-26 14:14 ` [Starlink] Re: Data centers are racing to space — and regulation can’t keep up Mark Handley
2026-02-26 18:01 ` David Lang
2026-02-25 20:26 ` Brandon Butterworth
2026-02-26 1:28 ` David Lang
2026-02-26 4:49 ` David Lang
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox