A lot of detail on the RF side, and you raise some valid points! A few clarifications:
- Ka band is used exclusively for gateway links, and both satellite and gateway use parabolic antennas, whose sidelobes etc. are greatly reduced compared to an ESA (electronically steered array).
- Ku band is used exclusively for service links to terminals, and from FCC filings we know that, given Nco = 1, the constellation will not project two overlapping co-frequency beams. How far they extend this overlap “safety zone” beyond the 3 dB contour is not known, but it could be calculated given enough information about the terminal; a toy version of such a check is sketched below.
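To make the Nco = 1 constraint concrete, here is a minimal sketch of an overlap check. It is my idealisation, not SpaceX's scheduler: footprints become circles on a flat ground plane, and the safety margin beyond the 3 dB contour is a pure placeholder value.

import math

# Toy Nco = 1 check: no two co-frequency beam footprints may overlap.
# Footprints are idealised as circles on a flat ground plane; real
# contours are elliptical, and the margin value is a placeholder.

def footprints_overlap(c1, r1_km, c2, r2_km, margin_km=5.0):
    """True if two circular footprints, each grown by margin_km, overlap."""
    dx, dy = c1[0] - c2[0], c1[1] - c2[1]
    return math.hypot(dx, dy) < (r1_km + r2_km + 2 * margin_km)

def violates_nco1(beams, margin_km=5.0):
    """beams: list of (center_xy_km, radius_km, channel). Returns violating pairs."""
    bad = []
    for i in range(len(beams)):
        for j in range(i + 1, len(beams)):
            (c1, r1, ch1), (c2, r2, ch2) = beams[i], beams[j]
            if ch1 == ch2 and footprints_overlap(c1, r1, c2, r2, margin_km):
                bad.append((i, j))
    return bad

beams = [((0, 0), 12, "f1"), ((30, 0), 12, "f1"), ((15, 0), 12, "f2")]
print(violates_nco1(beams))  # [(0, 1)] -- same channel, footprints too close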
As for some specific points:
> But that's just the width at which the beam drops to half its EIRP, not the width at which it can no longer interfere. For that, you need the 38 dB width - or thereabouts - if you can get it, and this will be significantly more than the 1.2 degrees or so of 3dB beam width.
You are correct in that the interference will come from an extended footprint, but how much that extended footprint affects the terminal is also a function of receive antenna selectivity, angle of arrival, receiver gain, etc. The Starlink terminal is not an omnidirectional antenna in receive either; it is selective, forming a receive beam with significantly more gain in a specific direction, thus increasing the SNR of the wanted signal. It would be interesting to dig into this one deeper and see the effect on frequency re-use, that’s for sure - a toy SINR sketch follows below.
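As a toy illustration of that receive-side selectivity (all numbers assumed, not measured; the -74 dBm noise floor is borrowed from the link budget in the quoted mail below):

import math

# Toy receive-side SINR: the terminal's receive beam suppresses an
# off-axis interferer. Mainlobe rolloff uses the common parabolic
# approximation 12*(theta/theta_3dB)^2 dB, clamped at an assumed
# sidelobe floor; beamwidth and floor are illustrative only.

def rx_rejection_db(off_axis_deg, bw_3db_deg=3.0, sidelobe_floor_db=30.0):
    return min(12.0 * (off_axis_deg / bw_3db_deg) ** 2, sidelobe_floor_db)

def sinr_db(wanted_dbm, interferer_dbm, off_axis_deg, noise_dbm=-74.0):
    i_dbm = interferer_dbm - rx_rejection_db(off_axis_deg)
    i_plus_n_mw = 10 ** (i_dbm / 10) + 10 ** (noise_dbm / 10)
    return wanted_dbm - 10 * math.log10(i_plus_n_mw)

# Equal-power interferer: hopeless at boresight, workable well off-axis,
# until the assumed sidelobe floor caps the achievable SINR (~29 dB here).
for theta in (0.0, 3.0, 6.0, 9.0):
    print(f"{theta:3.0f} deg off axis -> SINR {sinr_db(-36.0, -36.0, theta):5.1f} dB")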
> That's orders of magnitude more than the re-use spatial separation you can achieve in ground-based cellular networks
You are comparing an infrastructure that has evenly distributed “towers” to a cellular network that can adjust the density of its towers by reducing output power and placing more of them closer together, forming smaller and smaller cells - which comes at a cost. I believe it’s unfair to compare any satellite constellation to a cellular network in these terms.
> We really don't know the beam patterns that we get from the birds and from the Dishys, and without these it's difficult to say how much angular separation a ground station needs between two satellites using the same frequency in order to receive one but not be interfered with by the other.
Oh but we do know the beam patterns - they are in the GXT files that accompany the Schedule S in FCC filings. I took them and created this view:
The three colors are the 2 dB, interpolated 3 dB, and 4 dB contours. I use these to calculate beam spread in the capacity simulation; the interpolation step is sketched below.
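For what it's worth, the interpolation itself is simple. Assuming each GXT contour is stored as a radius per azimuth, the 3 dB contour is just the per-azimuth midpoint between the 2 dB and 4 dB ones - a minimal sketch with made-up radii, actual GXT parsing omitted:

# Interpolating a 3 dB contour from the 2 dB and 4 dB contours of a
# Schedule S GXT file, assuming each contour is a radius per azimuth.
# Radii below are made up; real files need actual parsing.

def interp_contour(r_2db, r_4db, target_db=3.0):
    """Linear interpolation in dB between two contours, per azimuth."""
    frac = (target_db - 2.0) / (4.0 - 2.0)  # 0.5 for the 3 dB contour
    return [a + frac * (b - a) for a, b in zip(r_2db, r_4db)]

r_2db = [10.0, 11.0, 12.5, 11.5]   # km, at azimuths 0/90/180/270 deg
r_4db = [14.0, 15.5, 17.0, 16.0]
print(interp_contour(r_2db, r_4db))  # [12.0, 13.25, 14.75, 13.75]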
> Basically, there are just too many variables in this for me to be overly optimistic that re-use by two different sources within a Starlink cell is possible.
We know from gRPC data from the terminal itself that there is a primary beam and a backup beam, and we know they come from different satellites. Re-use of the same frequency between them is not possible, as that would violate Nco = 1, so that point is moot.
On Aug 31, 2022, 15:41 +0200, Ulrich Speidel via Starlink <starlink@lists.bufferbloat.net>, wrote:
Um, yes, but I think we're mixing a few things up here (trying to bundle
responses here, so that's not just to you, David).
In lieu of a reliable Starlink link budget, I'm going by this one:
https://www.linkedin.com/pulse/quick-analysis-starlink-link-budget-potential-emf-david-witkowski/
Parameters here are a little outdated but the critical one is the EIRP
at the transmitter of up to ~97 dBm. Say we're looking at a 30 GHz Ka
band signal over a 600 km path, which is more reflective of the current
constellation. Then Friis propagation gives us a path loss of about 178
dB, and if we pretend for a moment that Dishy is actually a 60 cm
diameter parabolic dish, we're looking at around 45 dBi receive antenna
gain. Probably a little less as Dishy isn't actually a dish.
Then that gives us 97 dBm - 178 dB + 45 dB = -36 dBm at the ground
receiver. Now I'm assuming here that this is for ALL user downlink beams
from the satellite combined. What we don't really know is how many
parallel signals a satellite multiplexes into these, but assuming at the
moment a receive frontend bandwidth of about 100 MHz, noise power at the
receiver should be around 38 pW or -74 dBm. That leaves Starlink around
38 dB of SNR to play with. Shannon lets us send up to just over 1.25
Gb/s in that kind of channel, but then again that's just the Shannon
limit, and in practice, we'll be looking at a wee bit less.
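For anyone who wants to poke at these numbers, the whole chain fits in a few lines of Python. The -74 dBm noise power is the assumption from the paragraph above (straight kT0B at 290 K over 100 MHz would come to about -94 dBm); the rest just reproduces the arithmetic:

import math

# Back-of-envelope version of the link budget above. The 97 dBm EIRP,
# 600 km path, 30 GHz, 45 dBi dish gain and -74 dBm noise power are the
# assumptions from the text; Shannon capacity is an upper bound only.

C = 299_792_458.0  # speed of light, m/s

def fspl_db(dist_m, freq_hz):
    """Friis free-space path loss."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / C)

def shannon_bps(bandwidth_hz, snr_db):
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

eirp_dbm, gain_rx_dbi, noise_dbm = 97.0, 45.0, -74.0
loss = fspl_db(600e3, 30e9)                  # ~177.5 dB
p_rx = eirp_dbm - loss + gain_rx_dbi         # ~-36 dBm
snr = p_rx - noise_dbm                       # ~38 dB
print(f"path loss {loss:.1f} dB, Prx {p_rx:.1f} dBm, SNR {snr:.1f} dB")
print(f"Shannon limit ~{shannon_bps(100e6, snr) / 1e9:.2f} Gb/s")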
That SNR also gives us an indication as to the signal separation Dishy
needs to achieve from the beams from another satellite in order for that
other satellite to re-use the same frequency. Note that this is
significantly more than just the 3 dB that the 3 dB width of a beam
gives us. The 3 dB width is what is commonly quoted as "beam width", and
that's where you get those nice narrow angles. But that's just the width
at which the beam drops to half its EIRP, not the width at which it can
no longer interfere. For that, you need the 38 dB width - or thereabouts
- if you can get it, and this will be significantly more than the 1.2
degrees or so of 3dB beam width.
But even if you worked with 1.2 degrees at a distance of 600 km and you
assumed that sort of beam width at the satellite, it still gives you a
12 km radius on the ground within which you cannot reuse the downlink
frequency from the same satellite. That's orders of magnitude more than
the re-use spatial separation you can achieve in ground-based cellular
networks. Note that the 0.1 deg beam "precision" is irrelevant here -
that just tells me the increments in which they can point the beam, but
not how wide it is and how intensity falls off with angle, or how bad
the side lobes are.
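To put rough numbers on both widths, here's a sketch using the common
parabolic mainlobe approximation, rejection ~ 12*(theta/theta_3dB)^2 dB.
It reads 1.2 degrees as the full 3 dB beamwidth, so the 3 dB point lands
about 6 km from beam centre (the 12 km radius above corresponds to
reading 1.2 degrees as the angle off boresight), and it says nothing
about sidelobes:

import math

# Ground radius inside which a beam is still "too loud" to allow
# frequency re-use, for a given required rejection. The parabolic
# mainlobe approximation is only indicative near the mainlobe and
# ignores sidelobes entirely.

def no_reuse_radius_km(alt_km, bw_3db_deg, required_rejection_db):
    theta = bw_3db_deg * math.sqrt(required_rejection_db / 12.0)
    return alt_km * math.tan(math.radians(theta))

for rej in (3, 12, 38):
    r = no_reuse_radius_km(600, 1.2, rej)
    print(f"{rej:2d} dB down -> ~{r:4.1f} km from beam centre")
# 3 dB -> ~6 km, 12 dB -> ~13 km, 38 dB -> ~22 km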
Whether you can re-use the same frequency from another satellite to the
same ground area is a good question. We really don't know the beam
patterns that we get from the birds and from the Dishys, and without
these it's difficult to say how much angular separation a ground station
needs between two satellites using the same frequency in order to
receive one but not be interfered with by the other. Basically, there
are just too many variables in this for me to be overly optimistic that
re-use by two different sources within a Starlink cell is possible. And
I haven't even looked at the numbers for Ku band here.
CDNs & Co are NOT just dumb economic optimisations to lower bit miles.
They actually improve performance, and significantly so. A lower RTT
between you and a server that you grab data from via TCP allows a much
faster opening of the congestion window. With initial TCP cwnds being
typically 10 packets or around 15 kB of data, having a server within 5
ms RTT of your client means that you've transferred 15 kB after 5 ms,
45 kB after 10 ms, 105 kB after 15 ms, 225 kB after 20 ms, and 465 kB
after 25 ms. Make your RTT 100 ms, and it takes half a second to get to
your 465 kB. Having a CDN server in close topological proximity also
generally reduces the number of queues between you and the server at
which packets can die an untimely death and, by taking load off such
links, reduces the probability of this happening at a lot of queues.
Bottom line: Having a CDN keeps your users happier. Also, live streaming
and video conferencing aside, most video is not multicast or broadcast,
but unicast.
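That arithmetic is easy to reproduce - an idealised slow-start sketch,
no losses, cwnd doubling every RTT from an initial 10 segments of ~1500
bytes:

# Idealised slow start: cwnd starts at 10 segments (~15 kB) and doubles
# every RTT; no losses, no receive-window or pacing limits. Shows why a
# CDN node a few ms away beats a 100 ms-distant server for short flows.

def slow_start_delivery_ms(total_kb, rtt_ms, init_kb=15.0):
    sent, cwnd, rounds = 0.0, init_kb, 0
    while sent < total_kb:
        sent += cwnd
        cwnd *= 2
        rounds += 1
    return rounds * rtt_ms

for rtt in (5, 100):
    ms = slow_start_delivery_ms(465, rtt)
    print(f"RTT {rtt:3d} ms: 465 kB delivered in ~{ms:.0f} ms")
# RTT 5 ms -> ~25 ms; RTT 100 ms -> ~500 ms, matching the text.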
DNS on Starlink satellites: Good idea, lightweight, and I'd suspect
maybe already in operation? It's low hanging fruit. CDNs on satellites:
In the day and age of SSDs, having capacity on the satellite shouldn't
really be an issue, although robustness may be. But heat in this sort of
storage gets generated mostly when data is written, so it's a function
of what percentage of the data that reaches the bird ends up in cache.
Generally, a LEO satellite that has to cache baseball videos while over
the US, videos in a dozen different languages while over Europe,
Bollywood clips while over India, cooking shows while over Australia,
and always the same old ads while over New Zealand, all the while not
getting a lot of cache hits for stuff it put into cache 15 minutes ago,
would probably have to write a lot. Moreover, as you'd be
reliant on the content you want being on the satellite that you are
currently talking to, pretty much all satellites in the constellation
would need to cache all content. In other words: If I watch a cat
video now and thereby put it into the cache of the bird overhead, then
send you an e-mail about it, and you're in my neighbourhood and watch
it half an hour later, my satellite would be on the other side of the
world, and the video would have to be re-uploaded to the CDN on the
bird that's flying over our neighbourhood by then. Not as efficient as
a ground-based CDN on our ground-based network that's fed via a
satellite link.
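The orbital mechanics back this up. A back-of-envelope pass-time
estimate - circular orbit, spherical Earth, and a 1,000 km serving
window that is my assumption rather than a Starlink parameter:

import math

# Back-of-envelope: how long does one LEO bird linger over a given
# neighbourhood? Circular orbit, spherical Earth; the 1,000 km serving
# window is an assumption, not a Starlink parameter.

MU, R_EARTH = 3.986e14, 6.371e6          # m^3/s^2, m

def pass_minutes(alt_km, window_km=1000.0):
    r = R_EARTH + alt_km * 1e3
    v_orbit = math.sqrt(MU / r)          # ~7.6 km/s at 550 km
    v_ground = v_orbit * R_EARTH / r     # speed of the sub-satellite point
    return window_km * 1e3 / v_ground / 60

alt_km = 550
r = R_EARTH + alt_km * 1e3
period_min = 2 * math.pi * r / math.sqrt(MU / r) / 60
print(f"~{pass_minutes(alt_km):.1f} min over the cell per ~{period_min:.0f} min orbit")
# ~2.4 min in view, then ~1.5 h before that bird comes around again -
# and over a different part of the world at that.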
As long as Starlink is going to have in the order of hundreds of
thousands of direct users, that problem won't go away.
On 31/08/2022 7:33 pm, David Lang wrote:
On Wed, 31 Aug 2022, Ulrich Speidel via Starlink wrote:
This combines with the uncomfortable truth that an RF "beam" from a
satellite isn't as selective as a laser beam, so the options for
frequency re-use from orbit aren't anywhere near as good as from a
mobile base station across the road: Any beam pointed at you can be
heard for many miles around and therefore no other user can re-use
that frequency (with the same burst slot etc.).
not quite, you are forgetting that the antennas on the ground are also
steerable arrays and so they can focus their 'receiving beam' at
different satellites. This is less efficient than a transmitting beam
as the satellites you aren't 'pointed' at will increase your noise
floor, but it does allow the same frequency to be used for multiple
satellites into the same area at the same time.
David Lang
--
****************************************************************
Dr. Ulrich Speidel
School of Computer Science
Room 303S.594 (City Campus)
The University of Auckland
u.speidel@auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/
****************************************************************