[Starlink] mike's starlink 2.0 sim is up (Dave Taht)

Mike Puchol mike at starlink.sx
Mon Oct 3 18:51:56 EDT 2022


Hi David,

Let me try to address your concerns inline below (apologies for an image-heavy post, I thought they'd be useful):
> 1. the system being modeled here does not appear to be based on facts about the actual implementation of the link and "satellite" switching protocols. Instead, it is based on calculating RF bandwidth availability, getting a *maximum achievable rate*. It's analogous to looking at 802.11ax or 802.11ac systems and assuming that the "channel" used by the access point can be fully utilized using whatever OFDM modulation scheme is being used. That is, it ignores what we in Ethernet land call the MAC protocol.
Since I don't know what the actual implementation is, I started with what we know: the capabilities of the satellite (FCC filings, public statements, etc.), the capabilities of the gateway (also FCC filings, plus information disclosed by a regulator), and the problem that needs solving in terms of scale (how many cells fall under a satellite). A few napkin calculations later, it was obvious that assigning one spot beam to a single cell for at least 15 seconds (the resource scheduler's update cadence, also known) would not scale to the number of cells each satellite has to serve.

The next step was to figure out which methods, logical and feasible from an engineering and basic-physics standpoint, could be employed to split a satellite's resources across more cells. I came up with two: TDM (I'm tempted to coin "Intercell Time Domain Multiplexing (ITDM)") and beam spread, both of which I describe in the Medium post.

The maximum stated uplink capacity of a gateway, from the regulatory filings, is between 25.6 Gbps "typical" at nadir and 32 Gbps "very optimistic" for 64QAM. On the service link side, the maximum downlink speeds that have been observed are ~700 Mbps, in the early days of Starlink; on a 240 MHz channel, almost 3 bps/Hz is not unreasonable. The full 48 spot beams would then allow a maximum of 33.6 Gbps (48 × 700 Mbps) in downlink, which the gateway could not feed even at the "optimistic" 32 Gbps, never mind the "typical" figure. Even so, I settled on 20 Gbps as the actual capacity, based on several articles and public statements about the actual capacity of each satellite.

To your point, I start with a conservative assumption. My model takes each satellite and, after all possible beams have been allocated, calculates the capacity per beam as the lesser of (20 Gbps / number of beams used) and 700 Mbps. On a fully utilized satellite, each beam would provide ~417 Mbps, which is a reduction from the value used in other studies. Until more information is known, I took this to be the fairest assumption.
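The beam-capacity rule above can be sketched as follows; the 20 Gbps, 700 Mbps, and 48-beam figures are the assumptions discussed above, not confirmed Starlink parameters:

```python
# Sketch of the per-beam capacity model: the satellite's assumed 20 Gbps
# total is divided equally among active beams, but a single beam never
# exceeds the ~700 Mbps observed downlink ceiling.

SAT_CAPACITY_BPS = 20e9    # assumed usable satellite capacity
BEAM_CEILING_BPS = 700e6   # highest observed single-beam downlink
MAX_BEAMS = 48             # spot beams per satellite (FCC filings)

def per_beam_capacity(beams_used: int) -> float:
    """Capacity per beam: the lesser of an equal share and the ceiling."""
    if not 1 <= beams_used <= MAX_BEAMS:
        raise ValueError("beams_used out of range")
    return min(SAT_CAPACITY_BPS / beams_used, BEAM_CEILING_BPS)

# Fully utilized satellite: 20 Gbps / 48 beams ≈ 417 Mbps per beam.
print(round(per_beam_capacity(48) / 1e6))   # → 417
# A lightly loaded satellite hits the 700 Mbps ceiling instead.
print(round(per_beam_capacity(10) / 1e6))   # → 700
```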

There is one issue which Dave Taht alluded to - TDM "waste", where each beam switch in the cell/time domain requires time to sync, adjust for Doppler, de-bloat buffers, etc. - so the more a beam is split in TDM, the lower the per-cell capacity would be. This is not factored into my simulation yet, because I don't know the switching time in downlink. That doesn't mean I'm not looking!

Here is a snapshot of the uplink from a Starlink user terminal; the 62.5 MHz channel has 1.25 MHz guard bands and 60 MHz of effective bandwidth:

[Image: Uplink_width.PNG]

If we zoom into the spectrum, we can see that the subcarrier spacing is ~468 kHz; thus, we clearly have OFDM with 128 subcarriers.
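A quick cross-check of that inference, assuming the 60 MHz effective bandwidth is divided evenly across the subcarriers:

```python
# Cross-check of the inferred uplink OFDM parameters: 60 MHz of effective
# bandwidth split into 128 subcarriers should match the ~468 kHz spacing
# seen in the spectrum capture.

EFFECTIVE_BW_HZ = 60e6   # 62.5 MHz channel minus 2 x 1.25 MHz guard bands
SUBCARRIERS = 128

spacing_khz = EFFECTIVE_BW_HZ / SUBCARRIERS / 1e3
print(spacing_khz)   # → 468.75, consistent with the measured ~468 kHz
```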

[Image: Subcarrier_width.PNG]

In the time domain, the transmit time is 1.1 ms against a 7.8 ms frame, resulting, almost magically, in the 14% transmit duty cycle found in FCC radiation hazard reports for UT2, the one I have:

[Image: TX_duty_cycle.PNG]

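The duty-cycle arithmetic can be checked directly from those two timings:

```python
# Transmit duty cycle implied by the time-domain capture:
# a 1.1 ms burst within a 7.8 ms frame.

TX_TIME_MS = 1.1
FRAME_MS = 7.8

duty_cycle = TX_TIME_MS / FRAME_MS
print(f"{duty_cycle:.0%}")   # → 14%, matching the FCC radiation hazard report figure
```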
Further to this, I captured channel changes, such as this one:

[Image: Channel change.png]

Zooming in again, we find some interesting values - a 12.4 ms final frame, then a 52.6 ms switch time, and a 22.0 ms initial frame:

[Image: Change duration.png]

Worst case, we lose 87 ms per 15-second allocation, or 0.58% of capacity. Best case, counting only the switch time, it would be 0.35%. The tests were done with an upload-only TCP iPerf3 run, giving an average of 25 Mbps. The switching loss, assuming there is a switch every 15 seconds (there isn't; sometimes the beam is tasked for over a minute), would be ~145 kbps of that average.
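The overhead arithmetic, using the captured timings:

```python
# Beam-switch overhead per 15 s allocation, from the captured timings.
# Worst case counts the final and initial frames as lost along with the
# gap; best case counts only the gap itself.

FINAL_FRAME_MS = 12.4
SWITCH_MS = 52.6
INITIAL_FRAME_MS = 22.0
ALLOCATION_MS = 15_000.0

worst_ms = FINAL_FRAME_MS + SWITCH_MS + INITIAL_FRAME_MS   # ~87 ms
best_ms = SWITCH_MS

worst_frac = worst_ms / ALLOCATION_MS   # ~0.58% of the allocation
best_frac = best_ms / ALLOCATION_MS     # ~0.35% of the allocation

# At the 25 Mbps average seen in the iPerf3 upload test:
loss_kbps = 25e6 * worst_frac / 1e3
print(f"{worst_frac:.2%} {best_frac:.2%} {loss_kbps:.0f} kbps")  # → 0.58% 0.35% 145 kbps
```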

I'm starting to feel the urge to turn this into another Medium post, as I could now go down several rabbit holes to determine if the dwell time is fully utilized by other uplink signals, or there is waste there too.
> 2, There's an assumption that the dishys are "full duplex" - that is, that they can transmit and receive at the same time. While I don't know for sure, I'm pretty certain that the current dishy arrays that people have dissected cannot transmit and receive at the same time. This is for the same reason that the MIMO arrays on 802.11 stations cannot, in practice, transmit and receive at the same time. The satellite altitude imposes a significant power difference between sent and received signals.
The current consumer-grade UTs are not full-duplex, so that was never my assumption. The issue is not the power difference between signals; that is taken care of by FDD (uplink is 14 GHz, downlink 12 GHz). Manufacturing a full-duplex ESA on a single PCB is hard and expensive, because you need separate TX and RX paths capable of operating 100% of the time. Starlink has tested full duplex by combining separate TX and RX antennas in a single device, but the smart approach for the consumer-grade equipment was to go half-duplex and simplify. The satellites are full-duplex capable, as they have separate TX and RX antennas.
> To me this is confirmed by all the bufferbloat observations being made in the field.
> The problem with the kind of analysis that Mike Puchol is doing is that it assumes "wire speed" transmission without considering the problem of managing end-to-end latency at all.
> Some of you have heard me call this the "hot rodder attitude" to performance. Yes, you can design a car that only accelerates at full acceleration for a quarter mile. But that same car is worse than terrible for ordinary driving needs - it cannot stop, it cannot maintain speed easily, ...
> What we have if we look at frequency-bandwidth based simulations is a *terrible* network for carrying Internet traffic. Completely irrelevant.
> This Starlink constellation is used in a packet-switching mode. Lowest possible end-to-end latency is required, even more than throughput. When 100 stations want to send a packet through the satellite to the "cell" ground station, they can't all send at once. They want latency for their voice frame (if they are using Zoom) to be under 100 msec. from the PC to the Zoom headend and back to another PC. Voice frames are typically very small numbers of bits (whatever is needed to represent 10-30 msec. of high quality audio.
> If the packets can't do that 100 msec. round trip, quality is degraded to useless.
Taking the above measurements, if we follow the Starlink 75/25 split ratio seen in other areas (the Ku antennas on the satellites, for example), our 14% uplink duty cycle could turn into a 42% downlink duty cycle, with the rest used, e.g., for other cells. If we can achieve 30 Mbps on 60 MHz uplink at 14%, then 42% on 240 MHz in downlink would give ~360 Mbps. We have seen higher than this, so my math is likely off, plus there are caveats (the 30 Mbps could be using only a portion of the OFDM subcarriers, and we don't know symbol rates or modulations, for example). If they use the same spacing, the downlink could have 512 subcarriers.
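That scaling estimate, written out; note the 75/25-derived 42% duty cycle and the reuse of the uplink's spectral efficiency are assumptions, per the caveats above:

```python
# Back-of-envelope downlink estimate: scale the measured 30 Mbps uplink
# (60 MHz at a 14% duty cycle) to a hypothetical 240 MHz downlink at a
# 42% duty cycle, assuming the same per-MHz, per-duty-cycle efficiency.

UPLINK_MBPS = 30.0
UPLINK_BW_MHZ, DOWNLINK_BW_MHZ = 60.0, 240.0
UPLINK_DUTY_PCT, DOWNLINK_DUTY_PCT = 14, 42

downlink_mbps = (UPLINK_MBPS
                 * (DOWNLINK_BW_MHZ / UPLINK_BW_MHZ)     # 4x the bandwidth
                 * (DOWNLINK_DUTY_PCT / UPLINK_DUTY_PCT))  # 3x the duty cycle
print(downlink_mbps)   # → 360.0
```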

On the latency side, with 128 OFDM subcarriers you could have 128 terminals transmitting with equal latency to the POP - though very little throughput, indeed.

To your second point, and to summarize: the simulation I have cobbled together with pieces of wet string and bad code is an approximation. I have provided all the caveats around it, so that someone with more knowledge than me (this mailing list is full to the brim with such people!) can take it further and suggest damping factors I could add, such as latency.

Hope all the above was useful!

Best,

Mike
On Oct 3, 2022, 19:09 +0200, David P. Reed via Starlink <starlink at lists.bufferbloat.net>, wrote:
>
> I read through all of Mike Puchol's materials. Quite elaborate, and interesting theoretical work.
> However, I have to say I am quite disappointed for two reasons:
> 1. the system being modeled here does not appear to be based on facts about the actual implementation of the link and "satellite" switching protocols. Instead, it is based on calculating RF bandwidth availability, getting a *maximum achievable rate*. It's analogous to looking at 802.11ax or 802.11ac systems and assuming that the "channel" used by the access point can be fully utilized using whatever OFDM modulation scheme is being used. That is, it ignores what we in Ethernet land call the MAC protocol.
> 2, There's an assumption that the dishys are "full duplex" - that is, that they can transmit and receive at the same time. While I don't know for sure, I'm pretty certain that the current dishy arrays that people have dissected cannot transmit and receive at the same time. This is for the same reason that the MIMO arrays on 802.11 stations cannot, in practice, transmit and receive at the same time. The satellite altitude imposes a significant power difference between sent and received signals.
> To me this is confirmed by all the bufferbloat observations being made in the field.
> The problem with the kind of analysis that Mike Puchol is doing is that it assumes "wire speed" transmission without considering the problem of managing end-to-end latency at all.
> Some of you have heard me call this the "hot rodder attitude" to performance. Yes, you can design a car that only accelerates at full acceleration for a quarter mile. But that same car is worse than terrible for ordinary driving needs - it cannot stop, it cannot maintain speed easily, ...What we have if we look at frequency-bandwidth based simulations is a *terrible* network for carrying Internet traffic. Completely irrelevant.
> This Starlink constellation is used in a packet-switching mode. Lowest possible end-to-end latency is required, even more than throughput. When 100 stations want to send a packet through the satellite to the "cell" ground station, they can't all send at once. They want latency for their voice frame (if they are using Zoom) to be under 100 msec. from the PC to the Zoom headend and back to another PC. Voice frames are typically very small numbers of bits (whatever is needed to represent 10-30 msec. of high quality audio.
> If the packets can't do that 100 msec. round trip, quality is degraded to useless.
> _______________________________________________
> Starlink mailing list
> Starlink at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
-------------- next part --------------
Non-text attachments were scrubbed; the original images are archived at:
Uplink_width.PNG (image/png, 684449 bytes): <https://lists.bufferbloat.net/pipermail/starlink/attachments/20221004/66158a5f/attachment-0005.png>
Subcarrier_width.PNG (image/png, 32020 bytes): <https://lists.bufferbloat.net/pipermail/starlink/attachments/20221004/66158a5f/attachment-0006.png>
TX_duty_cycle.PNG (image/png, 42018 bytes): <https://lists.bufferbloat.net/pipermail/starlink/attachments/20221004/66158a5f/attachment-0007.png>
Channel change.png (image/png, 1194679 bytes): <https://lists.bufferbloat.net/pipermail/starlink/attachments/20221004/66158a5f/attachment-0008.png>
Change duration.png (image/png, 70576 bytes): <https://lists.bufferbloat.net/pipermail/starlink/attachments/20221004/66158a5f/attachment-0009.png>

