[Starlink] dynamically adjusting cake to starlink

Mike Puchol mike at starlink.sx
Wed Jun 9 08:16:50 EDT 2021


Greetings Dave,

I have just been introduced to the world of bufferbloat - something I’ve had to deal with in my WISP, but never had a proper name for. I’m an aerospace engineer by training, so please forgive my quite likely blunders in this space!

As for the topic at hand, I understand the concept - in essence, if we can find out what the Dishy - satellite - gateway triumvirate is doing in terms of capacity and buffering, we can apply that information to our own router’s buffer/queue strategy. I have taken a quick look at the document, and one assumption in it is, AFAIK, wrong: that the satellites are “bent pipes”. They actually process packets, which makes them an active component of buffering and queuing in the path (for the optical inter-satellite links to work, the satellites would necessarily have to process packets).
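
On the practical side, and with your getstats() sketch further down in mind: as far as I can tell from the community-written gRPC tools, the Dishy exposes a status endpoint on 192.168.100.1:9200 that grpcurl can query. Something along these lines might be a starting point - the field names are from memory and are an assumption on my part, and note that these report what the link is currently carrying rather than what it could carry, so they are at best a first approximation of the “right stats”:

  # Hypothetical getstats(): poll the Dishy status endpoint and print
  # "<uplink_bps> <downlink_bps>". Endpoint and field names are as reported
  # by third-party tooling - unverified on my own terminal.
  getstats() {
      grpcurl -plaintext -d '{"get_status":{}}' 192.168.100.1:9200 \
          SpaceX.API.Device.Device/Handle \
      | jq -r '.dishGetStatus | "\(.uplinkThroughputBps) \(.downlinkThroughputBps)"'
  }

If the numbers turn out to be usable, that would slot straight into the loop you describe below.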

At any given time, a satellite may have ~10k cells under its footprint that it can serve, of which GSO exclusion zones remove between 15% at high latitudes and 25% near the Equator. It has three arrays for downlink and one array for uplink, with 16 beams per array, so it can train up to 48 beams towards the ground and receive from up to 16 locations on the ground at any given time. To cover each cell there is therefore a physical TDM element, plus an additional multiplexing element for each terminal within a cell. If we assume a satellite has a capacity of ~20 Gbps, limited by the Ka links from satellite to gateway, each downlink spot beam can deliver ~416 Mbps. If we assume symmetrical performance, the 16 uplink beams from the single uplink array would “take in” ~416 Mbps per beam.
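
Spelling out that arithmetic (the ~20 Gbps figure and the beam counts are the assumptions above, nothing more):

  # Per-beam capacity under the assumptions above: ~20 Gbps per satellite,
  # 3 downlink arrays + 1 uplink array, 16 beams per array.
  sat_capacity_mbps=20000
  dl_beams=$((3 * 16))                      # 48 downlink beams
  ul_beams=16                               # 16 uplink beams
  echo $((sat_capacity_mbps / dl_beams))    # ~416 Mbps per downlink beam
  echo $((dl_beams / ul_beams))             # 3:1 downlink:uplink, i.e. the 75/25 split below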

Thus, a satellite is effectively limited to a 75/25 duty cycle between downlink and uplink, as it can paint each cell three times for download, for every time it receives from it.

We then get to the killer - how many cells can a satellite effectively serve? If we take 8000 cells as an average, each downlink spot beam would need to care for ~167 cells, and each uplink beam for ~500 cells. Focusing on uplink, each cell gets ~2ms of “beam time” per second to transmit, which is ridiculously low. From my tracker simulations, certain areas are served by around 8 satellites at any given time, so catering for satellite spacing, we could assume ~10ms of “beam time”. Still too low, as this would only allow for ~170 packets.
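
The intermediate numbers, for anyone who wants to check my division (the 1-second scheduling cycle is an assumption on my part):

  # Cells per beam and per-cell beam time, assuming ~8000 active cells and a
  # 1-second scheduling cycle over the 48 downlink / 16 uplink beams above.
  cells=8000
  echo $((cells / 48))          # ~167 cells per downlink beam
  echo $((cells / 16))          # ~500 cells per uplink beam
  echo $((1000 * 16 / cells))   # ~2 ms of uplink beam time per cell, per second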

My current estimate from back-of-the-envelope calculations is that Starlink uses cell TDM factors per beam of 1 to 5, depending on the customer density served under the footprint. Thus, worst case, a satellite is giving a cell ~200ms of downlink time and ~67ms of uplink time. A Dishy terminal will need to buffer traffic while it waits for its turn on a spot beam.
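
Whatever the exact schedule turns out to be, the buffering implication is simply rate times waiting time. A toy example, with numbers I am making up purely for illustration:

  # Bytes a Dishy must queue while waiting for an uplink slot: offered rate x wait.
  # Both values below are illustrative guesses, not measurements.
  upload_bps=10000000       # assume the user offers 10 Mbit/s upstream
  wait_ms=200               # assume a 200 ms gap between uplink opportunities
  echo $((upload_bps * wait_ms / 1000 / 8))          # ~250 kB queued at the terminal
  echo $((upload_bps * wait_ms / 1000 / 8 / 1500))   # ~166 full-size packets

That queue is exactly where the latency builds up, and where the cake/AQM discussion below comes in.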

Does this make sense, and does it match your observations?

Best,

Mike
On Jun 9, 2021, 11:12 +0200, Dave Taht <davet at teklibre.net>, wrote:
> Dear Mike:
>
> The biggest thing we need is something that can pull the right stats off the dishy and dynamically adjust cake (on outbound at least) to have the right amount of buffering for the available bandwidth, so as to make for better statistical multiplexing (FQ) and active queue management (AQM)
>
> It’s pretty simple: in mangled shell script syntax:
>
> while read -r up down <<< "$(getstats)"   # getstats: TBD, prints "<up> <down>"
> do
> tc qdisc change dev eth0 root cake bandwidth "$up"
> tc qdisc change dev ifb0 root cake bandwidth "$down"
> sleep 5
> done
>
> Which any router directly in front of the dishy can do (which is what we’ve been doing)
>
> But whatever magic “getstats()” would need to do is unclear from the stats we get out of it, and a better alternative would be for the dishy itself and their headends to be doing this with “BQL” backpressure.
>
> As for the huge reductions of latency and jitter under working load, and a vast improvement in QoE - for what we’ve been able to achieve thus far, see appendix A here:
>
> https://docs.google.com/document/d/1rVGC-iNq2NZ0jk4f3IAiVUHz2S9O6P-F3vVZU2yBYtw/edit?usp=sharing
>
> We’ve got plenty more data
> on uploads and downloads and other forms of traffic (starlink is optimizing for ping, only, over ipv6. Sigh)…
>
> … and a meeting with some starlink execs at 11AM today.
>
> I’m pretty sure at this point we will be able to make a massive improvement in starlink’s network design very quickly, after that meeting.
>
> > On Jun 8, 2021, at 2:54 PM, Nathan Owens <nathan at nathan.io> wrote:
> >
> > I invited Mike, the creator of the site (starlink.sx) to join the list - he’s put a crazy amount of work in to figure out which sats are active (with advice from Jonathan McDowell), programming GSO exclusion bands, etc. His dayjob is in the ISP business.
> >
> > > On Sat, Jun 5, 2021 at 8:31 PM Darrell Budic <budic at onholyground.com> wrote:
> > > > https://starlink.sx if you haven’t seen it yet. You can locate yourself, and it will make some educated guesses about which satellite to which ground station you’re using. Interesting to see the birds change and the links move between ground stations, lots going on to make these things work.
>

