* [Starlink] pretty cool starlink visualizer
@ 2021-06-06 3:31 Darrell Budic
2021-06-06 4:26 ` David Lang
2021-06-08 21:54 ` Nathan Owens
0 siblings, 2 replies; 17+ messages in thread
From: Darrell Budic @ 2021-06-06 3:31 UTC (permalink / raw)
To: starlink
[-- Attachment #1: Type: text/plain, Size: 318 bytes --]
https://starlink.sx <https://starlink.sx/> if you haven’t seen it yet. You can locate yourself, and it will make some educated guesses about which satellite and ground station you’re using. Interesting to see the birds change and the links move between ground stations; lots going on to make these things work.
[-- Attachment #2: Type: text/html, Size: 546 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] pretty cool starlink visualizer
2021-06-06 3:31 [Starlink] pretty cool starlink visualizer Darrell Budic
@ 2021-06-06 4:26 ` David Lang
2021-06-08 21:54 ` Nathan Owens
1 sibling, 0 replies; 17+ messages in thread
From: David Lang @ 2021-06-06 4:26 UTC (permalink / raw)
To: Darrell Budic; +Cc: starlink
[-- Attachment #1: Type: text/plain, Size: 613 bytes --]
take a look at https://satellitemap.space/ also covers oneweb
David Lang
On Sat, 5 Jun 2021, Darrell Budic wrote:
> Date: Sat, 5 Jun 2021 22:31:11 -0500
> From: Darrell Budic <budic@onholyground.com>
> To: starlink@lists.bufferbloat.net
> Subject: [Starlink] pretty cool starlink visualizer
>
> https://starlink.sx <https://starlink.sx/> if you haven’t seen it yet. You can locate yourself, and it will make some educated guesses about which satellite and ground station you’re using. Interesting to see the birds change and the links move between ground stations; lots going on to make these things work.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] pretty cool starlink visualizer
2021-06-06 3:31 [Starlink] pretty cool starlink visualizer Darrell Budic
2021-06-06 4:26 ` David Lang
@ 2021-06-08 21:54 ` Nathan Owens
2021-06-09 9:12 ` [Starlink] dynamically adjusting cake to starlink Dave Taht
1 sibling, 1 reply; 17+ messages in thread
From: Nathan Owens @ 2021-06-08 21:54 UTC (permalink / raw)
To: Darrell Budic; +Cc: starlink
[-- Attachment #1: Type: text/plain, Size: 815 bytes --]
I invited Mike, the creator of the site (starlink.sx) to join the list -
he’s put a crazy amount of work in to figure out which sats are active
(with advice from Jonathan McDowell), programming GSO exclusion bands, etc.
His dayjob is in the ISP business.
On Sat, Jun 5, 2021 at 8:31 PM Darrell Budic <budic@onholyground.com> wrote:
> https://starlink.sx if you haven’t seen it yet. You can locate yourself,
> and it will make some educated guesses about which satellite and ground
> station you’re using. Interesting to see the birds change and the links
> move between ground stations; lots going on to make these things work.
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
[-- Attachment #2: Type: text/html, Size: 1429 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* [Starlink] dynamically adjusting cake to starlink
2021-06-08 21:54 ` Nathan Owens
@ 2021-06-09 9:12 ` Dave Taht
2021-06-09 10:20 ` Mikael Abrahamsson
` (2 more replies)
0 siblings, 3 replies; 17+ messages in thread
From: Dave Taht @ 2021-06-09 9:12 UTC (permalink / raw)
To: Nathan Owens; +Cc: Darrell Budic, starlink
[-- Attachment #1: Type: text/plain, Size: 2656 bytes --]
Dear Mike:
The biggest thing we need is something that can pull the right stats off the dishy and dynamically adjust cake (on outbound at least) to have the right amount of buffering for the available bandwidth, so as to make for better statistical multiplexing (FQ) and active queue management (AQM).
It’s pretty simple: in mangled shell script syntax:
while up, down = getstats()
do
tc qdisc change dev eth0 root cake bandwidth $up
tc qdisc change dev ifb0 root cake bandwidth $down
done
Which any router directly in front of the dishy can do (which is what we’ve been doing)
But whatever magic “getstats()” would need to do is unclear from the stats we get out of it, and a better alternative would be for the dishy itself and their headends to be doing this with “BQL” backpressure.
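A literal, runnable rendering of the loop above might look like this in Python. Note the hedges: getstats() is stubbed with fixed numbers, because what it would really do is exactly the open question, and the device names (eth0, ifb0) are just the ones from the sketch:

```python
import subprocess
import time

def getstats():
    """Stub for whatever eventually pulls rate estimates off the dishy.
    Returns (uplink_mbit, downlink_mbit); fixed numbers for illustration."""
    return 15, 150

def cake_cmd(dev, mbit):
    """Build the tc invocation that retargets cake's shaper rate."""
    return ["tc", "qdisc", "change", "dev", dev,
            "root", "cake", "bandwidth", f"{mbit}mbit"]

def adjust_once(run=subprocess.run):
    """One pass of the loop: re-shape egress and (via ifb) ingress."""
    up, down = getstats()
    run(cake_cmd("eth0", up))    # egress toward the dishy
    run(cake_cmd("ifb0", down))  # ingress, redirected through an ifb device

def run_forever(poll_s=1.0):
    """The while/done loop from the sketch; the poll interval is a guess."""
    while True:
        adjust_once()
        time.sleep(poll_s)
```

The injectable `run` argument is only there so the command construction can be exercised without root or a real interface.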
As for the huge reductions of latency and jitter under working load, and a vast improvement in QoE - for what we’ve been able to achieve thus far, see appendix A here:
https://docs.google.com/document/d/1rVGC-iNq2NZ0jk4f3IAiVUHz2S9O6P-F3vVZU2yBYtw/edit?usp=sharing
We’ve got plenty more data
on uploads and downloads and other forms of traffic (starlink is optimizing for ping, only, over ipv6. Sigh)…
… and a meeting with some starlink execs at 11AM today.
I’m pretty sure at this point we will be able to make a massive improvement in starlink’s network design very quickly, after that meeting.
> On Jun 8, 2021, at 2:54 PM, Nathan Owens <nathan@nathan.io> wrote:
>
> I invited Mike, the creator of the site (starlink.sx <http://starlink.sx/>) to join the list - he’s put a crazy amount of work in to figure out which sats are active (with advice from Jonathan McDowell), programming GSO exclusion bands, etc. His dayjob is in the ISP business.
>
> On Sat, Jun 5, 2021 at 8:31 PM Darrell Budic <budic@onholyground.com <mailto:budic@onholyground.com>> wrote:
> https://starlink.sx <https://starlink.sx/> if you haven’t seen it yet. You can locate yourself, and it will make some educated guesses about which satellite and ground station you’re using. Interesting to see the birds change and the links move between ground stations; lots going on to make these things work.
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #2: Type: text/html, Size: 4496 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 9:12 ` [Starlink] dynamically adjusting cake to starlink Dave Taht
@ 2021-06-09 10:20 ` Mikael Abrahamsson
2021-06-09 16:39 ` Michael Richardson
2021-06-09 12:09 ` Nathan Owens
2021-06-09 12:16 ` Mike Puchol
2 siblings, 1 reply; 17+ messages in thread
From: Mikael Abrahamsson @ 2021-06-09 10:20 UTC (permalink / raw)
To: Dave Taht; +Cc: Nathan Owens, starlink
[-- Attachment #1: Type: text/plain, Size: 953 bytes --]
On Wed, 9 Jun 2021, Dave Taht wrote:
> … and a meeting with some starlink execs at 11AM today.
Nice!
The solution space you're working on ("one device knows its next-hop speed
and adjacent device needs to know this") is applicable to, for instance,
cable, DSL, PON, wifi, etc.
I seem to remember there has been work in BBF to have the BNG know the
sync-up speed of DSL in order to do proper buffering; what you need to do
is come at it from "the other end".
Would be nice if there was a generic mechanism for this, but since several
of these devices use L2, I think it'd have to be something LLDP-like (also
on L2).
Dave, how often does information regarding rate/scheduler need to be
distributed from the scheduling node to the node that is trying to not use
the upstream buffer? I presume this is in the 0.1 to 1s range, because the
scheduler might change quite frequently and substantially?
--
Mikael Abrahamsson email: swmike@swm.pp.se
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 9:12 ` [Starlink] dynamically adjusting cake to starlink Dave Taht
2021-06-09 10:20 ` Mikael Abrahamsson
@ 2021-06-09 12:09 ` Nathan Owens
2021-06-09 12:16 ` Mike Puchol
2 siblings, 0 replies; 17+ messages in thread
From: Nathan Owens @ 2021-06-09 12:09 UTC (permalink / raw)
To: Dave Taht; +Cc: Darrell Budic, starlink
[-- Attachment #1: Type: text/plain, Size: 2944 bytes --]
This one’s probably not a Mike question, since the site doesn’t pull
anything from dishy. The dish exposes all of its stats via gRPC - there
are several clients available. I can probably code this up next week if
someone doesn’t beat me to it.
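For reference, the community gRPC clients typically reach the dish at 192.168.100.1:9200 via grpcurl. The endpoint, service, and method names below are as commonly reported by those clients, not confirmed anywhere in this thread, so treat this as a hedged sketch rather than a documented API:

```python
import json
import shutil
import subprocess

# Assumed (commonly reported, unverified) dishy gRPC endpoint and method.
DISHY = "192.168.100.1:9200"
METHOD = "SpaceX.API.Device.Device/Handle"

def build_status_cmd():
    """grpcurl invocation asking the dish for its current status blob."""
    return ["grpcurl", "-plaintext",
            "-d", json.dumps({"get_status": {}}),
            DISHY, METHOD]

def get_status():
    """Run the query if grpcurl is installed; return parsed JSON,
    or None if the tool is missing or the dish doesn't answer."""
    if shutil.which("grpcurl") is None:
        return None
    out = subprocess.run(build_status_cmd(), capture_output=True, text=True)
    return json.loads(out.stdout) if out.returncode == 0 else None
```

The status blob would then be the raw material for whatever "getstats()" mapping from dishy telemetry to shaper rates turns out to work.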
—Nathan
On Wed, Jun 9, 2021 at 2:12 AM Dave Taht <davet@teklibre.net> wrote:
> Dear Mike:
>
> The biggest thing we need is something that can pull the right stats off
> the dishy and dynamically adjust cake (on outbound at least) to have the
> right amount of buffering for the available bandwidth, so as to make for
> better statistical multiplexing (FQ) and active queue management (AQM)
>
> It’s pretty simple: in mangled shell script syntax:
>
> while up, down = getstats()
> do
> tc qdisc change dev eth0 root cake bandwidth $up
> tc qdisc change dev ifb0 root cake bandwidth $down
> done
>
> Which any router directly in front of the dishy can do (which is what
> we’ve been doing)
>
> But whatever magic “getstats()” would need to do is unclear from the stats
> we get out of it, and a better alternative would be for the dishy itself
> and their headends to be doing this with “BQL” backpressure.
>
> As for the huge reductions of latency and jitter under working load, and a
> vast improvement in QoE - for what we’ve been able to achieve thus far, see
> appendix A here:
>
>
> https://docs.google.com/document/d/1rVGC-iNq2NZ0jk4f3IAiVUHz2S9O6P-F3vVZU2yBYtw/edit?usp=sharing
>
> We’ve got plenty more data
> on uploads and downloads and other forms of traffic (starlink is
> optimizing for ping, only, over ipv6. Sigh)…
>
> … and a meeting with some starlink execs at 11AM today.
>
> I’m pretty sure at this point we will be able to make a massive
> improvement in starlink’s network design very quickly, after that meeting.
>
> On Jun 8, 2021, at 2:54 PM, Nathan Owens <nathan@nathan.io> wrote:
>
> I invited Mike, the creator of the site (starlink.sx) to join the list -
> he’s put a crazy amount of work in to figure out which sats are active
> (with advice from Jonathan McDowell), programming GSO exclusion bands, etc.
> His dayjob is in the ISP business.
>
> On Sat, Jun 5, 2021 at 8:31 PM Darrell Budic <budic@onholyground.com>
> wrote:
>
>> https://starlink.sx if you haven’t seen it yet. You can locate yourself,
>> and it will make some educated guesses about which satellite and ground
>> station you’re using. Interesting to see the birds change and the links
>> move between ground stations; lots going on to make these things work.
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>
>
[-- Attachment #2: Type: text/html, Size: 4600 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 9:12 ` [Starlink] dynamically adjusting cake to starlink Dave Taht
2021-06-09 10:20 ` Mikael Abrahamsson
2021-06-09 12:09 ` Nathan Owens
@ 2021-06-09 12:16 ` Mike Puchol
2021-06-09 13:21 ` Dave Taht
` (2 more replies)
2 siblings, 3 replies; 17+ messages in thread
From: Mike Puchol @ 2021-06-09 12:16 UTC (permalink / raw)
To: Nathan Owens, Dave Taht; +Cc: starlink
[-- Attachment #1: Type: text/plain, Size: 5609 bytes --]
Greetings Dave,
I have just been introduced to the world of bufferbloat - something I’ve had to deal with in my WISP, but never had a proper name for. I’m an aerospace engineer by origin, so please forgive my quite likely blunders in this space!
As for the topic at hand, I understand the concept - in essence, if we can find out what the Dishy - satellite - gateway triumvirate is doing in terms of capacity and buffering, we can apply that information to our own router’s buffer/queue strategy. I have taken a quick look at the document, and there is one assumption that is wrong, AFAIK, which is that the satellites are “bent pipes” - they actually process packets, which means they are an active component of buffering and queuing in the path (for optical links to work, satellites would necessarily have to process packets).
At any given time, a satellite may have ~10k cells under its footprint that it can serve, and GSO exclusions can exclude between 15% at high latitudes and 25% near the Equator. It has three arrays for downlinks and one array for uplinks, with 16 beams per array. Thus, it can train up to 48 beams towards the ground, and receive from up to 16 locations on the ground at any given time. Thus, to cover each cell, there is a physical TDM element, plus an additional multiplexing element for each terminal in each cell. If we assume a satellite has a capacity of ~20 Gbps, limited by the Ka links from satellite to gateway, each spot beam is capable of delivering ~416 Mbps. If we assume symmetrical performance, the 16 uplink beams from the one panel would “take in” ~416 Mbps per beam.
Thus, a satellite is effectively limited to a 75/25 duty cycle between downlink and uplink, as it can paint each cell three times for download, for every time it receives from it.
We then get to the killer - how many cells can a satellite effectively serve? If we take 8000 cells as an average, each downlink spot beam would need to care for ~167 cells, and in uplink ~500 cells. Focusing on uplink, each cell gets ~2ms of “beam time” to transmit, which is ridiculously low. From my tracker simulations, certain areas are served by around 8 satellites at any given time, so catering for satellite spacing, we could assume 10ms of “beam time”. Still too low, as this would only allow for ~170 packets.
My current estimate from back-of-the-envelope calculations is that Starlink uses cell TDM factors per beam of 1 to 5, depending on served customer density under the footprint. Thus, worst case, a satellite is giving a cell ~200ms of downlink time and ~67ms of uplink time. A Dishy terminal will need to buffer traffic while it waits for its turn on a spot beam.
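The arithmetic above can be checked directly; every input in this sketch is one of the stated assumptions from the preceding paragraphs (~20 Gbps capacity, 48 downlink and 16 uplink beams, 8000 cells), not a measured value:

```python
# All inputs are the assumptions stated in the text above, not measurements.
SAT_CAPACITY_MBPS = 20_000        # ~20 Gbps, limited by the Ka gateway links
DOWNLINK_BEAMS = 3 * 16           # three downlink arrays, 16 beams each
UPLINK_BEAMS = 16                 # one uplink array
CELLS = 8_000                     # average cells under the footprint

per_beam_mbps = SAT_CAPACITY_MBPS / DOWNLINK_BEAMS   # ~416 Mbps per beam
cells_per_downlink_beam = CELLS / DOWNLINK_BEAMS     # ~167 cells per beam
cells_per_uplink_beam = CELLS / UPLINK_BEAMS         # 500 cells per beam
uplink_ms_per_cell = 1000 / cells_per_uplink_beam    # ~2 ms of beam time/s

print(per_beam_mbps, cells_per_downlink_beam,
      cells_per_uplink_beam, uplink_ms_per_cell)
```

Changing any one assumption (say, the ~20 Gbps gateway-link figure) propagates straight through these ratios.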
Does this make sense, also from observations?
Best,
Mike
On Jun 9, 2021, 11:12 +0200, Dave Taht <davet@teklibre.net>, wrote:
> Dear Mike:
>
> The biggest thing we need is something that can pull the right stats off the dishy and dynamically adjust cake (on outbound at least) to have the right amount of buffering for the available bandwidth, so as to make for better statistical multiplexing (FQ) and active queue management (AQM)
>
> It’s pretty simple: in mangled shell script syntax:
>
> while up, down = getstats()
> do
> tc qdisc change dev eth0 root cake bandwidth $up
> tc qdisc change dev ifb0 root cake bandwidth $down
> done
>
> Which any router directly in front of the dishy can do (which is what we’ve been doing)
>
> But whatever magic “getstats()” would need to do is unclear from the stats we get out of it, and a better alternative would be for the dishy itself and their headends to be doing this with “BQL” backpressure.
>
> As for the huge reductions of latency and jitter under working load, and a vast improvement in QoE - for what we’ve been able to achieve thus far, see appendix A here:
>
> https://docs.google.com/document/d/1rVGC-iNq2NZ0jk4f3IAiVUHz2S9O6P-F3vVZU2yBYtw/edit?usp=sharing
>
> We’ve got plenty more data
> on uploads and downloads and other forms of traffic (starlink is optimizing for ping, only, over ipv6. Sigh)…
>
> … and a meeting with some starlink execs at 11AM today.
>
> I’m pretty sure at this point we will be able to make a massive improvement in starlink’s network design very quickly, after that meeting.
>
> > On Jun 8, 2021, at 2:54 PM, Nathan Owens <nathan@nathan.io> wrote:
> >
> > I invited Mike, the creator of the site (starlink.sx) to join the list - he’s put a crazy amount of work in to figure out which sats are active (with advice from Jonathan McDowell), programming GSO exclusion bands, etc. His dayjob is in the ISP business.
> >
> > > On Sat, Jun 5, 2021 at 8:31 PM Darrell Budic <budic@onholyground.com> wrote:
> > > > https://starlink.sx if you haven’t seen it yet. You can locate yourself, and it will make some educated guesses about which satellite and ground station you’re using. Interesting to see the birds change and the links move between ground stations; lots going on to make these things work.
> > > > _______________________________________________
> > > > Starlink mailing list
> > > > Starlink@lists.bufferbloat.net
> > > > https://lists.bufferbloat.net/listinfo/starlink
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #2: Type: text/html, Size: 8046 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 12:16 ` Mike Puchol
@ 2021-06-09 13:21 ` Dave Taht
2021-06-09 14:12 ` Michael Richardson
2021-06-09 15:32 ` Nathan Owens
2 siblings, 0 replies; 17+ messages in thread
From: Dave Taht @ 2021-06-09 13:21 UTC (permalink / raw)
To: Mike Puchol; +Cc: Nathan Owens, starlink
[-- Attachment #1: Type: text/plain, Size: 6593 bytes --]
This was a fun talk. When I gave it I thought… "If I could only get *one* MAJOR ISP to update their routers and headends…"
https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-but-its-not-over-yet/
More below:
> On Jun 9, 2021, at 5:16 AM, Mike Puchol <mike@starlink.sx> wrote:
>
> Greetings Dave,
>
> I have just been introduced to the world of bufferbloat - something I’ve had to deal with in my WISP, but never had a proper name for. I’m an aerospace engineer by origin, so please forgive my quite likely blunders in this space!
>
Not just you, by a long shot.
> As for the topic at hand, I understand the concept - in essence, if we can find out what the Dishy - satellite - gateway triumvirate is doing in terms of capacity and buffering, we can apply that information to our own router’s buffer/queue strategy. I have taken a quick look at the document, and there is one assumption that is wrong, AFAIK, which is that the satellites are “bent pipes” - they actually process packets, which means they are an active component of buffering and queuing in the path (for optical links to work, satellites would necessarily have to process packets).
Both of my other satcom experts were making the assumption it was actually a bent pipe at this point. I had thought it did packets, simply because I didn’t
grok how they could possibly globally optimize the world solution any other way.
>
> At any given time, a satellite may have ~10k cells under its footprint that it can serve, and GSO exclusions can exclude between 15% at high latitudes and 25% near the Equator. It has three arrays for downlinks and one array for uplinks, with 16 beams per array. Thus, it can train up to 48 beams towards the ground, and receive from up to 16 locations on the ground at any given time. Thus, to cover each cell, there is a physical TDM element, plus an additional multiplexing element for each terminal in each cell. If we assume a satellite has a capacity of ~20 Gbps, limited by the Ka links from satellite to gateway, each spot beam is capable of delivering ~416 Mbps. If we assume symmetrical performance, the 16 uplink beams from the one panel would “take in” ~416 Mbps per beam.
>
> Thus, a satellite is effectively limited to a 75/25 duty cycle between downlink and uplink, as it can paint each cell three times for download, for every time it receives from it.
>
> We then get to the killer - how many cells can a satellite effectively serve? If we take 8000 cells as an average, each downlink spot beam would need to care for ~167 cells, and in uplink ~500 cells. Focusing on uplink, each cell gets ~2ms of “beam time” to transmit, which is ridiculously low. From my tracker simulations, certain areas are served by around 8 satellites at any given time, so catering for satellite spacing, we could assume 10ms of “beam time”. Still too low, as this would only allow for ~170 packets.
>
> My current estimate from back-of-the-envelope calculations is that Starlink uses cell TDM factors per beam of 1 to 5, depending on served customer density under the footprint. Thus, worst case, a satellite is giving a cell ~200ms of downlink time and ~67ms of uplink time. A Dishy terminal will need to buffer traffic while it waits for its turn on a spot beam.
>
> Does this make sense, also from observations?
>
> Best,
>
> Mike
> On Jun 9, 2021, 11:12 +0200, Dave Taht <davet@teklibre.net>, wrote:
>> Dear Mike:
>>
>> The biggest thing we need is something that can pull the right stats off the dishy and dynamically adjust cake (on outbound at least) to have the right amount of buffering for the available bandwidth, so as to make for better statistical multiplexing (FQ) and active queue management (AQM)
>>
>> It’s pretty simple: in mangled shell script syntax:
>>
>> while up, down = getstats()
>> do
>> tc qdisc change dev eth0 root cake bandwidth $up
>> tc qdisc change dev ifb0 root cake bandwidth $down
>> done
>>
>> Which any router directly in front of the dishy can do (which is what we’ve been doing)
>>
>> But whatever magic “getstats()” would need to do is unclear from the stats we get out of it, and a better alternative would be for the dishy itself and their headends to be doing this with “BQL” backpressure.
>>
>> As for the huge reductions of latency and jitter under working load, and a vast improvement in QoE - for what we’ve been able to achieve thus far, see appendix A here:
>>
>> https://docs.google.com/document/d/1rVGC-iNq2NZ0jk4f3IAiVUHz2S9O6P-F3vVZU2yBYtw/edit?usp=sharing <https://docs.google.com/document/d/1rVGC-iNq2NZ0jk4f3IAiVUHz2S9O6P-F3vVZU2yBYtw/edit?usp=sharing>
>>
>> We’ve got plenty more data
>> on uploads and downloads and other forms of traffic (starlink is optimizing for ping, only, over ipv6. Sigh)…
>>
>> … and a meeting with some starlink execs at 11AM today.
>>
>> I’m pretty sure at this point we will be able to make a massive improvement in starlink’s network design very quickly, after that meeting.
>>
>>> On Jun 8, 2021, at 2:54 PM, Nathan Owens <nathan@nathan.io <mailto:nathan@nathan.io>> wrote:
>>>
>>> I invited Mike, the creator of the site (starlink.sx <http://starlink.sx/>) to join the list - he’s put a crazy amount of work in to figure out which sats are active (with advice from Jonathan McDowell), programming GSO exclusion bands, etc. His dayjob is in the ISP business.
>>>
>>> On Sat, Jun 5, 2021 at 8:31 PM Darrell Budic <budic@onholyground.com <mailto:budic@onholyground.com>> wrote:
>>> https://starlink.sx <https://starlink.sx/> if you haven’t seen it yet. You can locate yourself, and it will make some educated guesses about which satellite and ground station you’re using. Interesting to see the birds change and the links move between ground stations; lots going on to make these things work.
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>> https://lists.bufferbloat.net/listinfo/starlink <https://lists.bufferbloat.net/listinfo/starlink>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net <mailto:Starlink@lists.bufferbloat.net>
>>> https://lists.bufferbloat.net/listinfo/starlink
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #2: Type: text/html, Size: 10054 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 12:16 ` Mike Puchol
2021-06-09 13:21 ` Dave Taht
@ 2021-06-09 14:12 ` Michael Richardson
2021-06-09 15:23 ` Mike Puchol
2021-06-09 15:32 ` Nathan Owens
2 siblings, 1 reply; 17+ messages in thread
From: Michael Richardson @ 2021-06-09 14:12 UTC (permalink / raw)
To: Mike Puchol, Nathan Owens, Dave Taht, starlink
welcome! So happy to have you.
Mike Puchol <mike@starlink.sx> wrote:
> As for the topic at hand, I understand the concept - in essence, if we
> can find out what the Dishy - satellite - gateway triumvirate is doing in
> terms of capacity and buffering, we can apply that information to our
> own router’s buffer/queue strategy. I have taken a quick look at the
> document, and there is one assumption that is wrong, AFAIK, which is
> that the satellites are “bent pipes” - they actually process packets,
> which means they are an active component of buffering and queuing in
> the path (for optical links to work, satellites would necessarily have
> to process packets).
So, when we say "bent pipe", we don't mean that literally. (I know that it
was literally true at some point.) We don't mean to imply that the satellite
does not reconstruct the binary content of the packet from the bauds of
symbols.
What we primarily mean is that the stuff that goes in one side *ALL* comes out the
other side, and that the satellite does not *at this time* make nexthop
decisions based upon (IP) packet content. Of course there is some kind of
identifier to tell the satellite which end-user station to send data to in the
direction towards the user.
(Also, we talk about uplink/downlink from the point of view of the end
user station. But are there better terms from the satellite's point of
view to distinguish traffic to/from the end user?)
It is my understanding that at some point, traffic between (end-user)
stations will be possible via multiple satellite hops, and without a trip to
the ground for a routing decision. But that isn't happening right now.
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | IoT architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 14:12 ` Michael Richardson
@ 2021-06-09 15:23 ` Mike Puchol
2021-06-09 21:18 ` Michael Richardson
0 siblings, 1 reply; 17+ messages in thread
From: Mike Puchol @ 2021-06-09 15:23 UTC (permalink / raw)
To: Nathan Owens, Dave Taht, starlink, Michael Richardson
[-- Attachment #1: Type: text/plain, Size: 4115 bytes --]
Hi Michael, inline below:
On Jun 9, 2021, 16:12 +0200, Michael Richardson <mcr@sandelman.ca>, wrote:
>
> welcome! So happy to have you.
>
> Mike Puchol <mike@starlink.sx> wrote:
> > As for the topic at hand, I understand the concept - in essence, if we
> > can find out what the Dishy - satellite - gateway triumvirate is doing in
> > terms of capacity and buffering, we can apply that information to our
> > own router’s buffer/queue strategy. I have taken a quick look at the
> > document, and there is one assumption that is wrong, AFAIK, which is
> > that the satellites are “bent pipes” - they actually process packets,
> > which means they are an active component of buffering and queuing in
> > the path (for optical links to work, satellites would necessarily have
> > to process packets).
>
> So, when we say "bent pipe", we don't mean that literally. (I know that it
> was literally true at some point.) We don't mean to imply that the satellite
> does not reconstruct the binary content of the packet from the bauds of
> symbols.
Understood - however, “bent pipe” has some implications from traditional GSO terminology. Modern satellites reconstruct the binary stream, but still act as a “bent pipe” in the traditional sense.
>
> What we primarily mean is that the stuff that goes in one side *ALL* comes out the
> other side, and that the satellite does not *at this time* make nexthop
> decisions based upon (IP) packet content. Of course there is some kind of
> identifier to tell the satellite which end-user station to send data to in the
> direction towards the user.
This is correct, with a few twists once you throw in inter-satellite links. In future satellite versions, optical links will allow satellites within the same orbital plane to use each other as relays, thus providing coverage in areas not within a gateway’s coverage. This is particularly relevant in high latitudes where fiber is scarce, so the polar orbits are getting optical links first. Take the scenario where each satellite has its own gateway: the uplinks and downlinks are balanced. Now, assume the case where a satellite does not have a gateway in range, but has another satellite in range which does have a gateway link (A):
Suddenly, satellite A needs to handle the traffic from its own cells, plus the cells served by satellite B. This can go on, with multiple satellites being relayed by a single satellite - there will be limits, of course, as the Ka band gateway links do not have infinite capacity. If you take a look at my tracker, the velocity of the satellites calls for extremely frequent topology changes - Starlink has indicated they re-calculate every 15 seconds.
Whatever mechanism Starlink uses to route traffic internally, it is certainly more advanced than a modern regenerative “bent pipe” (or I have totally overestimated the capabilities!), and involves a level of packet or frame handling higher up the stack.
> (Also, we talk about uplink/downlink from the point of view of the end
> user station. But are there better terms from the satellite's point of
> view to distinguish traffic to/from the end user?)
In general, downlink is anything from satellite to ground, be it satellite -> gateway or satellite -> terminal, and uplink the reverse path. These are the clearest terms to use, IMHO. Thus, if the satellite-to-terminal link has a 75/25 DL/UL duty cycle, the satellite-to-gateway link will be reversed, with a 25/75 DL/UL duty cycle.
> It is my understanding that at some point, traffic between (end-user)
> stations will be possible via multiple satellite hops, and without a trip to
> the ground for a routing decision. But that isn't happening right now.
That isn’t happening yet, but you don’t want to design a system that needs a massive upgrade further down the line once you put optical inter-satellite links into operation. It’s much easier to design, implement, and use the mechanism from the get-go, as it simplifies other things (traffic delivery from gateways to a specific POP via user context, for example).
Best,
Mike
[-- Attachment #2.1: Type: text/html, Size: 5491 bytes --]
[-- Attachment #2.2: Starlink beam arrangements-2.png --]
[-- Type: image/png, Size: 39962 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 12:16 ` Mike Puchol
2021-06-09 13:21 ` Dave Taht
2021-06-09 14:12 ` Michael Richardson
@ 2021-06-09 15:32 ` Nathan Owens
2021-06-09 15:46 ` David Lang
2 siblings, 1 reply; 17+ messages in thread
From: Nathan Owens @ 2021-06-09 15:32 UTC (permalink / raw)
To: Mike Puchol; +Cc: Dave Taht, starlink
[-- Attachment #1: Type: text/plain, Size: 7082 bytes --]
> At any given time, a satellite may have ~10k cells under its footprint
that it can serve, and GSO exclusions can exclude between 15% at high
latitudes and 25% near the Equator. It has three arrays for downlinks and
one array for uplinks, with 16 beams per array. Thus, it can train up to 48
beams towards the ground, and receive from up to 16 locations on the ground
at any given time. Thus, to cover each cell, there is a physical TDM
element, plus an additional multiplexing element for each terminal in each
cell. If we assume a satellite has a capacity of ~20 Gbps, limited by the
Ka links from satellite to gateway, each spot beam is capable of delivering
~416 Mbps. If we assume symmetrical performance, the
16 uplink beams from the one panel would “take in” ~416 Mbps per beam.
I think, based on the gateway docs from Anatel, we know this number is
probably closer to ~13-16 Gbps per Ka antenna, with likely only 1 active.
Given that not every user would be receiving data equally, the peak speed per
cell is probably >400 Mbps; we've seen speedtests over 500 Mbps.
I'm also guessing radio resource planning is pretty dynamic, or will need
to be; I doubt it's a fixed 200/70 ms kind of thing.
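The per-beam arithmetic behind these figures is easy to reproduce; here's a quick sketch using the thread's assumed numbers (none of the capacities below are confirmed by SpaceX):

```python
# Per-spot-beam arithmetic from the thread's assumptions (not published specs).
downlink_beams = 3 * 16          # three downlink arrays, 16 beams each

# Mike's assumption: ~20 Gbps total, limited by the Ka gateway links.
per_beam_mbps = 20_000 // downlink_beams
print(per_beam_mbps)             # 416, matching the ~416 Mbps figure above

# The Anatel-derived figure: ~13-16 Gbps per Ka antenna, likely one active.
for cap_mbps in (13_000, 16_000):
    print(cap_mbps // downlink_beams)   # 270 and 333 Mbps per beam
```

With one ~13-16 Gbps antenna active, the per-beam average falls below ~416 Mbps, which is why per-cell peaks above 400 Mbps imply the beams are not shared evenly at any instant.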
On Wed, Jun 9, 2021 at 5:17 AM Mike Puchol <mike@starlink.sx> wrote:
> Greetings Dave,
>
> I have just been introduced into the world of bufferbloat - something I’ve
> had to deal with in my WISP, but never had a proper name for. I’m an
> aerospace engineer by origin, so please forgive my quite likely blunders in
> this space!
>
> As for the topic at hand, I understand the concept - in essence, if we can
> find what the Dishy - satellite - gateway triumvirate is doing in terms of
> capacity and buffering, we can apply that information into our own router’s
> buffer/queue strategy. I have taken a quick look at the document, and there
> is one assumption that is wrong, AFAIK, which is that the satellites are
> “bent pipes” - they actually process packets, which means they are an
> active component of buffering and queuing in the path (for optical links to
> work, satellites would necessarily have to process packets).
>
> At any given time, a satellite may have ~10k cells under its footprint
> that it can serve, and GSO exclusions can exclude between 15% at high
> latitudes and 25% near the Equator. It has three arrays for downlinks and
> one array for uplinks, with 16 beams per array. Thus, it can train up to 48
> beams towards the ground, and receive from up to 16 locations on the ground
> at any given time. Thus, to cover each cell, there is a physical TDM
> element, plus an additional multiplexing element for each terminal in each
> cell. If we assume a satellite has a capacity of ~20 Gbps, limited by the
> Ka links from satellite to gateway, each spot beam is capable of delivering
> ~416 Mbps. If we assume symmetrical performance, the
> 16 uplink beams from the one panel would “take in” ~416 Mbps per beam.
>
> Thus, a satellite is effectively limited to a 75/25 duty cycle between
> downlink and uplink, as it can paint each cell three times for download
> for every time it receives from it.
>
> We then get to the killer - how many cells can a satellite effectively
> serve? If we take 8000 cells as an average, each downlink spot beam would
> need to care for ~167 cells, and in uplink ~500 cells. Focusing on uplink,
> each cell gets ~2ms of “beam time” to transmit, which is ridiculously low.
> From my tracker simulations, certain areas are served by around 8
> satellites at any given time, so catering for satellite spacing, we could
> assume 10ms of “beam time”. Still too low, as this would only allow for ~170
> packets.
>
> My current estimate from back of the envelope calculations is that
> Starlink uses cell TDM factors per beam of 1 to 5, depending on served
> customer density under the footprint. Thus, worst case, a satellite is
> giving a cell ~200ms of downlink time and ~67ms of uplink time. A Dishy
> terminal will need to buffer traffic while it gets its turn on a spot beam.
>
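Mike's beam-time figures can be reproduced in a few lines (a sketch; every input is one of his stated assumptions):

```python
# Cells per beam and per-cell beam time, using the assumptions in the post.
cells = 8000
dl_beams, ul_beams = 48, 16

print(cells // dl_beams)              # 166 -> rounds to the ~167 cells per downlink beam
print(cells // ul_beams)              # 500 cells per uplink beam

# TDM: each of 500 cells sharing one uplink beam gets ~2 ms per second.
beam_time_ms = 1000 / (cells / ul_beams)
print(beam_time_ms)                   # 2.0

# "~170 packets in 10 ms" implies a beam rate near 200 Mbps at a 1500-byte MTU:
implied_mbps = 170 * 1500 * 8 / 0.010 / 1e6
print(implied_mbps)                   # 204.0
```
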
> Does this make sense, also from observations?
>
> Best,
>
> Mike
> On Jun 9, 2021, 11:12 +0200, Dave Taht <davet@teklibre.net>, wrote:
>
> Dear Mike:
>
> The biggest thing we need is something that can pull the right stats off
> the dishy and dynamically adjust cake (on outbound at least) to have the
> right amount of buffering for the available bandwidth, so as to make for
> better statistical multiplexing (FQ) and active queue management (AQM)
>
> It’s pretty simple: in mangled shell script syntax:
>
> while true; do
>   read -r up down < <(getstats)   # hypothetical helper printing "up down"
>   tc qdisc change dev eth0 root cake bandwidth "$up"
>   tc qdisc change dev ifb0 root cake bandwidth "$down"
> done
>
> Which any router directly in front of the dishy can do (which is what
> we’ve been doing)
>
> But whatever magic “getstats()” would need to do is unclear from the stats
> we get out of it, and a better alternative would be for the dishy itself
> and their headends to be doing this with “BQL” backpressure.
>
> As for the huge reductions of latency and jitter under working load, and a
> vast improvement in QoE - for what we’ve been able to achieve thus far, see
> appendix A here:
>
>
> https://docs.google.com/document/d/1rVGC-iNq2NZ0jk4f3IAiVUHz2S9O6P-F3vVZU2yBYtw/edit?usp=sharing
>
> We’ve got plenty more data
> on uploads and downloads and other forms of traffic (starlink is
> optimizing for ping, only, over ipv6. Sigh)…
>
> … and a meeting with some starlink execs at 11AM today.
>
> I’m pretty sure at this point we will be able to make a massive
> improvement in starlink’s network design very quickly, after that meeting.
>
> On Jun 8, 2021, at 2:54 PM, Nathan Owens <nathan@nathan.io> wrote:
>
> I invited Mike, the creator of the site (starlink.sx) to join the list -
> he’s put a crazy amount of work in to figure out which sats are active
> (with advice from Jonathan McDowell), programming GSO exclusion bands, etc.
> His dayjob is in the ISP business.
>
> On Sat, Jun 5, 2021 at 8:31 PM Darrell Budic <budic@onholyground.com>
> wrote:
>
>> https://starlink.sx if you haven’t seen it yet. You can locate yourself,
>> and it will make some educated guesses about which satellite to which
>> ground station you’re using. Interesting to see the birds change and the
>> links move between ground stations, lots going on to make these things work.
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>
[-- Attachment #2: Type: text/html, Size: 9158 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 15:32 ` Nathan Owens
@ 2021-06-09 15:46 ` David Lang
0 siblings, 0 replies; 17+ messages in thread
From: David Lang @ 2021-06-09 15:46 UTC (permalink / raw)
To: Nathan Owens; +Cc: Mike Puchol, starlink
[-- Attachment #1: Type: text/plain, Size: 8447 bytes --]
remember, perfect is the enemy of good enough.
In an ideal world, we would know the available throughput as it changes, but in
a shared-medium environment, that requires knowing the future behavior of other
stations that we are not in contact with.
In practice, we can control the dishy sending side, and try to convince SpaceX
to control the ground station sending side (and eventually take intermediate-hop
congestion into account; something like ECN signalling to push back on the
senders would potentially work. The key is making whatever the bottleneck is
tell senders to slow down in some way).
To do this, ideally dishy would have AQM and manage the buffers there. If we
have to do it in the router before dishy, then we need to either have minimal
buffers in dishy, or get regular feedback on what the buffer sizes on dishy are
(i.e. do we need to slow down, or can we speed up a bit?).
At the moment, all we can do is make some estimates of throughput and hard-code
those: if they are too low, we leave airtime idle; if they are too high, buffers
build up.
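The hard-coded fallback David describes amounts to a pair of static cake shapers in front of Dishy; a minimal sketch (the rates and interface names are placeholders; pick values somewhat below your observed worst case):

```shell
# Static cake shaping on the router facing Dishy; all numbers are guesses.
# Egress (upload) on the WAN-facing interface:
tc qdisc replace dev eth0 root cake bandwidth 8mbit

# Ingress (download) shaped via an ifb redirect:
modprobe ifb
ip link add ifb0 type ifb 2>/dev/null || true
ip link set ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all matchall \
    action mirred egress redirect dev ifb0
tc qdisc replace dev ifb0 root cake bandwidth 80mbit
```

Set the rates too low and airtime sits idle; too high, and queues build in Dishy instead, which is exactly the tradeoff described above.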
On Wed, 9 Jun 2021, Nathan Owens
wrote:
> Date: Wed, 9 Jun 2021 08:32:46 -0700
> From: Nathan Owens <nathan@nathan.io>
> To: Mike Puchol <mike@starlink.sx>
> Cc: starlink@lists.bufferbloat.net
> Subject: Re: [Starlink] dynamically adjusting cake to starlink
>
>> At any given time, a satellite may have ~10k cells under its footprint
> that it can serve, and GSO exclusions can exclude between 15% at high
> latitudes and 25% near the Equator. It has three arrays for downlinks and
> one array for uplinks, with 16 beams per array. Thus, it can train up to 48
> beams towards the ground, and receive from up to 16 locations on the ground
> at any given time. Thus, to cover each cell, there is a physical TDM
> element, plus an additional multiplexing element for each terminal in each
> cell. If we assume a satellite has a capacity of ~20 Gbps, limited by the
> Ka links from satellite to gateway, each spot beam is capable of delivering
> ~416 Mbps. If we assume symmetrical performance, the
> 16 uplink beams from the one panel would “take in” ~416 Mbps per beam.
>
> I think, based on the gateway docs from Anatel, we know this number is
> probably closer to ~13-16 Gbps per Ka antenna, with likely only 1 active.
> Given that not every user would be receiving data equally, the peak speed per
> cell is probably >400 Mbps; we've seen speedtests over 500 Mbps.
> I'm also guessing radio resource planning is pretty dynamic, or will need
> to be; I doubt it's a fixed 200/70 ms kind of thing.
>
>
> On Wed, Jun 9, 2021 at 5:17 AM Mike Puchol <mike@starlink.sx> wrote:
>
>> Greetings Dave,
>>
>> I have just been introduced into the world of bufferbloat - something I’ve
>> had to deal with in my WISP, but never had a proper name for. I’m an
>> aerospace engineer by origin, so please forgive my quite likely blunders in
>> this space!
>>
>> As for the topic at hand, I understand the concept - in essence, if we can
>> find what the Dishy - satellite - gateway triumvirate is doing in terms of
>> capacity and buffering, we can apply that information into our own router’s
>> buffer/queue strategy. I have taken a quick look at the document, and there
>> is one assumption that is wrong, AFAIK, which is that the satellites are
>> “bent pipes” - they actually process packets, which means they are an
>> active component of buffering and queuing in the path (for optical links to
>> work, satellites would necessarily have to process packets).
>>
>> At any given time, a satellite may have ~10k cells under its footprint
>> that it can serve, and GSO exclusions can exclude between 15% at high
>> latitudes and 25% near the Equator. It has three arrays for downlinks and
>> one array for uplinks, with 16 beams per array. Thus, it can train up to 48
>> beams towards the ground, and receive from up to 16 locations on the ground
>> at any given time. Thus, to cover each cell, there is a physical TDM
>> element, plus an additional multiplexing element for each terminal in each
>> cell. If we assume a satellite has a capacity of ~20 Gbps, limited by the
>> Ka links from satellite to gateway, each spot beam is capable of delivering
>> ~416 Mbps. If we assume symmetrical performance, the
>> 16 uplink beams from the one panel would “take in” ~416 Mbps per beam.
>>
>> Thus, a satellite is effectively limited to a 75/25 duty cycle between
>> downlink and uplink, as it can paint each cell three times for download,
>> for every time it receives from it.
>>
>> We then get to the killer - how many cells can a satellite effectively
>> serve? If we take 8000 cells as an average, each downlink spot beam would
>> need to care for ~167 cells, and in uplink ~500 cells. Focusing on uplink,
>> each cell gets ~2ms of “beam time” to transmit, which is ridiculously low.
>> From my tracker simulations, certain areas are served by around 8
>> satellites at any given time, so catering for satellite spacing, we could
>> assume 10ms of “beam time”. Still too low, as this would only allow for ~170
>> packets.
>>
>> My current estimate from back of the envelope calculations is that
>> Starlink uses cell TDM factors per beam of 1 to 5, depending on served
>> customer density under the footprint. Thus, worst case, a satellite is
>> giving a cell ~200ms of downlink time and ~67ms of uplink time. A Dishy
>> terminal will need to buffer traffic while it gets its turn on a spot beam.
>>
>> Does this make sense, also from observations?
>>
>> Best,
>>
>> Mike
>> On Jun 9, 2021, 11:12 +0200, Dave Taht <davet@teklibre.net>, wrote:
>>
>> Dear Mike:
>>
>> The biggest thing we need is something that can pull the right stats off
>> the dishy and dynamically adjust cake (on outbound at least) to have the
>> right amount of buffering for the available bandwidth, so as to make for
>> better statistical multiplexing (FQ) and active queue management (AQM)
>>
>> It’s pretty simple: in mangled shell script syntax:
>>
>> while true; do
>>   read -r up down < <(getstats)   # hypothetical helper printing "up down"
>>   tc qdisc change dev eth0 root cake bandwidth "$up"
>>   tc qdisc change dev ifb0 root cake bandwidth "$down"
>> done
>>
>> Which any router directly in front of the dishy can do (which is what
>> we’ve been doing)
>>
>> But whatever magic “getstats()” would need to do is unclear from the stats
>> we get out of it, and a better alternative would be for the dishy itself
>> and their headends to be doing this with “BQL” backpressure.
>>
>> As for the huge reductions of latency and jitter under working load, and a
>> vast improvement in QoE - for what we’ve been able to achieve thus far, see
>> appendix A here:
>>
>>
>> https://docs.google.com/document/d/1rVGC-iNq2NZ0jk4f3IAiVUHz2S9O6P-F3vVZU2yBYtw/edit?usp=sharing
>>
>> We’ve got plenty more data
>> on uploads and downloads and other forms of traffic (starlink is
>> optimizing for ping, only, over ipv6. Sigh)…
>>
>> … and a meeting with some starlink execs at 11AM today.
>>
>> I’m pretty sure at this point we will be able to make a massive
>> improvement in starlink’s network design very quickly, after that meeting.
>>
>> On Jun 8, 2021, at 2:54 PM, Nathan Owens <nathan@nathan.io> wrote:
>>
>> I invited Mike, the creator of the site (starlink.sx) to join the list -
>> he’s put a crazy amount of work in to figure out which sats are active
>> (with advice from Jonathan McDowell), programming GSO exclusion bands, etc.
>> His dayjob is in the ISP business.
>>
>> On Sat, Jun 5, 2021 at 8:31 PM Darrell Budic <budic@onholyground.com>
>> wrote:
>>
>>> https://starlink.sx if you haven’t seen it yet. You can locate yourself,
>>> and it will make some educated guesses about which satellite to which
>>> ground station you’re using. Interesting to see the birds change and the
>>> links move between ground stations, lots going on to make these things work.
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 10:20 ` Mikael Abrahamsson
@ 2021-06-09 16:39 ` Michael Richardson
2021-06-09 18:10 ` David Lang
0 siblings, 1 reply; 17+ messages in thread
From: Michael Richardson @ 2021-06-09 16:39 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: Dave Taht, starlink
[-- Attachment #1: Type: text/plain, Size: 855 bytes --]
Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> Would be nice if there were a generic mechanism for this, but since several of
> these devices use L2, I think it'd have to be something LLDP-like (also on
> L2).
Yes, that makes sense, or using a PPP LCP attribute for DSL links.
> Dave, how often does information regarding rate/scheduler need to be
> distributed from the scheduling node to the node that is trying to not use
> the upstream buffer? I presume this is in the 0.1 to 1s range, because the
> scheduler might change quite frequently and substantially?
I wonder if LLDP sends updates that frequently.
I'm told that on multi-port (like 48-port) ethernet switches, the LLDP
channel from fabric to control plane is rate-limited to around O(10) packets/s!
This might be irrelevant to the applications we are imagining.
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 16:39 ` Michael Richardson
@ 2021-06-09 18:10 ` David Lang
0 siblings, 0 replies; 17+ messages in thread
From: David Lang @ 2021-06-09 18:10 UTC (permalink / raw)
To: Michael Richardson; +Cc: Mikael Abrahamsson, starlink
an earlier post said dishy evaluates its throughput 15 times per second; one
packet can provide the info, so O(10) per second is not unreasonable
David Lang
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 15:23 ` Mike Puchol
@ 2021-06-09 21:18 ` Michael Richardson
2021-06-09 21:36 ` Nathan Owens
0 siblings, 1 reply; 17+ messages in thread
From: Michael Richardson @ 2021-06-09 21:18 UTC (permalink / raw)
To: Mike Puchol; +Cc: Nathan Owens, Dave Taht, starlink
[-- Attachment #1: Type: text/plain, Size: 2208 bytes --]
Mike Puchol <mike@starlink.sx> wrote:
> This is correct, with a few twists once you throw in inter-satellite
> links. In future satellite versions, optical
> links will allow satellites within the same orbital plane to use each
> other as relays, thus providing coverage in areas not within a
> gateway’s coverage.
This would seem to wind up overloading the downlink to the gateway, as well
as causing hard-to-predict fluctuations in bandwidth. This is definitely a
complex situation, in which I can see why adding buffers looks like a good
cure-all.
Allowing for direct terminal-to-terminal traffic would ultimately help,
as many latency-sensitive things like gaming and video calls are often
rather local.
mcr> (Also, we talk about uplink/downlink from the point of view of the the end
mcr> user station. But, are there better terms from the satellite's point of
mcr> view to distinguish traffic to/from the end user?)
> In general, downlink is anything from satellite to ground, be it
> satellite -> gateway or satellite -> terminal, and uplink the reverse
> path. These are the clearest terms to use IMHO. Thus, if satellite to
> terminal has 75/25 DL/UL duty cycle, the satellite to gateway link will
> be reversed, with 25/75 DL/UL duty cycle.
Yeah, so in order to speak usefully about some of this stuff, I think we need
to distinguish traffic going "up" towards the Gateway
from traffic which might be going "up" from the Gateway (or across from
another satellite). Some additional terms would help. I had hoped that
there were some :-)
Do you know how traffic is being steered? I.e., how does the Gateway say
which terminal the traffic is destined for? All we know is that tweet: "Simpler than IPv6"
Some kind of SDN, but based upon what kind of discriminators?
Are there circuits involved (ala ATM or PPPoE), tags like MPLS or 802.1Q?
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | IoT architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 21:18 ` Michael Richardson
@ 2021-06-09 21:36 ` Nathan Owens
2021-06-09 23:37 ` Mike Puchol
0 siblings, 1 reply; 17+ messages in thread
From: Nathan Owens @ 2021-06-09 21:36 UTC (permalink / raw)
To: Michael Richardson; +Cc: Mike Puchol, Dave Taht, starlink
[-- Attachment #1: Type: text/plain, Size: 3280 bytes --]
> This would seem to wind up overloading the downlink to the gateway, as
well
as causing hard to predict fluctuations in bandwidth. This is definitely a
complex situation where I can see buffers being added looks like a good
cure-all.
We'll find out soon enough... Polar launches start in July.
> Do you know how traffic is being steered? I.e. how does the Gateway say
which terminal traffic is to? All we know is that tweet "Simpler than IPv6"
Some kind of SDN, but based upon what kind of discriminators?
Are there circuits involved (ala ATM or PPPoE), tags like MPLS or 802.1Q?
My intuition would be that traffic is encap'd when it enters a PoP based on
the destination IP and the state of the constellation. The encapsulation
could just be MPLS with a segment-routing-like approach; it would contain
the desired gateway, satellite, and terminal ID.
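Nathan's guess can be sketched as a tiny label-stack model (purely speculative; the field names and the MPLS/segment-routing framing are assumptions, not anything SpaceX has documented):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PathLabel:
    """Hypothetical source-routing label pushed at the PoP (speculative)."""
    gateway_id: int    # gateway that should uplink the packet
    satellite_id: int  # satellite that relays it
    terminal_id: int   # user terminal that receives it

def encapsulate(dst_ip: str, constellation_state: dict) -> PathLabel:
    """Build the label from a snapshot of the constellation for a destination IP."""
    gw, sat, term = constellation_state[dst_ip]
    return PathLabel(gw, sat, term)

# Toy snapshot: one terminal reachable via gateway 3 and satellite 1130.
state = {"100.64.0.7": (3, 1130, 88421)}
label = encapsulate("100.64.0.7", state)
print(label)
```

The PoP would recompute these mappings as the constellation moves, which keeps the satellites doing simple label switching rather than full IP routing.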
On Wed, Jun 9, 2021 at 2:18 PM Michael Richardson <mcr@sandelman.ca> wrote:
>
> Mike Puchol <mike@starlink.sx> wrote:
> > This is correct, with a few twists once you throw in inter-satellite
> > links. In future satellite versions, optical
> > links will allow satellites within the same orbital plane to use each
> > other as relays, thus providing coverage in areas not within a
> > gateway’s coverage.
>
> This would seem to wind up overloading the downlink to the gateway, as well
> as causing hard to predict fluctuations in bandwidth. This is definitely
> a
> complex situation where I can see buffers being added looks like a good
> cure-all.
>
> Allowing for direct terminal to terminal traffic would ultimately help
> as many of the latency sensitive things like gaming and video calls are
> often
> rather local.
>
> mcr> (Also, we talk about uplink/downlink from the point of view of
> the the end
> mcr> user station. But, are there better terms from the satellite's
> point of
> mcr> view to distinguish traffic to/from the end user?)
>
> > In general, downlink is anything from satellite to ground, be it
> > satellite -> gateway or satellite -> terminal, and uplink the reverse
> > path. These are the clearest terms to use IMHO. Thus, if satellite to
> > terminal has 75/25 DL/UL duty cycle, the satellite to gateway link
> will
> > be reversed, with 25/75 DL/UL duty cycle.
>
> Yeah, so in order to speak usefully about some of this stuff, I think we
> need
> to distinguish between traffic going "up" which is going towards the
> Gateway,
> from traffic which might be going "up" from the Gateway (or across from
> another satellite). Some additional terms would help. I had hoped that
> there were some :-)
>
> Do you know how traffic is being steered? I.e. how does the Gateway say
> which terminal traffic is to? All we know is that tweet "Simpler than
> IPv6"
> Some kind of SDN, but based upon what kind of discriminators?
> Are there circuits involved (ala ATM or PPPoE), tags like MPLS or 802.1Q?
>
> --
> ] Never tell me the odds! | ipv6 mesh
> networks [
> ] Michael Richardson, Sandelman Software Works | IoT
> architect [
> ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on
> rails [
>
>
[-- Attachment #2: Type: text/html, Size: 4303 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [Starlink] dynamically adjusting cake to starlink
2021-06-09 21:36 ` Nathan Owens
@ 2021-06-09 23:37 ` Mike Puchol
0 siblings, 0 replies; 17+ messages in thread
From: Mike Puchol @ 2021-06-09 23:37 UTC (permalink / raw)
To: Michael Richardson, Nathan Owens; +Cc: Dave Taht, starlink
[-- Attachment #1: Type: text/plain, Size: 4706 bytes --]
We really don’t have a lot of information as to how traffic is encapsulated and switched/routed from POP to gateway to satellite to terminal. I believe there is no need for anyone to reinvent the wheel when there are existing protocols that can be used for this sort of thing.
I’m placing my bets on cellular networks, where the concept of LTE UE Context matches quite a few things Starlink would be required to do, such as dynamically change allocation of radio resources, and mapping a subscriber to a physical cell and logical channel structure.
There is not a lot Starlink can do to predict network traffic; they can only hope that as the subscriber base grows, the peaks will “even out”, as we see in our networks. Here are some 8,000 CPEs at a 4 Mbps rate limit:
Traffic shifts, other than those due to dramatic events (a whole area losing power), are in the 1-5% region. In contrast, this is 220 CPEs at the same 4 Mbps rate limit, where the shifts are in the 20-35% range:
Thus, Starlink needs to put more effort into predicting spot beam traffic patterns than, for example, gateway-to-POP traffic.
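The smoothing described here is what simple statistical multiplexing predicts: if per-CPE demand is roughly independent, relative fluctuations shrink like 1/sqrt(N). A quick sketch against the numbers above (an approximation, not a model of Starlink's actual traffic):

```python
import math

# Relative fluctuation of an aggregate of N independent sources ~ 1/sqrt(N).
n_small, n_large = 220, 8000
smoothing = math.sqrt(n_large / n_small)
print(round(smoothing, 1))                 # ~6.0x smoother at 8000 CPEs

# Observed 20-35% swings at 220 CPEs predict roughly 3-6% at 8000:
for swing in (0.20, 0.35):
    print(f"{swing / smoothing:.1%}")      # 3.3% and 5.8%
```

That lines up with the 1-5% shifts observed at 8,000 CPEs, which supports the expectation that peaks "even out" as the subscriber base grows.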
On Jun 9, 2021, 23:37 +0200, Nathan Owens <nathan@nathan.io>, wrote:
> > This would seem to wind up overloading the downlink to the gateway, as well
> as causing hard to predict fluctuations in bandwidth. This is definitely a
> complex situation where I can see buffers being added looks like a good
> cure-all.
>
> We'll find out soon enough... Polar launches start in July.
>
> > Do you know how traffic is being steered? I.e. how does the Gateway say
> which terminal traffic is to? All we know is that tweet "Simpler than IPv6"
> Some kind of SDN, but based upon what kind of discriminators?
> Are there circuits involved (ala ATM or PPPoE), tags like MPLS or 802.1Q?
>
> My intuition would be that traffic is encap'd when it enters a PoP based on the destination IP, and the state of the constellation. The encapsulation could just be MPLS with a segment routing-like approach, it would contain the desired gateway, satellite, and terminal ID.
>
> > On Wed, Jun 9, 2021 at 2:18 PM Michael Richardson <mcr@sandelman.ca> wrote:
> > >
> > > Mike Puchol <mike@starlink.sx> wrote:
> > > > This is correct, with a few twists once you throw in inter-satellite
> > > > links. In future satellite versions, optical
> > > > links will allow satellites within the same orbital plane to use each
> > > > other as relays, thus providing coverage in areas not within a
> > > > gateway’s coverage.
> > >
> > > This would seem to wind up overloading the downlink to the gateway, as well
> > > as causing hard to predict fluctuations in bandwidth. This is definitely a
> > > complex situation where I can see buffers being added looks like a good
> > > cure-all.
> > >
> > > Allowing for direct terminal to terminal traffic would ultimately help
> > > as many of the latency sensitive things like gaming and video calls are often
> > > rather local.
> > >
> > > mcr> (Also, we talk about uplink/downlink from the point of view of the the end
> > > mcr> user station. But, are there better terms from the satellite's point of
> > > mcr> view to distinguish traffic to/from the end user?)
> > >
> > > > In general, downlink is anything from satellite to ground, be it
> > > > satellite -> gateway or satellite -> terminal, and uplink the reverse
> > > > path. These are the clearest terms to use IMHO. Thus, if satellite to
> > > > terminal has 75/25 DL/UL duty cycle, the satellite to gateway link will
> > > > be reversed, with 25/75 DL/UL duty cycle.
> > >
> > > Yeah, so in order to speak usefully about some of this stuff, I think we need
> > > to distinguish between traffic going "up" which is going towards the Gateway,
> > > from traffic which might be going "up" from the Gateway (or across from
> > > another satellite). Some additional terms would help. I had hoped that
> > > there were some :-)
> > >
> > > Do you know how traffic is being steered? I.e. how does the Gateway say
> > > which terminal traffic is to? All we know is that tweet "Simpler than IPv6"
> > > Some kind of SDN, but based upon what kind of discriminators?
> > > Are there circuits involved (ala ATM or PPPoE), tags like MPLS or 802.1Q?
> > >
> > > --
> > > ] Never tell me the odds! | ipv6 mesh networks [
> > > ] Michael Richardson, Sandelman Software Works | IoT architect [
> > > ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
> > >
[-- Attachment #2.1: Type: text/html, Size: 6511 bytes --]
[-- Attachment #2.2: Screen Shot 2021-06-10 at 01.27.16.png --]
[-- Type: image/png, Size: 42051 bytes --]
[-- Attachment #2.3: Screen Shot 2021-06-10 at 01.27.35.png --]
[-- Type: image/png, Size: 68598 bytes --]
^ permalink raw reply [flat|nested] 17+ messages in thread
end of thread, other threads:[~2021-06-09 23:37 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-06 3:31 [Starlink] pretty cool starlink visualizer Darrell Budic
2021-06-06 4:26 ` David Lang
2021-06-08 21:54 ` Nathan Owens
2021-06-09 9:12 ` [Starlink] dynamically adjusting cake to starlink Dave Taht
2021-06-09 10:20 ` Mikael Abrahamsson
2021-06-09 16:39 ` Michael Richardson
2021-06-09 18:10 ` David Lang
2021-06-09 12:09 ` Nathan Owens
2021-06-09 12:16 ` Mike Puchol
2021-06-09 13:21 ` Dave Taht
2021-06-09 14:12 ` Michael Richardson
2021-06-09 15:23 ` Mike Puchol
2021-06-09 21:18 ` Michael Richardson
2021-06-09 21:36 ` Nathan Owens
2021-06-09 23:37 ` Mike Puchol
2021-06-09 15:32 ` Nathan Owens
2021-06-09 15:46 ` David Lang
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox