Starlink has bufferbloat. Bad.
* [Starlink] Starlink and bufferbloat status?
       [not found] <mailman.3.1625846401.13780.starlink@lists.bufferbloat.net>
@ 2021-07-09 18:40 ` David P. Reed
  2021-07-09 18:45   ` Nathan Owens
                     ` (3 more replies)
  0 siblings, 4 replies; 37+ messages in thread
From: David P. Reed @ 2021-07-09 18:40 UTC (permalink / raw)
  To: starlink



Early measurements of performance of Starlink have shown significant bufferbloat, as Dave Taht has shown.
 
But...  Starlink is a moving target. The bufferbloat isn't a hardware issue; it should be completely manageable, starting with simple firmware changes inside the Starlink system itself - for example, implementing fq_codel so that bottleneck links drop packets according to the Best Practices RFC.
 
So I'm hoping this has improved since Dave's measurements. How much has it improved? What's the current maximum packet latency under full load? I've heard anecdotally that a friend of a friend gets 84 msec. *ping times under full load*, but he wasn't using flent or some other measurement tool of good quality that gives a true number.
 
84 msec is not great - it's marginal for Zoom quality experience (you want latencies significantly less than 100 msec. as a rule of thumb for teleconferencing quality). But it is better than Dave's measurements showed.
 
Now Musk bragged that his network was "low latency" unlike other high speed services, which means low end-to-end latency.  That got him permission from the FCC to operate Starlink at all. His number was, I think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he probably meant just the time from the ground station to the terminal through the satellite. But I regularly get 17 msec. between California and Massachusetts over the public Internet)
 
So 84 might be the current status. That would mean that someone at Starlink might be paying some attention, but it is a long way from what Musk implied.
 
 
PS: I forget the number of the RFC, but the number of packets queued on an egress link should be chosen by taking the hardware bottleneck throughput of the path, combined with an end-to-end Internet underlying delay of about 10 msec. to account for hops between source and destination. Let's say Starlink allocates 50 Mb/sec to each customer and packets are limited to about 10,000 bits (1500 bytes * 8 = 12,000, rounded down), so the outbound queues should be limited to about 0.01 * 50,000,000 / 10,000, which comes out to about 50 packets of buffering from each terminal, total, in the path from terminal to the public Internet, assuming the connection to the public Internet is not a problem.
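The arithmetic in that PS can be sketched as a tiny function (a minimal illustration; the 50 Mb/s allocation, 10 ms delay budget, and ~10,000-bit packets are the figures assumed in the text):

```python
# Egress-queue sizing from the bandwidth-delay product (BDP):
# buffer roughly one BDP's worth of packets at the bottleneck.
def queue_limit_packets(link_bps: float, delay_s: float,
                        packet_bits: int = 10_000) -> int:
    """Packets needed to cover link_bps * delay_s of in-flight data."""
    bdp_bits = link_bps * delay_s
    return round(bdp_bits / packet_bits)

# 50 Mb/s per customer with a 10 ms delay budget, as in the text:
print(queue_limit_packets(50_000_000, 0.010))  # 50 packets
```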
 
 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-09 18:40 ` [Starlink] Starlink and bufferbloat status? David P. Reed
@ 2021-07-09 18:45   ` Nathan Owens
  2021-07-09 19:08   ` Ben Greear
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 37+ messages in thread
From: Nathan Owens @ 2021-07-09 18:45 UTC (permalink / raw)
  To: David P. Reed; +Cc: starlink


I haven't done detailed testing, but anecdotally, there haven't been any
changes I've noticed. A few times, it's seemed worse, with latency
increasing to 700-900ms for several seconds after starting an upload,
before returning to ~30-150ms.

On Fri, Jul 9, 2021 at 11:40 AM David P. Reed <dpreed@deepplum.com> wrote:

> Early measurements of performance of Starlink have shown significant
> bufferbloat, as Dave Taht has shown.
>
> [...]



* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-09 18:40 ` [Starlink] Starlink and bufferbloat status? David P. Reed
  2021-07-09 18:45   ` Nathan Owens
@ 2021-07-09 19:08   ` Ben Greear
  2021-07-09 20:08   ` Dick Roy
  2021-07-09 22:58   ` David Lang
  3 siblings, 0 replies; 37+ messages in thread
From: Ben Greear @ 2021-07-09 19:08 UTC (permalink / raw)
  To: starlink

On 7/9/21 11:40 AM, David P. Reed wrote:
> Early measurements of performance of Starlink have shown significant bufferbloat, as Dave Taht has shown.
> 
> [...]
> 
> PS: I forget the number of the RFC, but the number of packets queued on an egress link should be chosen by taking 
> the hardware bottleneck throughput of the path, combined with an end-to-end Internet underlying delay of about 10 
> msec. to account for hops between source and destination. Let's say Starlink allocates 50 Mb/sec to each customer 
> and packets are limited to about 10,000 bits, so the outbound queues should be limited to about 0.01 * 50,000,000 
> / 10,000, which comes out to about 50 packets of buffering from each terminal, total, in the path from terminal 
> to the public Internet, assuming the connection to the public Internet is not a problem.

There is no need to queue more than a single frame IF you can efficiently transmit a single frame and can be fed new frames
as quickly as you want them.  Wifi can do neither of these things, of course, and probably the dish can't either,
so you will need to buffer some stuff.  For WiFi, for best throughput, you want to send larger AMPDU chains, so you may want
to buffer up to 64 or so frames per TID and per user.  That is too much buffering if you have 100 stations each using 4 TIDs,
though, so then you start making tradeoffs of throughput vs. latency - maybe forcing all frames to the same station onto the
same TID for better aggregation, etc.
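The scale of that tradeoff can be checked with a quick back-of-the-envelope calculation (the 64-frame, 100-station, 4-TID figures are the ones in the text; the 1500-byte frame size and 100 Mb/s delivered rate are illustrative assumptions):

```python
# Worst-case AP buffering if every per-(station, TID) queue fills.
# 64 frames per queue, 100 stations, 4 TIDs each (figures from the text).
frames_per_queue = 64
stations = 100
tids_per_station = 4

total_frames = frames_per_queue * stations * tids_per_station
print(total_frames)  # 25600 frames queued in the worst case

# At an assumed 100 Mb/s of delivered airtime and 1500-byte frames,
# draining that backlog takes seconds, not milliseconds:
frame_bits = 1500 * 8
drain_ms = total_frames * frame_bits / 100e6 * 1000
print(round(drain_ms))  # 3072 ms of queueing delay
```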

There is no perfect answer to this in general.  If you are just trying to stream movies over wifi to people on a plane, then
latency matters very little and you use all the buffers you can.  If you have a call center using VoIP over wifi, then
throughput doesn't matter much and instead you optimize for latency.  And for everyone else, you pick something in
the middle.

Queueing in the AP and dish shouldn't care at all about total latency;
that is more of a TCP windowing issue.  TCP should definitely care about total latency.

And this is all my opinion of course...

Thanks,
Ben



-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com



* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-09 18:40 ` [Starlink] Starlink and bufferbloat status? David P. Reed
  2021-07-09 18:45   ` Nathan Owens
  2021-07-09 19:08   ` Ben Greear
@ 2021-07-09 20:08   ` Dick Roy
  2021-07-09 22:58   ` David Lang
  3 siblings, 0 replies; 37+ messages in thread
From: Dick Roy @ 2021-07-09 20:08 UTC (permalink / raw)
  To: 'David P. Reed', starlink


 

 

From: Starlink [mailto:starlink-bounces@lists.bufferbloat.net] On Behalf Of David P. Reed
Sent: Friday, July 9, 2021 11:40 AM
To: starlink@lists.bufferbloat.net
Subject: [Starlink] Starlink and bufferbloat status?

 

[...]

Now Musk bragged that his network was "low latency" unlike other high speed
services, which means low end-to-end latency.  That got him permission from
the FCC to operate Starlink at all. His number was, I think, < 5 msec. 84 is
a lot more than 5. (I didn't believe 5, because he probably meant just the
time from the ground station to the terminal through the satellite. 

[RR] So you are saying Musk might have used "artistic license" to get a
license out of the FCC? Shocking!  Never heard that before!  I thought
everyone simply told the FCC the truth and got licenses based on technical
merit, not marketing BS! Well, as the old saying goes: "If you can't dazzle
them with your brilliance, baffle them with your bullshit!"

 

:^))))

 

But I regularly get 17 msec. between California and Massachusetts over the
public Internet)

 

[...]



* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-09 18:40 ` [Starlink] Starlink and bufferbloat status? David P. Reed
                     ` (2 preceding siblings ...)
  2021-07-09 20:08   ` Dick Roy
@ 2021-07-09 22:58   ` David Lang
  2021-07-09 23:07     ` Daniel AJ Sokolov
  2021-07-16 10:21     ` Wheelock, Ian
  3 siblings, 2 replies; 37+ messages in thread
From: David Lang @ 2021-07-09 22:58 UTC (permalink / raw)
  To: David P. Reed; +Cc: starlink

IIRC, the definition of 'low latency' for the FCC was something like 100ms, and 
Musk was predicting <40ms.

roughly competitive with landlines, and worlds better than geostationary 
satellite (and many wireless ISPs)

but when doing any serious testing of latency, you need to be wired to the 
router; wifi introduces so much variability that it swamps the signal.

David Lang

On Fri, 9 Jul 2021, David P. Reed wrote:

> Early measurements of performance of Starlink have shown significant
> bufferbloat, as Dave Taht has shown.
>
> [...]

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-09 22:58   ` David Lang
@ 2021-07-09 23:07     ` Daniel AJ Sokolov
  2021-07-10 15:58       ` Dave Taht
  2021-07-16 10:21     ` Wheelock, Ian
  1 sibling, 1 reply; 37+ messages in thread
From: Daniel AJ Sokolov @ 2021-07-09 23:07 UTC (permalink / raw)
  To: starlink

On 2021-07-09 at 3:58 p.m., David Lang wrote:
> IIRC, the definition of 'low latency' for the FCC was something like 
> 100ms, and Musk was predicting <40ms.

Indeed, the Rural Digital Opportunity Fund Auction defined "Low Latency" 
as "<100 ms", and "High Latency" as "≤ 750 ms & MOS of ≥4".

MOS is the Mean Opinion Score, which takes latency and jitter into 
account. For "Low Latency", jitter/MOS was not defined. MOS goes from 1 
(Bad) to 5 (Excellent).

https://www.fcc.gov/auction/904/factsheet

FYI
Daniel


* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-09 23:07     ` Daniel AJ Sokolov
@ 2021-07-10 15:58       ` Dave Taht
  0 siblings, 0 replies; 37+ messages in thread
From: Dave Taht @ 2021-07-10 15:58 UTC (permalink / raw)
  To: Daniel AJ Sokolov; +Cc: starlink


On Fri, Jul 9, 2021 at 4:07 PM Daniel AJ Sokolov <daniel@sokolov.eu.org> wrote:
>
> On 2021-07-09 at 3:58 p.m., David Lang wrote:
> > IIRC, the definition of 'low latency' for the FCC was something like
> > 100ms, and Musk was predicting <40ms.
>
> Indeed, the Rural Digital Opportunity Fund Auction defined "Low Latency"
> as "<100 ms", and "High Latency" as "≤ 750 ms & MOS of ≥4".

Except that it didn't define it under "normal working conditions" - as in
having any load on that network - nor did it recognize that above 40Mbit,
the bufferbloat generally moves to the wifi.

(Not just to pick on starlink: most wifi setups are pretty broken, as are
most isps - as soon as you put a load on the network, even a single upload
flow messes up voip and videoconferencing.)

so my hope is to move the FCC bar up in the coming months (I was just invited
to join BITAG). I do hope that we see better routers being mandated as a result
in the next funding round.

> MOS is the Mean Opinion Score, which takes latency and jitter into
> account. For "Low Latency", jitter/MOS was not defined. MOS goes from 1
> (Bad) to 5 (Excellent).

MOS is one of my favorite metrics, along with page load time.

The attached graph shows what we now achieve with airtime fairness and
the fq_codel scheduler and AQM
algorithms, in 5 shipping wifi chipsets, under load, and with multiple
stations present: from a MOS of 1 to over 4.
From the paper:

https://www.usenix.org/system/files/conference/atc17/atc17-hoiland-jorgensen.pdf

>
> https://www.fcc.gov/auction/904/factsheet
>
> FYI
> Daniel
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink



-- 
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC

[-- Attachment #2: fq_codel_wifi_mos.png --]
[-- Type: image/png, Size: 57583 bytes --]


* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-09 22:58   ` David Lang
  2021-07-09 23:07     ` Daniel AJ Sokolov
@ 2021-07-16 10:21     ` Wheelock, Ian
  2021-07-16 17:08       ` David Lang
  1 sibling, 1 reply; 37+ messages in thread
From: Wheelock, Ian @ 2021-07-16 10:21 UTC (permalink / raw)
  To: David Lang, David P. Reed; +Cc: starlink


Hi David
In terms of the latency that David (Reed) mentioned for California to Massachusetts of about 17ms over the public internet, that seems a bit faster than I would expect. My own traceroute via my VDSL link shows 14ms just to get out of the operator network.

https://www.wondernetwork.com  is a handy tool for checking geographic ping perf between cities, and it shows a min of about 66ms for pings between Boston and San Diego https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for 1-way transfer). 

Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of light (through a pure fibre link of that distance) the propagation time is just over 20ms. If the network equipment between Boston and San Diego is factored in, with some buffering along the way, 33ms does seem quite reasonable against the 20ms speed-of-light-in-fibre time for that 1-way transfer.
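That propagation estimate is easy to reproduce (the ~4,100 km distance and the 2/3-of-c fibre factor are the figures from the paragraph above):

```python
# One-way light-in-fibre propagation delay, Boston to San Diego.
C_KM_S = 299_792.458        # speed of light in vacuum, km/s
distance_km = 4_100         # approximate fibre path length
v_fibre = (2 / 3) * C_KM_S  # light in fibre travels at roughly 2/3 c

one_way_ms = distance_km / v_fibre * 1000
print(round(one_way_ms, 1))  # 20.5 ms, matching "just over 20ms" above
```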

-Ian Wheelock

From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of David Lang <david@lang.hm>
Date: Friday 9 July 2021 at 23:59
To: "David P. Reed" <dpreed@deepplum.com>
Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] Starlink and bufferbloat status?

IIRC, the definition of 'low latency' for the FCC was something like 100ms, and 
Musk was predicting <40ms.

[...]
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink




* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 10:21     ` Wheelock, Ian
@ 2021-07-16 17:08       ` David Lang
  2021-07-16 17:13         ` Nathan Owens
  0 siblings, 1 reply; 37+ messages in thread
From: David Lang @ 2021-07-16 17:08 UTC (permalink / raw)
  To: Wheelock, Ian; +Cc: David Lang, David P. Reed, starlink


I think it depends on whether you are looking at datacenter-to-datacenter 
latency or home-to-remote-datacenter latency :-)

my rule of thumb for cross US ping time has been 80-100ms latency (but it's been 
a few years since I tested it).

I note that an article I saw today said that Elon is saying that latency will 
improve significantly in the near future, that up/down latency is ~20ms and the 
additional delays pushing it to the 80ms range are 'stupid packet routing' 
problems that they are working on.

If they are still in that level of optimization, it doesn't surprise me that 
they haven't really focused on the bufferbloat issue, they have more obvious 
stuff to fix first.

David Lang


  On Fri, 16 Jul 2021, Wheelock, Ian wrote:

> Date: Fri, 16 Jul 2021 10:21:52 +0000
> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] Starlink and bufferbloat status?
> 
> Hi David
> In terms of the Latency that David (Reed) mentioned for California to Massachusetts of about 17ms over the public internet, it seems a bit faster than what I would expect. My own traceroute via my VDSL link shows 14ms just to get out of the operator network.
>
> https://www.wondernetwork.com  is a handy tool for checking geographic ping perf between cities, and it shows a min of about 66ms for pings between Boston and San Diego https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for 1-way transfer).
>
> Distance wise this is about 4,100 KM (2,500 M), and @2/3 speed of light (through a pure fibre link of that distance) the propagation time is just over 20ms. If the network equipment between the Boston and San Diego is factored in, with some buffering along the way, 33ms does seem quite reasonable over the 20ms for speed of light in fibre for that 1-way transfer
>
> -Ian Wheelock
>
> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of David Lang <david@lang.hm>
> Date: Friday 9 July 2021 at 23:59
> To: "David P. Reed" <dpreed@deepplum.com>
> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] Starlink and bufferbloat status?
>
> IIRC, the definition of 'low latency' for the FCC was something like 100ms, and Musk was predicting <40ms. roughly competitive with landlines, and worlds better than geostationary satellite (and many
> External (mailto:david@lang.hm)
>   https://shared.outlook.inky.com/report?id=Y29tbXNjb3BlL2lhbi53aGVlbG9ja0Bjb21tc2NvcGUuY29tL2I1MzFjZDA4OTZmMWI0Yzc5NzdiOTIzNmY3MTAzM2MxLzE2MjU4NzE1NDkuNjU=#key=19e8545676e28e577c813de83a4cf1dc  https://www.inky.com/banner-faq/  https://www.inky.com
>
> IIRC, the definition of 'low latency' for the FCC was something like 100ms, and
> Musk was predicting <40ms.
>  
> roughly competitive with landlines, and worlds better than geostationary
> satellite (and many wireless ISPs)
>  
> but when doing any serious testing of latency, you need to be wired to the
> router, wifi introduces so much variability that it swamps the signal.
>  
> David Lang
>  
> On Fri, 9 Jul 2021, David P. Reed wrote:
>  
>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>> From: David P. Reed <dpreed@deepplum.com>
>> To: starlink@lists.bufferbloat.net
>> Subject: [Starlink] Starlink and bufferbloat status?
>>
>>
>> Early measurements of performance of Starlink have shown significant bufferbloat, as Dave Taht has shown.
>>
>> But...  Starlink is a moving target. The bufferbloat isn't a hardware issue, it should be completely manageable, starting by simple firmware changes inside the Starlink system itself. For example, implementing fq_codel so that bottleneck links just drop packets according to the Best Practices RFC,
>>
>> So I'm hoping this has improved since Dave's measurements. How much has it improved? What's the current maximum packet latency under full load,  Ive heard anecdotally that a friend of a friend gets 84 msec. *ping times under full load*, but he wasn't using flent or some other measurement tool of good quality that gives a true number.
>>
>> 84 msec is not great - it's marginal for Zoom quality experience (you want latencies significantly less than 100 msec. as a rule of thumb for teleconferencing quality). But it is better than Dave's measurements showed.
>>
>> Now Musk bragged that his network was "low latency" unlike other high speed services, which means low end-to-end latency.  That got him permission from the FCC to operate Starlink at all. His number was, I think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he probably meant just the time from the ground station to the terminal through the satellite. But I regularly get 17 msec. between California and Massachusetts over the public Internet)
>>
>> So 84 might be the current status. That would mean that someone at Srarlink might be paying some attention, but it is a long way from what Musk implied.
>>
>>
>> PS: I forget the number of the RFC, but the number of packets queued on an egress link should be chosen by taking the hardware bottleneck throughput of any path, combined with an end-to-end Internet underlying delay of about 10 msec. to account for hops between source and destination. Lets say Starlink allocates 50 Mb/sec to each customer, packets are limited to 10,000 bits (1500 * 8), so the outbound queues should be limited to about 0.01 * 50,000,000 / 10,000, which comes out to about 250 packets from each terminal of buffering, total, in the path from terminal to public Internet, assuming the connection to the public Internet is not a problem.
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>

[-- Attachment #2: image001.png --]
[-- Type: image/png, Size: 4138 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:08       ` David Lang
@ 2021-07-16 17:13         ` Nathan Owens
  2021-07-16 17:24           ` David Lang
  0 siblings, 1 reply; 37+ messages in thread
From: Nathan Owens @ 2021-07-16 17:13 UTC (permalink / raw)
  To: David Lang; +Cc: Wheelock, Ian, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 7380 bytes --]

Elon said "foolish packet routing" for things over 20ms! Which seems crazy
if you do some basic math:

   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
   - Total one-way delay: 4.3 - 11.1ms
   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms

This includes no transmission delay, queuing delay,
processing/fragmentation/reassembly/etc, and no time-division multiplexing.
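The budget above can be checked in a few lines (just reproducing the figures from this message):

```python
# One-way delay budget, (min_ms, max_ms) per segment, from the list above.
segments = {
    "sat to user terminal": (1.9, 3.3),
    "sat to gateway":       (1.9, 3.3),
    "gateway to PoP":       (0.25, 4.0),
    "PoP to internet":      (0.25, 0.5),
}
one_way_min = sum(lo for lo, _ in segments.values())  # 4.3 ms
one_way_max = sum(hi for _, hi in segments.values())  # 11.1 ms
rtt_min, rtt_max = 2 * one_way_min, 2 * one_way_max   # 8.6 / 22.2 ms
midpoint = (rtt_min + rtt_max) / 2                    # 15.4 ms
print(f"RTT {rtt_min:.1f}-{rtt_max:.1f} ms, midpoint {midpoint:.1f} ms")
```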

On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:

> I think it depends on if you are looking at datacenter-to-datacenter
> latency or
> home to remote datacenter latency :-)
>
> my rule of thumb for cross US ping time has been 80-100ms latency (but
> it's been
> a few years since I tested it).
>
> I note that an article I saw today said that Elon is saying that latency
> will
> improve significantly in the near future, that up/down latency is ~20ms
> and the
> additional delays pushing it to the 80ms range are 'stupid packet routing'
> problems that they are working on.
>
> If they are still in that level of optimization, it doesn't surprise me
> that
> they haven't really focused on the bufferbloat issue, they have more
> obvious
> stuff to fix first.
>
> David Lang
>
>
>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>
> > Date: Fri, 16 Jul 2021 10:21:52 +0000
> > From: "Wheelock, Ian" <ian.wheelock@commscope.com>
> > To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
> > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > Subject: Re: [Starlink] Starlink and bufferbloat status?
> >
> > Hi David
> > In terms of the latency that David (Reed) mentioned for California to
> Massachusetts of about 17ms over the public internet, it seems a bit faster
> than what I would expect. My own traceroute via my VDSL link shows 14ms
> just to get out of the operator network.
> >
> > https://www.wondernetwork.com  is a handy tool for checking geographic
> ping perf between cities, and it shows a min of about 66ms for pings
> between Boston and San Diego
> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
> 1-way transfer).
> >
> > Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of light
> (through a pure fibre link of that distance) the propagation time is just
> over 20ms. If the network equipment between Boston and San Diego is
> factored in, with some buffering along the way, 33ms does seem quite
> reasonable against the 20ms fibre propagation minimum for that 1-way transfer.
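That propagation estimate checks out (a quick sketch, assuming 2/3 c in fibre and the 4,100 km figure above):

```python
C = 299_792_458            # speed of light in vacuum, m/s
distance_m = 4_100e3       # Boston to San Diego, ~4,100 km
fibre_speed = (2 / 3) * C  # typical propagation speed in optical fibre
one_way_s = distance_m / fibre_speed
print(f"{one_way_s * 1e3:.1f} ms one-way")  # ~20.5 ms
```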
> >
> > -Ian Wheelock
> >
> > From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
> David Lang <david@lang.hm>
> > Date: Friday 9 July 2021 at 23:59
> > To: "David P. Reed" <dpreed@deepplum.com>
> > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > Subject: Re: [Starlink] Starlink and bufferbloat status?
> >
> >
> > IIRC, the definition of 'low latency' for the FCC was something like
> 100ms, and
> > Musk was predicting <40ms.
> >
> > roughly competitive with landlines, and worlds better than geostationary
> > satellite (and many wireless ISPs)
> >
> > but when doing any serious testing of latency, you need to be wired to
> the
> > router, wifi introduces so much variability that it swamps the signal.
> >
> > David Lang
> >
> > On Fri, 9 Jul 2021, David P. Reed wrote:
> >
> >> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
> >> From: David P. Reed <dpreed@deepplum.com>
> >> To: starlink@lists.bufferbloat.net
> >> Subject: [Starlink] Starlink and bufferbloat status?
> >>
> >>
> >> Early measurements of performance of Starlink have shown significant
> bufferbloat, as Dave Taht has shown.
> >>
> >> But...  Starlink is a moving target. The bufferbloat isn't a hardware
> issue; it should be completely manageable, starting with simple firmware
> changes inside the Starlink system itself. For example, implementing
> fq_codel so that bottleneck links just drop packets according to the Best
> Practices RFC.
> >>
> >> So I'm hoping this has improved since Dave's measurements. How much has
> it improved? What's the current maximum packet latency under full
> load? I've heard anecdotally that a friend of a friend gets 84 msec. *ping
> times under full load*, but he wasn't using flent or some other measurement
> tool of good quality that gives a true number.
> >>
> >> 84 msec is not great - it's marginal for Zoom quality experience (you
> want latencies significantly less than 100 msec. as a rule of thumb for
> teleconferencing quality). But it is better than Dave's measurements showed.
> >>
> >> Now Musk bragged that his network was "low latency" unlike other high
> speed services, which means low end-to-end latency.  That got him
> permission from the FCC to operate Starlink at all. His number was, I
> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he
> probably meant just the time from the ground station to the terminal
> through the satellite. But I regularly get 17 msec. between California and
> Massachusetts over the public Internet)
> >>
> >> So 84 might be the current status. That would mean that someone at
> Starlink might be paying some attention, but it is a long way from what
> Musk implied.
> >>
> >>
> >> PS: I forget the number of the RFC, but the number of packets queued on
> an egress link should be chosen by taking the hardware bottleneck
> throughput of any path, combined with an end-to-end Internet underlying
> delay of about 10 msec. to account for hops between source and destination.
> Let's say Starlink allocates 50 Mb/sec to each customer, packets are limited
> to 10,000 bits (1500 * 8), so the outbound queues should be limited to
> about 0.01 * 50,000,000 / 10,000, which comes out to about 250 packets from
> each terminal of buffering, total, in the path from terminal to public
> Internet, assuming the connection to the public Internet is not a problem.
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> >
> https://lists.bufferbloat.net/listinfo/starlink
> >
> >_______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>

[-- Attachment #2: Type: text/html, Size: 10237 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:13         ` Nathan Owens
@ 2021-07-16 17:24           ` David Lang
  2021-07-16 17:29             ` Nathan Owens
  2021-07-16 20:51             ` Michael Richardson
  0 siblings, 2 replies; 37+ messages in thread
From: David Lang @ 2021-07-16 17:24 UTC (permalink / raw)
  To: Nathan Owens; +Cc: David Lang, Wheelock, Ian, starlink, David P. Reed

hey, it's a good attitude to have :-)

Elon tends to set 'impossible' goals, miss the timeline a bit, and come very 
close to the goal, if not exceed it.

As there are more satellites, the up/down time will get closer to 4-5ms rather
than the ~7ms you list, and with laser relays in orbit, and terminal to terminal
routing in orbit, there is the potential for the theoretical minimum to tend 
lower, giving some headroom for other overhead but still being in the 20ms 
range.

David Lang

  On Fri, 16 Jul 2021, Nathan Owens wrote:

> Elon said "foolish packet routing" for things over 20ms! Which seems crazy
> if you do some basic math:
>
>   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>   - Total one-way delay: 4.3 - 11.1ms
>   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>
> This includes no transmission delay, queuing delay,
> processing/fragmentation/reassembly/etc, and no time-division multiplexing.
>
> On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>
>> I think it depends on if you are looking at datacenter-to-datacenter
>> latency or
>> home to remote datacenter latency :-)
>>
>> my rule of thumb for cross US ping time has been 80-100ms latency (but
>> it's been
>> a few years since I tested it).
>>
>> I note that an article I saw today said that Elon is saying that latency
>> will
>> improve significantly in the near future, that up/down latency is ~20ms
>> and the
>> additional delays pushing it to the 80ms range are 'stupid packet routing'
>> problems that they are working on.
>>
>> If they are still in that level of optimization, it doesn't surprise me
>> that
>> they haven't really focused on the bufferbloat issue, they have more
>> obvious
>> stuff to fix first.
>>
>> David Lang
>>
>>
>>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>
>>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>
>>> Hi David
>>> In terms of the latency that David (Reed) mentioned for California to
>> Massachusetts of about 17ms over the public internet, it seems a bit faster
>> than what I would expect. My own traceroute via my VDSL link shows 14ms
>> just to get out of the operator network.
>>>
>>> https://www.wondernetwork.com  is a handy tool for checking geographic
>> ping perf between cities, and it shows a min of about 66ms for pings
>> between Boston and San Diego
>> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
>> 1-way transfer).
>>>
>>> Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of light
>> (through a pure fibre link of that distance) the propagation time is just
>> over 20ms. If the network equipment between Boston and San Diego is
>> factored in, with some buffering along the way, 33ms does seem quite
>> reasonable against the 20ms fibre propagation minimum for that 1-way transfer.
>>>
>>> -Ian Wheelock
>>>
>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>> David Lang <david@lang.hm>
>>> Date: Friday 9 July 2021 at 23:59
>>> To: "David P. Reed" <dpreed@deepplum.com>
>>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>
>>>
>>> IIRC, the definition of 'low latency' for the FCC was something like
>> 100ms, and
>>> Musk was predicting <40ms.
>>>
>>> roughly competitive with landlines, and worlds better than geostationary
>>> satellite (and many wireless ISPs)
>>>
>>> but when doing any serious testing of latency, you need to be wired to
>> the
>>> router, wifi introduces so much variability that it swamps the signal.
>>>
>>> David Lang
>>>
>>> On Fri, 9 Jul 2021, David P. Reed wrote:
>>>
>>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>>>> From: David P. Reed <dpreed@deepplum.com>
>>>> To: starlink@lists.bufferbloat.net
>>>> Subject: [Starlink] Starlink and bufferbloat status?
>>>>
>>>>
>>>> Early measurements of performance of Starlink have shown significant
>> bufferbloat, as Dave Taht has shown.
>>>>
>>>> But...  Starlink is a moving target. The bufferbloat isn't a hardware
>> issue; it should be completely manageable, starting with simple firmware
>> changes inside the Starlink system itself. For example, implementing
>> fq_codel so that bottleneck links just drop packets according to the Best
>> Practices RFC.
>>>>
>>>> So I'm hoping this has improved since Dave's measurements. How much has
>> it improved? What's the current maximum packet latency under full
>> load? I've heard anecdotally that a friend of a friend gets 84 msec. *ping
>> times under full load*, but he wasn't using flent or some other measurement
>> tool of good quality that gives a true number.
>>>>
>>>> 84 msec is not great - it's marginal for Zoom quality experience (you
>> want latencies significantly less than 100 msec. as a rule of thumb for
>> teleconferencing quality). But it is better than Dave's measurements showed.
>>>>
>>>> Now Musk bragged that his network was "low latency" unlike other high
>> speed services, which means low end-to-end latency.  That got him
>> permission from the FCC to operate Starlink at all. His number was, I
>> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he
>> probably meant just the time from the ground station to the terminal
>> through the satellite. But I regularly get 17 msec. between California and
>> Massachusetts over the public Internet)
>>>>
>>>> So 84 might be the current status. That would mean that someone at
>> Starlink might be paying some attention, but it is a long way from what
>> Musk implied.
>>>>
>>>>
>>>> PS: I forget the number of the RFC, but the number of packets queued on
>> an egress link should be chosen by taking the hardware bottleneck
>> throughput of any path, combined with an end-to-end Internet underlying
>> delay of about 10 msec. to account for hops between source and destination.
>> Let's say Starlink allocates 50 Mb/sec to each customer, packets are limited
>> to 10,000 bits (1500 * 8), so the outbound queues should be limited to
>> about 0.01 * 50,000,000 / 10,000, which comes out to about 250 packets from
>> each terminal of buffering, total, in the path from terminal to public
>> Internet, assuming the connection to the public Internet is not a problem.
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>>
>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:24           ` David Lang
@ 2021-07-16 17:29             ` Nathan Owens
  2021-07-16 17:31               ` Mike Puchol
  2021-07-16 20:51             ` Michael Richardson
  1 sibling, 1 reply; 37+ messages in thread
From: Nathan Owens @ 2021-07-16 17:29 UTC (permalink / raw)
  To: David Lang; +Cc: Wheelock, Ian, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 8995 bytes --]

> As there are more satellites, the up/down time will get closer to 4-5ms
> rather than the ~7ms you list

Possibly, if you do steering to always jump to the lowest latency
satellite.

> with laser relays in orbit, and terminal to terminal routing in orbit,
> there is the potential for the theoretical minimum to tend lower

Maybe for certain users really in the middle of nowhere, but I did the
best-case math for "bent pipe" in Seattle area, which is as good as it
gets.

On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:

> hey, it's a good attitude to have :-)
>
> Elon tends to set 'impossible' goals, miss the timeline a bit, and come
> very
> close to the goal, if not exceed it.
>
> As there are more satellites, the up/down time will get closer to 4-5ms
> rather
> than the ~7ms you list, and with laser relays in orbit, and terminal to
> terminal
> routing in orbit, there is the potential for the theoretical minimum to
> tend
> lower, giving some headroom for other overhead but still being in the 20ms
> range.
>
> David Lang
>
>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>
> > Elon said "foolish packet routing" for things over 20ms! Which seems
> crazy
> > if you do some basic math:
> >
> >   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
> >   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
> >   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
> >   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
> >   - Total one-way delay: 4.3 - 11.1ms
> >   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
> >
> > This includes no transmission delay, queuing delay,
> > processing/fragmentation/reassembly/etc, and no time-division
> multiplexing.
> >
> > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
> >
> >> I think it depends on if you are looking at datacenter-to-datacenter
> >> latency or
> >> home to remote datacenter latency :-)
> >>
> >> my rule of thumb for cross US ping time has been 80-100ms latency (but
> >> it's been
> >> a few years since I tested it).
> >>
> >> I note that an article I saw today said that Elon is saying that latency
> >> will
> >> improve significantly in the near future, that up/down latency is ~20ms
> >> and the
> >> additional delays pushing it to the 80ms range are 'stupid packet
> routing'
> >> problems that they are working on.
> >>
> >> If they are still in that level of optimization, it doesn't surprise me
> >> that
> >> they haven't really focused on the bufferbloat issue, they have more
> >> obvious
> >> stuff to fix first.
> >>
> >> David Lang
> >>
> >>
> >>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
> >>
> >>> Date: Fri, 16 Jul 2021 10:21:52 +0000
> >>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
> >>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
> >>>
> >>> Hi David
> >>> In terms of the latency that David (Reed) mentioned for California to
> >> Massachusetts of about 17ms over the public internet, it seems a bit
> faster
> >> than what I would expect. My own traceroute via my VDSL link shows 14ms
> >> just to get out of the operator network.
> >>>
> >>> https://www.wondernetwork.com  is a handy tool for checking geographic
> >> ping perf between cities, and it shows a min of about 66ms for pings
> >> between Boston and San Diego
> >> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
> >> 1-way transfer).
> >>>
> >>> Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of light
> >> (through a pure fibre link of that distance) the propagation time is
> just
> >> over 20ms. If the network equipment between Boston and San Diego is
> >> factored in, with some buffering along the way, 33ms does seem quite
> >> reasonable against the 20ms fibre propagation minimum for that 1-way
> transfer.
> >>>
> >>> -Ian Wheelock
> >>>
> >>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
> >> David Lang <david@lang.hm>
> >>> Date: Friday 9 July 2021 at 23:59
> >>> To: "David P. Reed" <dpreed@deepplum.com>
> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
> >>>
> >>>
> >>> IIRC, the definition of 'low latency' for the FCC was something like
> >> 100ms, and
> >>> Musk was predicting <40ms.
> >>>
> >>> roughly competitive with landlines, and worlds better than
> geostationary
> >>> satellite (and many wireless ISPs)
> >>>
> >>> but when doing any serious testing of latency, you need to be wired to
> >> the
> >>> router, wifi introduces so much variability that it swamps the signal.
> >>>
> >>> David Lang
> >>>
> >>> On Fri, 9 Jul 2021, David P. Reed wrote:
> >>>
> >>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
> >>>> From: David P. Reed <dpreed@deepplum.com>
> >>>> To: starlink@lists.bufferbloat.net
> >>>> Subject: [Starlink] Starlink and bufferbloat status?
> >>>>
> >>>>
> >>>> Early measurements of performance of Starlink have shown significant
> >> bufferbloat, as Dave Taht has shown.
> >>>>
> >>>> But...  Starlink is a moving target. The bufferbloat isn't a hardware
> >> issue; it should be completely manageable, starting with simple firmware
> >> changes inside the Starlink system itself. For example, implementing
> >> fq_codel so that bottleneck links just drop packets according to the
> Best
> >> Practices RFC.
> >>>>
> >>>> So I'm hoping this has improved since Dave's measurements. How much
> has
> >> it improved? What's the current maximum packet latency under full
> >> load? I've heard anecdotally that a friend of a friend gets 84 msec.
> *ping
> >> times under full load*, but he wasn't using flent or some other
> measurement
> >> tool of good quality that gives a true number.
> >>>>
> >>>> 84 msec is not great - it's marginal for Zoom quality experience (you
> >> want latencies significantly less than 100 msec. as a rule of thumb for
> >> teleconferencing quality). But it is better than Dave's measurements
> showed.
> >>>>
> >>>> Now Musk bragged that his network was "low latency" unlike other high
> >> speed services, which means low end-to-end latency.  That got him
> >> permission from the FCC to operate Starlink at all. His number was, I
> >> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because
> he
> >> probably meant just the time from the ground station to the terminal
> >> through the satellite. But I regularly get 17 msec. between California
> and
> >> Massachusetts over the public Internet)
> >>>>
> >>>> So 84 might be the current status. That would mean that someone at
> >> Starlink might be paying some attention, but it is a long way from what
> >> Musk implied.
> >>>>
> >>>>
> >>>> PS: I forget the number of the RFC, but the number of packets queued
> on
> >> an egress link should be chosen by taking the hardware bottleneck
> >> throughput of any path, combined with an end-to-end Internet underlying
> >> delay of about 10 msec. to account for hops between source and
> destination.
> >> Let's say Starlink allocates 50 Mb/sec to each customer, packets are
> limited
> >> to 10,000 bits (1500 * 8), so the outbound queues should be limited to
> >> about 0.01 * 50,000,000 / 10,000, which comes out to about 250 packets
> from
> >> each terminal of buffering, total, in the path from terminal to public
> >> Internet, assuming the connection to the public Internet is not a
> problem.
> >>> _______________________________________________
> >>> Starlink mailing list
> >>> Starlink@lists.bufferbloat.net
> >>>
> >>
> https://lists.bufferbloat.net/listinfo/starlink
> >>>
> >>> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/starlink
> >>
> >
>

[-- Attachment #2: Type: text/html, Size: 13078 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:29             ` Nathan Owens
@ 2021-07-16 17:31               ` Mike Puchol
  2021-07-16 17:35                 ` Nathan Owens
  2021-07-16 17:38                 ` David Lang
  0 siblings, 2 replies; 37+ messages in thread
From: Mike Puchol @ 2021-07-16 17:31 UTC (permalink / raw)
  To: David Lang, Nathan Owens; +Cc: starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 10296 bytes --]

Satellite optical links are useful to extend coverage to areas where you don’t have gateways - thus, they will introduce additional latency compared to two space segment hops (terminal to satellite -> satellite to gateway). If you have terminal to satellite, two optical hops, then final satellite to gateway, you will have more latency, not less.

We are being “sold” optical links for what they are not IMHO.
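Rough numbers make the point (a sketch; the 2,000 km inter-satellite hop length is an assumption for illustration, not a Starlink figure):

```python
C_KM_PER_MS = 299.792  # light covers ~300 km per ms in vacuum

def path_delay_ms(*hops_km: float) -> float:
    """One-way propagation delay for a chain of free-space hops."""
    return sum(km / C_KM_PER_MS for km in hops_km)

bent_pipe = path_delay_ms(750, 750)                # terminal -> sat -> gateway
with_lasers = path_delay_ms(750, 2000, 2000, 750)  # plus two assumed optical hops
print(f"bent pipe {bent_pipe:.1f} ms, with laser hops {with_lasers:.1f} ms")
```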

Best,

Mike
On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
> > As there are more satellites, the up/down time will get closer to 4-5ms rather than the ~7ms you list
>
> Possibly, if you do steering to always jump to the lowest latency satellite.
>
> > with laser relays in orbit, and terminal to terminal routing in orbit, there is the potential for the theoretical minimum to tend lower
> Maybe for certain users really in the middle of nowhere, but I did the best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>
> > On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
> > > hey, it's a good attitude to have :-)
> > >
> > > Elon tends to set 'impossible' goals, miss the timeline a bit, and come very
> > > close to the goal, if not exceed it.
> > >
> > > As there are more satellites, the up/down time will get closer to 4-5ms rather
> > > than the ~7ms you list, and with laser relays in orbit, and terminal to terminal
> > > routing in orbit, there is the potential for the theoretical minimum to tend
> > > lower, giving some headroom for other overhead but still being in the 20ms
> > > range.
> > >
> > > David Lang
> > >
> > >   On Fri, 16 Jul 2021, Nathan Owens wrote:
> > >
> > > > Elon said "foolish packet routing" for things over 20ms! Which seems crazy
> > > > if you do some basic math:
> > > >
> > > >   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
> > > >   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
> > > >   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
> > > >   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
> > > >   - Total one-way delay: 4.3 - 11.1ms
> > > >   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
> > > >
> > > > This includes no transmission delay, queuing delay,
> > > > processing/fragmentation/reassembly/etc, and no time-division multiplexing.
> > > >
> > > > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
> > > >
> > > >> I think it depends on if you are looking at datacenter-to-datacenter
> > > >> latency or
> > > >> home to remote datacenter latency :-)
> > > >>
> > > >> my rule of thumb for cross US ping time has been 80-100ms latency (but
> > > >> it's been
> > > >> a few years since I tested it).
> > > >>
> > > >> I note that an article I saw today said that Elon is saying that latency
> > > >> will
> > > >> improve significantly in the near future, that up/down latency is ~20ms
> > > >> and the
> > > >> additional delays pushing it to the 80ms range are 'stupid packet routing'
> > > >> problems that they are working on.
> > > >>
> > > >> If they are still in that level of optimization, it doesn't surprise me
> > > >> that
> > > >> they haven't really focused on the bufferbloat issue, they have more
> > > >> obvious
> > > >> stuff to fix first.
> > > >>
> > > >> David Lang
> > > >>
> > > >>
> > > >>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
> > > >>
> > > >>> Date: Fri, 16 Jul 2021 10:21:52 +0000
> > > >>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
> > > >>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
> > > >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > > >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > >>>
> > > >>> Hi David
> > > >>> In terms of the latency that David (Reed) mentioned for California to
> > > >> Massachusetts of about 17ms over the public internet, it seems a bit faster
> > > >> than what I would expect. My own traceroute via my VDSL link shows 14ms
> > > >> just to get out of the operator network.
> > > >>>
> > > >>> https://www.wondernetwork.com  is a handy tool for checking geographic
> > > >> ping perf between cities, and it shows a min of about 66ms for pings
> > > >> between Boston and San Diego
> > > >> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
> > > >> 1-way transfer).
> > > >>>
> > > >>> Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of light
> > > >> (through a pure fibre link of that distance) the propagation time is just
> > > >> over 20ms. If the network equipment between Boston and San Diego is
> > > >> factored in, with some buffering along the way, 33ms does seem quite
> > > >> reasonable against the 20ms fibre propagation minimum for that 1-way transfer.
> > > >>>
> > > >>> -Ian Wheelock
> > > >>>
> > > >>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
> > > >> David Lang <david@lang.hm>
> > > >>> Date: Friday 9 July 2021 at 23:59
> > > >>> To: "David P. Reed" <dpreed@deepplum.com>
> > > >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > > >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > >>>
> > > >>>
> > > >>> IIRC, the definition of 'low latency' for the FCC was something like
> > > >> 100ms, and
> > > >>> Musk was predicting <40ms.
> > > >>>
> > > >>> roughly competitive with landlines, and worlds better than geostationary
> > > >>> satellite (and many wireless ISPs)
> > > >>>
> > > >>> but when doing any serious testing of latency, you need to be wired to
> > > >> the
> > > >>> router, wifi introduces so much variability that it swamps the signal.
> > > >>>
> > > >>> David Lang
> > > >>>
> > > >>> On Fri, 9 Jul 2021, David P. Reed wrote:
> > > >>>
> > > >>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
> > > >>>> From: David P. Reed <dpreed@deepplum.com>
> > > >>>> To: starlink@lists.bufferbloat.net
> > > >>>> Subject: [Starlink] Starlink and bufferbloat status?
> > > >>>>
> > > >>>>
> > > >>>> Early measurements of performance of Starlink have shown significant
> > > >> bufferbloat, as Dave Taht has shown.
> > > >>>>
> > > >>>> But...  Starlink is a moving target. The bufferbloat isn't a hardware
> > > >> issue; it should be completely manageable, starting with simple firmware
> > > >> changes inside the Starlink system itself. For example, implementing
> > > >> fq_codel so that bottleneck links just drop packets according to the Best
> > > >> Practices RFC.
> > > >>>>
> > > >>>> So I'm hoping this has improved since Dave's measurements. How much has
> > > >> it improved? What's the current maximum packet latency under full
> > > >> load? I've heard anecdotally that a friend of a friend gets 84 msec. *ping
> > > >> times under full load*, but he wasn't using flent or some other measurement
> > > >> tool of good quality that gives a true number.
> > > >>>>
> > > >>>> 84 msec is not great - it's marginal for a quality Zoom experience (you
> > > >> want latencies significantly less than 100 msec. as a rule of thumb for
> > > >> teleconferencing quality). But it is better than Dave's measurements showed.
> > > >>>>
> > > >>>> Now Musk bragged that his network was "low latency" unlike other high
> > > >> speed services, which means low end-to-end latency.  That got him
> > > >> permission from the FCC to operate Starlink at all. His number was, I
> > > >> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he
> > > >> probably meant just the time from the ground station to the terminal
> > > >> through the satellite. But I regularly get 17 msec. between California and
> > > >> Massachusetts over the public Internet)
> > > >>>>
> > > >>>> So 84 might be the current status. That would mean that someone at
> > > >> Starlink might be paying some attention, but it is a long way from what
> > > >> Musk implied.
> > > >>>>
> > > >>>>
> > > >>>> PS: I forget the number of the RFC, but the number of packets queued on
> > > >> an egress link should be chosen by taking the hardware bottleneck
> > > >> throughput of any path, combined with an end-to-end Internet underlying
> > > >> delay of about 10 msec. to account for hops between source and destination.
> > > >> Let's say Starlink allocates 50 Mb/sec to each customer, packets are limited
> > > >> to 12,000 bits (1500 * 8), so the outbound queues should be limited to
> > > >> about 0.01 * 50,000,000 / 12,000, which comes out to about 42 packets from
> > > >> each terminal of buffering, total, in the path from terminal to public
> > > >> Internet, assuming the connection to the public Internet is not a problem.
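The queue-sizing rule above is just a bandwidth-delay product. A minimal sketch, assuming the thread's figures (50 Mb/s per customer, 1500-byte packets, ~10 ms of underlying end-to-end delay):

```python
# Bandwidth-delay-product buffer sizing, per the PS above.
# All figures are the thread's assumptions, not measured Starlink values.
def egress_queue_limit(rate_bps: float, delay_s: float, packet_bits: int) -> int:
    """Packets of buffering needed to cover one bandwidth-delay product."""
    return round(rate_bps * delay_s / packet_bits)

# 50 Mb/s link, 10 ms path delay, 12,000-bit (1500-byte) packets
print(egress_queue_limit(50_000_000, 0.010, 1500 * 8))  # 42 packets
```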
> > > >>> _______________________________________________
> > > >>> Starlink mailing list
> > > >>> Starlink@lists.bufferbloat.net
> > > >>>
> > > >> https://lists.bufferbloat.net/listinfo/starlink
> > > >>>
> > > >>> _______________________________________________
> > > >> Starlink mailing list
> > > >> Starlink@lists.bufferbloat.net
> > > >> https://lists.bufferbloat.net/listinfo/starlink
> > > >>
> > > >
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink

[-- Attachment #2: Type: text/html, Size: 14556 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:31               ` Mike Puchol
@ 2021-07-16 17:35                 ` Nathan Owens
  2021-07-16 17:39                   ` Jonathan Bennett
  2021-07-17 18:36                   ` David P. Reed
  2021-07-16 17:38                 ` David Lang
  1 sibling, 2 replies; 37+ messages in thread
From: Nathan Owens @ 2021-07-16 17:35 UTC (permalink / raw)
  To: Mike Puchol; +Cc: David Lang, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 10226 bytes --]

The other case where they could provide benefit is very long-distance paths:
NY to Tokyo, Johannesburg to London, etc. But presumably at high
cost, as the capacity will likely be much lower than submarine cables.

On Fri, Jul 16, 2021 at 10:31 AM Mike Puchol <mike@starlink.sx> wrote:

> Satellite optical links are useful to extend coverage to areas where you
> don’t have gateways - thus, they will introduce additional latency compared
> to two space segment hops (terminal to satellite -> satellite to gateway).
> If you have terminal to satellite, two optical hops, then final satellite
> to gateway, you will have more latency, not less.
>
> We are being “sold” optical links for what they are not IMHO.
>
> Best,
>
> Mike
> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>
> > As there are more satellites, the up/down time will get closer to 4-5ms
> rather than the ~7ms you list
>
> Possibly, if you do steering to always jump to the lowest latency
> satellite.
>
> > with laser relays in orbit, and terminal to terminal routing in orbit,
> there is the potential for the theoretical minimum to tend lower
> Maybe for certain users really in the middle of nowhere, but I did the
> best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>
> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>
>> hey, it's a good attitude to have :-)
>>
>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come
>> very
>> close to the goal, if not exceed it.
>>
>> As there are more satellites, the up/down time will get closer to 4-5ms
>> rather than the ~7ms you list, and with laser relays in orbit, and
>> terminal-to-terminal routing in orbit, there is the potential for the
>> theoretical minimum to tend lower, giving some headroom for other overhead
>> but still being in the 20ms range.
>>
>> David Lang
>>
>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>
>> > Elon said "foolish packet routing" for things over 20ms! Which seems
>> crazy
>> > if you do some basic math:
>> >
>> >   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>> >   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>> >   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>> >   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>> >   - Total one-way delay: 4.3 - 11.1ms
>> >   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>> >
>> > This includes no transmission delay, queuing delay,
>> > processing/fragmentation/reassembly/etc, and no time-division
>> multiplexing.
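Nathan's delay budget can be re-derived mechanically. A sketch using only the per-leg figures listed above (propagation only, as the thread notes - no queuing, transmission, or multiplexing delay):

```python
# Per-leg one-way delay ranges (ms) as listed in the thread; all assumptions.
legs_ms = {
    "sat_to_terminal": (1.9, 3.3),   # 550-950 km through air/vacuum
    "sat_to_gateway":  (1.9, 3.3),   # 550-950 km through air/vacuum
    "gateway_to_pop":  (0.25, 4.0),  # 50-800 km of fiber
    "pop_to_internet": (0.25, 0.5),  # ~50 km of fiber
}

one_way_min = sum(lo for lo, hi in legs_ms.values())
one_way_max = sum(hi for lo, hi in legs_ms.values())
print(round(one_way_min, 1), round(one_way_max, 1))          # 4.3 11.1
print(round(2 * one_way_min, 1), round(2 * one_way_max, 1))  # 8.6 22.2 (RTT)
```

The midpoint of that RTT range is the ~15.4ms quoted above.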
>> >
>> > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>> >
>> >> I think it depends on if you are looking at datacenter-to-datacenter
>> >> latency or
>> >> home to remote datacenter latency :-)
>> >>
>> >> my rule of thumb for cross US ping time has been 80-100ms latency (but
>> >> it's been
>> >> a few years since I tested it).
>> >>
>> >> I note that an article I saw today said that Elon is saying that
>> latency
>> >> will
>> >> improve significantly in the near future, that up/down latency is ~20ms
>> >> and the
>> >> additional delays pushing it to the 80ms range are 'stupid packet
>> routing'
>> >> problems that they are working on.
>> >>
>> >> If they are still in that level of optimization, it doesn't surprise me
>> >> that
>> >> they haven't really focused on the bufferbloat issue, they have more
>> >> obvious
>> >> stuff to fix first.
>> >>
>> >> David Lang
>> >>
>> >>
>> >>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>> >>
>> >>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>> >>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>> >>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>> >>>
>> >>> Hi David
>> >>> In terms of the Latency that David (Reed) mentioned for California to
>> >> Massachusetts of about 17ms over the public internet, it seems a bit
>> faster
>> >> than what I would expect. My own traceroute via my VDSL link shows 14ms
>> >> just to get out of the operator network.
>> >>>
>> >>> https://www.wondernetwork.com  is a handy tool for checking
>> geographic
>> >> ping perf between cities, and it shows a min of about 66ms for pings
>> >> between Boston and San Diego
>> >> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
>> >> 1-way transfer).
>> >>>
>> >>> Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of
>> light
>> >> (through a pure fibre link of that distance) the propagation time is
>> just
>> >> over 20ms. If the network equipment between Boston and San Diego is
>> >> factored in, with some buffering along the way, 33ms does seem quite
>> >> reasonable over the 20ms for speed of light in fibre for that 1-way
>> transfer.
>> >>>
>> >>> -Ian Wheelock
>> >>>
>> >>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>> >> David Lang <david@lang.hm>
>> >>> Date: Friday 9 July 2021 at 23:59
>> >>> To: "David P. Reed" <dpreed@deepplum.com>
>> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>> >>>
>> >>>
>> >>> IIRC, the definition of 'low latency' for the FCC was something like
>> >> 100ms, and
>> >>> Musk was predicting <40ms.
>> >>>
>> >>> roughly competitive with landlines, and worlds better than
>> geostationary
>> >>> satellite (and many wireless ISPs)
>> >>>
>> >>> but when doing any serious testing of latency, you need to be wired to
>> >> the
>> >>> router, wifi introduces so much variability that it swamps the signal.
>> >>>
>> >>> David Lang
>> >>>
>> >>> On Fri, 9 Jul 2021, David P. Reed wrote:
>> >>>
>> >>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>> >>>> From: David P. Reed <dpreed@deepplum.com>
>> >>>> To: starlink@lists.bufferbloat.net
>> >>>> Subject: [Starlink] Starlink and bufferbloat status?
>> >>>>
>> >>>>
>> >>>> Early measurements of performance of Starlink have shown significant
>> >> bufferbloat, as Dave Taht has shown.
>> >>>>
>> >>>> But...  Starlink is a moving target. The bufferbloat isn't a hardware
>> >> issue; it should be completely manageable, starting with simple firmware
>> >> changes inside the Starlink system itself. For example, implementing
>> >> fq_codel so that bottleneck links just drop packets according to the
>> Best
>> Practices RFC.
>> >>>>
>> >>>> So I'm hoping this has improved since Dave's measurements. How much
>> has
>> >> it improved? What's the current maximum packet latency under full
>> >> load? I've heard anecdotally that a friend of a friend gets 84 msec.
>> *ping
>> >> times under full load*, but he wasn't using flent or some other
>> measurement
>> >> tool of good quality that gives a true number.
>> >>>>
>> >>>> 84 msec is not great - it's marginal for a quality Zoom experience (you
>> >> want latencies significantly less than 100 msec. as a rule of thumb for
>> >> teleconferencing quality). But it is better than Dave's measurements
>> showed.
>> >>>>
>> >>>> Now Musk bragged that his network was "low latency" unlike other high
>> >> speed services, which means low end-to-end latency.  That got him
>> >> permission from the FCC to operate Starlink at all. His number was, I
>> >> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because
>> he
>> >> probably meant just the time from the ground station to the terminal
>> >> through the satellite. But I regularly get 17 msec. between California
>> and
>> >> Massachusetts over the public Internet)
>> >>>>
>> >>>> So 84 might be the current status. That would mean that someone at
>> >> Starlink might be paying some attention, but it is a long way from what
>> >> Musk implied.
>> >>>>
>> >>>>
>> >>>> PS: I forget the number of the RFC, but the number of packets queued
>> on
>> >> an egress link should be chosen by taking the hardware bottleneck
>> >> throughput of any path, combined with an end-to-end Internet underlying
>> >> delay of about 10 msec. to account for hops between source and
>> destination.
>> >> Let's say Starlink allocates 50 Mb/sec to each customer, packets are
>> limited
>> >> to 12,000 bits (1500 * 8), so the outbound queues should be limited to
>> >> about 0.01 * 50,000,000 / 12,000, which comes out to about 42 packets
>> from
>> >> each terminal of buffering, total, in the path from terminal to public
>> >> Internet, assuming the connection to the public Internet is not a
>> problem.
>> >>> _______________________________________________
>> >>> Starlink mailing list
>> >>> Starlink@lists.bufferbloat.net
>> >>>
>> >>
>> https://lists.bufferbloat.net/listinfo/starlink
>> >>>
>> >>> _______________________________________________
>> >> Starlink mailing list
>> >> Starlink@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/starlink
>> >>
>> >
>>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>

[-- Attachment #2: Type: text/html, Size: 14876 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:31               ` Mike Puchol
  2021-07-16 17:35                 ` Nathan Owens
@ 2021-07-16 17:38                 ` David Lang
  2021-07-16 17:42                   ` Mike Puchol
  1 sibling, 1 reply; 37+ messages in thread
From: David Lang @ 2021-07-16 17:38 UTC (permalink / raw)
  To: Mike Puchol; +Cc: David Lang, Nathan Owens, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 10942 bytes --]

the speed of light in a vacuum is significantly better than the speed of light
in fiber, so if you are doing a cross-country hop, terminal -> sat -> sat -> sat
-> ground station (especially if the ground station is in the target datacenter)
can be faster than terminal -> sat -> ground station -> cross-country fiber,
even accounting for the longer distance at 550km altitude than at ground level.
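As a rough illustration of the vacuum-vs-fibre point (the distances here are illustrative assumptions, not actual Starlink path lengths):

```python
# Toy comparison: light in vacuum (inter-satellite lasers) vs ~2/3 c in fibre,
# even when the space path is somewhat longer. Distances are assumptions:
# a 4,000 km ground fibre route vs roughly 4,600 km up/across/down via 550 km sats.
C = 299_792.458  # speed of light in vacuum, km/s

def delay_ms(distance_km: float, speed_km_s: float) -> float:
    return distance_km / speed_km_s * 1000

fiber_ms = delay_ms(4000, C * 2 / 3)   # ~20.0 ms one-way
vacuum_ms = delay_ms(4600, C)          # ~15.3 ms one-way
print(round(fiber_ms, 1), round(vacuum_ms, 1))
```

So the ~50% speed advantage of vacuum can absorb a meaningfully longer path and still come out ahead.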

This has interesting implications for supplementing/replacing undersea cables, as
the sats over the ocean are not going to be heavily used. Dedicated ground
stations could be set up that use sats further offshore than normal (and are
shielded from sats over land) to leverage the system without interfering
significantly with more 'traditional' uses.

David Lang

  On Fri, 16 Jul 2021, Mike Puchol wrote:

> Date: Fri, 16 Jul 2021 19:31:37 +0200
> From: Mike Puchol <mike@starlink.sx>
> To: David Lang <david@lang.hm>, Nathan Owens <nathan@nathan.io>
> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
>     David P. Reed <dpreed@deepplum.com>
> Subject: Re: [Starlink] Starlink and bufferbloat status?
> 
> Satellite optical links are useful to extend coverage to areas where you don’t have gateways - thus, they will introduce additional latency compared to two space segment hops (terminal to satellite -> satellite to gateway). If you have terminal to satellite, two optical hops, then final satellite to gateway, you will have more latency, not less.
>
> We are being “sold” optical links for what they are not IMHO.
>
> Best,
>
> Mike
> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>>> As there are more satellites, the up/down time will get closer to 4-5ms rather than the ~7ms you list
>>
>> Possibly, if you do steering to always jump to the lowest latency satellite.
>>
>>> with laser relays in orbit, and terminal to terminal routing in orbit, there is the potential for the theoretical minimum to tend lower
>> Maybe for certain users really in the middle of nowhere, but I did the best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>>
>>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>>>> hey, it's a good attitude to have :-)
>>>>
>>>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come very
>>>> close to the goal, if not exceed it.
>>>>
>>>> As there are more satellites, the up/down time will get closer to 4-5ms rather
>>>> than the ~7ms you list, and with laser relays in orbit, and terminal-to-terminal
>>>> routing in orbit, there is the potential for the theoretical minimum to tend
>>>> lower, giving some headroom for other overhead but still being in the 20ms
>>>> range.
>>>>
>>>> David Lang
>>>>
>>>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>>>
>>>>> Elon said "foolish packet routing" for things over 20ms! Which seems crazy
>>>>> if you do some basic math:
>>>>>
>>>>>    - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>>>    - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>>>    - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>>>>>    - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>>>>>    - Total one-way delay: 4.3 - 11.1ms
>>>>>    - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>>>>>
>>>>> This includes no transmission delay, queuing delay,
>>>>> processing/fragmentation/reassembly/etc, and no time-division multiplexing.
>>>>>
>>>>> On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>>>>>
>>>>>> I think it depends on if you are looking at datacenter-to-datacenter
>>>>>> latency or
>>>>>> home to remote datacenter latency :-)
>>>>>>
>>>>>> my rule of thumb for cross US ping time has been 80-100ms latency (but
>>>>>> it's been
>>>>>> a few years since I tested it).
>>>>>>
>>>>>> I note that an article I saw today said that Elon is saying that latency
>>>>>> will
>>>>>> improve significantly in the near future, that up/down latency is ~20ms
>>>>>> and the
>>>>>> additional delays pushing it to the 80ms range are 'stupid packet routing'
>>>>>> problems that they are working on.
>>>>>>
>>>>>> If they are still in that level of optimization, it doesn't surprise me
>>>>>> that
>>>>>> they haven't really focused on the bufferbloat issue, they have more
>>>>>> obvious
>>>>>> stuff to fix first.
>>>>>>
>>>>>> David Lang
>>>>>>
>>>>>>
>>>>>>    On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>>>>>
>>>>>>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>>>>>>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>>>>>>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>>>>>>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>>>>>
>>>>>>> Hi David
>>>>>>> In terms of the Latency that David (Reed) mentioned for California to
>>>>>> Massachusetts of about 17ms over the public internet, it seems a bit faster
>>>>>> than what I would expect. My own traceroute via my VDSL link shows 14ms
>>>>>> just to get out of the operator network.
>>>>>>>
>>>>>>> https://www.wondernetwork.com  is a handy tool for checking geographic
>>>>>> ping perf between cities, and it shows a min of about 66ms for pings
>>>>>> between Boston and San Diego
>>>>>> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
>>>>>> 1-way transfer).
>>>>>>>
>>>>>>> Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of light
>>>>>> (through a pure fibre link of that distance) the propagation time is just
>>>>>> over 20ms. If the network equipment between Boston and San Diego is
>>>>>> factored in, with some buffering along the way, 33ms does seem quite
>>>>>> reasonable over the 20ms for speed of light in fibre for that 1-way transfer.
>>>>>>>
>>>>>>> -Ian Wheelock
>>>>>>>
>>>>>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>>>>>> David Lang <david@lang.hm>
>>>>>>> Date: Friday 9 July 2021 at 23:59
>>>>>>> To: "David P. Reed" <dpreed@deepplum.com>
>>>>>>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>>>>>
>>>>>>>
>>>>>>> IIRC, the definition of 'low latency' for the FCC was something like
>>>>>> 100ms, and
>>>>>>> Musk was predicting <40ms.
>>>>>>>
>>>>>>> roughly competitive with landlines, and worlds better than geostationary
>>>>>>> satellite (and many wireless ISPs)
>>>>>>>
>>>>>>> but when doing any serious testing of latency, you need to be wired to
>>>>>> the
>>>>>>> router, wifi introduces so much variability that it swamps the signal.
>>>>>>>
>>>>>>> David Lang
>>>>>>>
>>>>>>> On Fri, 9 Jul 2021, David P. Reed wrote:
>>>>>>>
>>>>>>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>>>>>>>> From: David P. Reed <dpreed@deepplum.com>
>>>>>>>> To: starlink@lists.bufferbloat.net
>>>>>>>> Subject: [Starlink] Starlink and bufferbloat status?
>>>>>>>>
>>>>>>>>
>>>>>>>> Early measurements of performance of Starlink have shown significant
>>>>>> bufferbloat, as Dave Taht has shown.
>>>>>>>>
>>>>>>>> But...  Starlink is a moving target. The bufferbloat isn't a hardware
>>>>>> issue; it should be completely manageable, starting with simple firmware
>>>>>> changes inside the Starlink system itself. For example, implementing
>>>>>> fq_codel so that bottleneck links just drop packets according to the Best
>>>>>> Practices RFC.
>>>>>>>>
>>>>>>>> So I'm hoping this has improved since Dave's measurements. How much has
>>>>>> it improved? What's the current maximum packet latency under full
>>>>>> load? I've heard anecdotally that a friend of a friend gets 84 msec. *ping
>>>>>> times under full load*, but he wasn't using flent or some other measurement
>>>>>> tool of good quality that gives a true number.
>>>>>>>>
>>>>>> 84 msec is not great - it's marginal for a quality Zoom experience (you
>>>>>> want latencies significantly less than 100 msec. as a rule of thumb for
>>>>>> teleconferencing quality). But it is better than Dave's measurements showed.
>>>>>>>>
>>>>>>>> Now Musk bragged that his network was "low latency" unlike other high
>>>>>> speed services, which means low end-to-end latency.  That got him
>>>>>> permission from the FCC to operate Starlink at all. His number was, I
>>>>>> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he
>>>>>> probably meant just the time from the ground station to the terminal
>>>>>> through the satellite. But I regularly get 17 msec. between California and
>>>>>> Massachusetts over the public Internet)
>>>>>>>>
>>>>>>>> So 84 might be the current status. That would mean that someone at
>>>>>> Starlink might be paying some attention, but it is a long way from what
>>>>>> Musk implied.
>>>>>>>>
>>>>>>>>
>>>>>>>> PS: I forget the number of the RFC, but the number of packets queued on
>>>>>> an egress link should be chosen by taking the hardware bottleneck
>>>>>> throughput of any path, combined with an end-to-end Internet underlying
>>>>>> delay of about 10 msec. to account for hops between source and destination.
>>>>>> Let's say Starlink allocates 50 Mb/sec to each customer, packets are limited
>>>>>> to 12,000 bits (1500 * 8), so the outbound queues should be limited to
>>>>>> about 0.01 * 50,000,000 / 12,000, which comes out to about 42 packets from
>>>>>> each terminal of buffering, total, in the path from terminal to public
>>>>>> Internet, assuming the connection to the public Internet is not a problem.
>>>>>>> _______________________________________________
>>>>>>> Starlink mailing list
>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>
>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>
>>>>>>> _______________________________________________
>>>>>> Starlink mailing list
>>>>>> Starlink@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>
>>>>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:35                 ` Nathan Owens
@ 2021-07-16 17:39                   ` Jonathan Bennett
  2021-07-19  1:05                     ` Nick Buraglio
  2021-07-17 18:36                   ` David P. Reed
  1 sibling, 1 reply; 37+ messages in thread
From: Jonathan Bennett @ 2021-07-16 17:39 UTC (permalink / raw)
  To: Nathan Owens; +Cc: Mike Puchol, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 10962 bytes --]

On Fri, Jul 16, 2021, 12:35 PM Nathan Owens <nathan@nathan.io> wrote:

> The other case where they could provide benefit is very long-distance
> paths: NY to Tokyo, Johannesburg to London, etc. But presumably at
> high cost, as the capacity will likely be much lower than submarine cables.
>
>>
Or traffic between Starlink customers. A video call between me and someone
else on the Starlink network is going to be drastically better if it can
route over the sats.

>
>> On Fri, Jul 16, 2021 at 10:31 AM Mike Puchol <mike@starlink.sx> wrote:
>
>> Satellite optical links are useful to extend coverage to areas where you
>> don’t have gateways - thus, they will introduce additional latency compared
>> to two space segment hops (terminal to satellite -> satellite to gateway).
>> If you have terminal to satellite, two optical hops, then final satellite
>> to gateway, you will have more latency, not less.
>>
>> We are being “sold” optical links for what they are not IMHO.
>>
>> Best,
>>
>> Mike
>> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>>
>> > As there are more satellites, the up/down time will get closer to 4-5ms
>> rather than the ~7ms you list
>>
>> Possibly, if you do steering to always jump to the lowest latency
>> satellite.
>>
>> > with laser relays in orbit, and terminal to terminal routing in orbit,
>> there is the potential for the theoretical minimum to tend lower
>> Maybe for certain users really in the middle of nowhere, but I did the
>> best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>>
>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>>
>>> hey, it's a good attitude to have :-)
>>>
>>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come
>>> very
>>> close to the goal, if not exceed it.
>>>
>>> As there are more satellites, the up/down time will get closer to 4-5ms
>>> rather
>>> than the ~7ms you list, and with laser relays in orbit, and terminal to
>>> terminal
>>> routing in orbit, there is the potential for the theoretical minimum to
>>> tend
>>> lower, giving some headroom for other overhead but still being in the
>>> 20ms
>>> range.
>>>
>>> David Lang
>>>
>>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>>
>>> > Elon said "foolish packet routing" for things over 20ms! Which seems
>>> crazy
>>> > if you do some basic math:
>>> >
>>> >   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>> >   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>> >   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>>> >   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>>> >   - Total one-way delay: 4.3 - 11.1ms
>>> >   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>>> >
>>> > This includes no transmission delay, queuing delay,
>>> > processing/fragmentation/reassembly/etc, and no time-division
>>> multiplexing.
>>> >
>>> > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>>> >
>>> >> I think it depends on if you are looking at datacenter-to-datacenter
>>> >> latency of
>>> >> home to remote datacenter latency :-)
>>> >>
>>> >> my rule of thumb for cross US ping time has been 80-100ms latency (but
>>> >> it's been
>>> >> a few years since I tested it).
>>> >>
>>> >> I note that an article I saw today said that Elon is saying that
>>> latency
>>> >> will
>>> >> improve significantly in the near future, that up/down latency is
>>> ~20ms
>>> >> and the
>>> >> additional delays pushing it to the 80ms range are 'stupid packet
>>> routing'
>>> >> problems that they are working on.
>>> >>
>>> >> If they are still in that level of optimization, it doesn't surprise
>>> me
>>> >> that
>>> >> they haven't really focused on the bufferbloat issue, they have more
>>> >> obvious
>>> >> stuff to fix first.
>>> >>
>>> >> David Lang
>>> >>
>>> >>
>>> >>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>> >>
>>> >>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>>> >>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>>> >>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>>> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net
>>> >
>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>> >>>
>>> >>> Hi David
>>> >>> In terms of the Latency that David (Reed) mentioned for California to
>>> >> Massachusetts of about 17ms over the public internet, it seems a bit
>>> faster
>>> >> than what I would expect. My own traceroute via my VDSL link shows
>>> 14ms
>>> >> just to get out of the operator network.
>>> >>>
>>> >>> https://www.wondernetwork.com  is a handy tool for checking
>>> geographic
>>> >> ping perf between cities, and it shows a min of about 66ms for pings
>>> >> between Boston and San Diego
>>> >> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
>>> >> 1-way transfer).
>>> >>>
>>> >>> Distance-wise this is about 4,100 km (2,500 mi), and @2/3 speed of
>>> light
>>> >> (through a pure fibre link of that distance) the propagation time is
>>> just
>>> >> over 20ms. If the network equipment between the Boston and San Diego
>>> is
>>> >> factored in, with some buffering along the way, 33ms does seem quite
>>> >> reasonable over the 20ms for speed of light in fibre for that 1-way
>>> transfer
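The propagation figure above is easy to reproduce; a quick sketch, taking the 4,100 km distance as quoted and the usual ~2/3 c approximation for light in fiber:

```python
# One-way propagation time for ~4,100 km of fiber at ~2/3 the speed of light.
C_KM_S = 299_792                 # speed of light in vacuum, km/s
distance_km = 4100               # Boston - San Diego, as quoted in the thread
v_fiber_km_s = C_KM_S * 2 / 3    # ~2/3 c through glass

t_ms = distance_km / v_fiber_km_s * 1000
print(f"{t_ms:.1f} ms one-way")  # just over 20 ms, as stated
```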
>>> >>>
>>> >>> -Ian Wheelock
>>> >>>
>>> >>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>>> >> David Lang <david@lang.hm>
>>> >>> Date: Friday 9 July 2021 at 23:59
>>> >>> To: "David P. Reed" <dpreed@deepplum.com>
>>> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net
>>> >
>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>> >>>
>>> >>>
>>> >>> IIRC, the definition of 'low latency' for the FCC was something like
>>> >> 100ms, and
>>> >>> Musk was predicting <40ms.
>>> >>>
>>> >>> roughly competitive with landlines, and worlds better than
>>> geostationary
>>> >>> satellite (and many wireless ISPs)
>>> >>>
>>> >>> but when doing any serious testing of latency, you need to be wired
>>> to
>>> >> the
>>> >>> router, wifi introduces so much variability that it swamps the
>>> signal.
>>> >>>
>>> >>> David Lang
>>> >>>
>>> >>> On Fri, 9 Jul 2021, David P. Reed wrote:
>>> >>>
>>> >>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>>> >>>> From: David P. Reed <dpreed@deepplum.com>
>>> >>>> To: starlink@lists.bufferbloat.net
>>> >>>> Subject: [Starlink] Starlink and bufferbloat status?
>>> >>>>
>>> >>>>
>>> >>>> Early measurements of performance of Starlink have shown significant
>>> >> bufferbloat, as Dave Taht has shown.
>>> >>>>
>>> >>>> But...  Starlink is a moving target. The bufferbloat isn't a
>>> hardware
>>> >> issue, it should be completely manageable, starting by simple firmware
>>> >> changes inside the Starlink system itself. For example, implementing
>>> >> fq_codel so that bottleneck links just drop packets according to the
>>> Best
>>> Practices RFC.
>>> >>>>
>>> >>>> So I'm hoping this has improved since Dave's measurements. How much
>>> has
>>> >> it improved? What's the current maximum packet latency under full
>>> >> load,  I've heard anecdotally that a friend of a friend gets 84 msec.
>>> *ping
>>> >> times under full load*, but he wasn't using flent or some other
>>> measurement
>>> >> tool of good quality that gives a true number.
>>> >>>>
>>> >>>> 84 msec is not great - it's marginal for Zoom quality experience
>>> (you
>>> >> want latencies significantly less than 100 msec. as a rule of thumb
>>> for
>>> >> teleconferencing quality). But it is better than Dave's measurements
>>> showed.
>>> >>>>
>>> >>>> Now Musk bragged that his network was "low latency" unlike other
>>> high
>>> >> speed services, which means low end-to-end latency.  That got him
>>> >> permission from the FCC to operate Starlink at all. His number was, I
>>> >> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5,
>>> because he
>>> >> probably meant just the time from the ground station to the terminal
>>> >> through the satellite. But I regularly get 17 msec. between
>>> California and
>>> >> Massachusetts over the public Internet)
>>> >>>>
>>> >>>> So 84 might be the current status. That would mean that someone at
>>> >> Starlink might be paying some attention, but it is a long way from
>>> what
>>> >> Musk implied.
>>> >>>>
>>> >>>>
>>> >>>> PS: I forget the number of the RFC, but the number of packets
>>> queued on
>>> >> an egress link should be chosen by taking the hardware bottleneck
>>> >> throughput of any path, combined with an end-to-end Internet
>>> underlying
>>> >> delay of about 10 msec. to account for hops between source and
>>> destination.
>>> >> Let's say Starlink allocates 50 Mb/sec to each customer, packets are
>>> limited
>>> >> to 10,000 bits (1500 * 8), so the outbound queues should be limited to
>>> >> about 0.01 * 50,000,000 / 10,000, which comes out to about 250
>>> packets from
>>> >> each terminal of buffering, total, in the path from terminal to public
>>> >> Internet, assuming the connection to the public Internet is not a
>>> problem.
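The PS arithmetic can be sketched as code. Both figures (50 Mb/s per customer, the delay budget) are the email's assumptions, not known Starlink numbers; note also that 1500 * 8 is actually 12,000 bits, not 10,000, and that with the stated 10 msec budget the formula yields about 50 packets, while the quoted 250 corresponds to a ~50 msec budget:

```python
# Bandwidth-delay-product (BDP) queue sizing from the PS above. The rate,
# delay budget, and 10,000-bit packet size are the thread's assumptions.
def queue_limit_packets(rate_bps: float, delay_s: float,
                        packet_bits: int = 10_000) -> float:
    """Packets of buffering needed to cover rate * delay (the BDP)."""
    return rate_bps * delay_s / packet_bits

print(queue_limit_packets(50_000_000, 0.010))  # 10 ms budget -> 50.0 packets
print(queue_limit_packets(50_000_000, 0.050))  # 50 ms budget -> 250.0 packets
```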
>>> >>> _______________________________________________
>>> >>> Starlink mailing list
>>> >>> Starlink@lists.bufferbloat.net
>>> >>>
>>> >>
>>> https://lists.bufferbloat.net/listinfo/starlink
>>> >>>
>>> >>> _______________________________________________
>>> >> Starlink mailing list
>>> >> Starlink@lists.bufferbloat.net
>>> >> https://lists.bufferbloat.net/listinfo/starlink
>>> >>
>>> >
>>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>

[-- Attachment #2: Type: text/html, Size: 16646 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:38                 ` David Lang
@ 2021-07-16 17:42                   ` Mike Puchol
  2021-07-16 18:48                     ` David Lang
  0 siblings, 1 reply; 37+ messages in thread
From: Mike Puchol @ 2021-07-16 17:42 UTC (permalink / raw)
  To: David Lang; +Cc: Nathan Owens, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 13135 bytes --]

True, but we are then assuming that the optical links form a mesh between satellites in the same plane, plus between planes. From an engineering point of view, keeping optical links in-plane only makes the system much simpler (no full-FOV gimbals with the optical train in them, for example), and it solves the issue, as it is highly likely that at least one satellite in any given plane will be within reach of a gateway.

Routing to an arbitrary gateway may involve passing via intermediate gateways, ground segments, and even using terminals as a hopping point.

Best,

Mike
On Jul 16, 2021, 19:38 +0200, David Lang <david@lang.hm>, wrote:
> the speed of light in a vacuum is significantly better than the speed of light
> in fiber, so if you are doing a cross country hop, terminal -> sat -> sat -> sat
> -> ground station (especially if the ground station is in the target datacenter)
> can be faster than terminal -> sat -> ground station -> cross-country fiber,
> even accounting for the longer distance at 550km altitude than at ground level.
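That comparison can be sketched roughly. The 4,000 km ground distance is an assumed round figure for illustration, and the geometry is deliberately crude (straight up, across at altitude, straight down, ignoring slant angles and per-hop overhead):

```python
# Compare a cross-country hop carried in vacuum via 550 km satellites
# against the same ground distance in fiber (light in fiber ~2/3 c).
# d_ground is an assumed illustrative distance, not a quoted value.
C_KM_S = 299_792       # speed of light in vacuum, km/s
d_ground = 4000        # km, assumed cross-country ground distance
altitude = 550         # km, Starlink shell altitude

space_path_km = altitude + d_ground + altitude   # up, across, down (crude)
t_space_ms = space_path_km / C_KM_S * 1000
t_fiber_ms = d_ground / (C_KM_S * 2 / 3) * 1000

print(f"via sats (vacuum): {t_space_ms:.1f} ms, fiber: {t_fiber_ms:.1f} ms")
```

Even with the extra 1,100 km of climb and descent, the vacuum path comes out ahead of fiber over the same ground distance.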
>
> This has interesting implications for supplementing/replacing undersea cables:
> since the sats over the ocean are not going to be heavily used, dedicated
> ground stations could be set up that use sats further offshore than normal
> (and are shielded from sats over land) to leverage the system without
> interfering significantly with more 'traditional' uses.
>
> David Lang
>
> On Fri, 16 Jul 2021, Mike Puchol wrote:
>
> > Date: Fri, 16 Jul 2021 19:31:37 +0200
> > From: Mike Puchol <mike@starlink.sx>
> > To: David Lang <david@lang.hm>, Nathan Owens <nathan@nathan.io>
> > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> > David P. Reed <dpreed@deepplum.com>
> > Subject: Re: [Starlink] Starlink and bufferbloat status?
> >
> > Satellite optical links are useful to extend coverage to areas where you don’t have gateways - thus, they will introduce additional latency compared to two space segment hops (terminal to satellite -> satellite to gateway). If you have terminal to satellite, two optical hops, then final satellite to gateway, you will have more latency, not less.
> >
> > We are being “sold” optical links for what they are not IMHO.
> >
> > Best,
> >
> > Mike
> > On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
> > > > As there are more satellites, the up down time will get closer to 4-5ms rather than the ~7ms you list
> > >
> > > Possibly, if you do steering to always jump to the lowest latency satellite.
> > >
> > > > with laser relays in orbit, and terminal to terminal routing in orbit, there is the potential for the theoretical minimum to tend lower
> > > Maybe for certain users really in the middle of nowhere, but I did the best-case math for "bent pipe" in Seattle area, which is as good as it gets.
> > >
> > > > On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
> > > > > hey, it's a good attitude to have :-)
> > > > >
> > > > > Elon tends to set 'impossible' goals, miss the timeline a bit, and come very
> > > > > close to the goal, if not exceed it.
> > > > >
> > > > > As there are more satellites, the up down time will get closer to 4-5ms rather
> > > > > than the ~7ms you list, and with laser relays in orbit, and terminal to terminal
> > > > > routing in orbit, there is the potential for the theoretical minimum to tend
> > > > > lower, giving some headroom for other overhead but still being in the 20ms
> > > > > range.
> > > > >
> > > > > David Lang
> > > > >
> > > > >   On Fri, 16 Jul 2021, Nathan Owens wrote:
> > > > >
> > > > > > Elon said "foolish packet routing" for things over 20ms! Which seems crazy
> > > > > > if you do some basic math:
> > > > > >
> > > > > >    - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
> > > > > >    - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
> > > > > >    - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
> > > > > >    - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
> > > > > >    - Total one-way delay: 4.3 - 11.1ms
> > > > > >    - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
> > > > > >
> > > > > > This includes no transmission delay, queuing delay,
> > > > > > processing/fragmentation/reassembly/etc, and no time-division multiplexing.
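The component sums above can be checked directly; a quick sketch, using only the ranges quoted in the thread (estimates, not measurements):

```python
# Sum the quoted one-way latency components and double for the RTT.
# All figures are the thread's estimated ranges, not measured values.
components_ms = {
    "sat_to_terminal": (1.9, 3.3),   # 550-950 km, air/vacuum
    "sat_to_gateway":  (1.9, 3.3),   # 550-950 km, air/vacuum
    "gw_to_pop":       (0.25, 4.0),  # 50-800 km of fiber
    "pop_to_internet": (0.25, 0.5),  # ~50 km of fiber
}

one_way_min = sum(lo for lo, _ in components_ms.values())
one_way_max = sum(hi for _, hi in components_ms.values())
rtt_min, rtt_max = 2 * one_way_min, 2 * one_way_max

print(f"one-way: {one_way_min:.1f}-{one_way_max:.1f} ms")   # 4.3-11.1 ms
print(f"RTT: {rtt_min:.1f}-{rtt_max:.1f} ms, "
      f"midpoint {(rtt_min + rtt_max) / 2:.1f} ms")          # 8.6-22.2, 15.4
```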
> > > > > >
> > > > > > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
> > > > > >
> > > > > > > I think it depends on if you are looking at datacenter-to-datacenter
> > > > > > > latency or
> > > > > > > home to remote datacenter latency :-)
> > > > > > >
> > > > > > > my rule of thumb for cross US ping time has been 80-100ms latency (but
> > > > > > > it's been
> > > > > > > a few years since I tested it).
> > > > > > >
> > > > > > > I note that an article I saw today said that Elon is saying that latency
> > > > > > > will
> > > > > > > improve significantly in the near future, that up/down latency is ~20ms
> > > > > > > and the
> > > > > > > additional delays pushing it to the 80ms range are 'stupid packet routing'
> > > > > > > problems that they are working on.
> > > > > > >
> > > > > > > If they are still in that level of optimization, it doesn't surprise me
> > > > > > > that
> > > > > > > they haven't really focused on the bufferbloat issue, they have more
> > > > > > > obvious
> > > > > > > stuff to fix first.
> > > > > > >
> > > > > > > David Lang
> > > > > > >
> > > > > > >
> > > > > > >    On Fri, 16 Jul 2021, Wheelock, Ian wrote:
> > > > > > >
> > > > > > > > Date: Fri, 16 Jul 2021 10:21:52 +0000
> > > > > > > > From: "Wheelock, Ian" <ian.wheelock@commscope.com>
> > > > > > > > To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
> > > > > > > > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > > > > > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > > > > > >
> > > > > > > > Hi David
> > > > > > > > In terms of the Latency that David (Reed) mentioned for California to
> > > > > > > Massachusetts of about 17ms over the public internet, it seems a bit faster
> > > > > > > than what I would expect. My own traceroute via my VDSL link shows 14ms
> > > > > > > just to get out of the operator network.
> > > > > > > >
> > > > > > > > https://www.wondernetwork.com  is a handy tool for checking geographic
> > > > > > > ping perf between cities, and it shows a min of about 66ms for pings
> > > > > > > between Boston and San Diego
> > > > > > > https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
> > > > > > > 1-way transfer).
> > > > > > > >
> > > > > > > > > Distance-wise this is about 4,100 km (2,500 mi), and @2/3 speed of light
> > > > > > > (through a pure fibre link of that distance) the propagation time is just
> > > > > > > over 20ms. If the network equipment between the Boston and San Diego is
> > > > > > > factored in, with some buffering along the way, 33ms does seem quite
> > > > > > > reasonable over the 20ms for speed of light in fibre for that 1-way transfer
> > > > > > > >
> > > > > > > > -Ian Wheelock
> > > > > > > >
> > > > > > > > From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
> > > > > > > David Lang <david@lang.hm>
> > > > > > > > Date: Friday 9 July 2021 at 23:59
> > > > > > > > To: "David P. Reed" <dpreed@deepplum.com>
> > > > > > > > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > > > > > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > > > > > >
> > > > > > > >
> > > > > > > > IIRC, the definition of 'low latency' for the FCC was something like
> > > > > > > 100ms, and
> > > > > > > > Musk was predicting <40ms.
> > > > > > > >
> > > > > > > > roughly competitive with landlines, and worlds better than geostationary
> > > > > > > > satellite (and many wireless ISPs)
> > > > > > > >
> > > > > > > > but when doing any serious testing of latency, you need to be wired to
> > > > > > > the
> > > > > > > > router, wifi introduces so much variability that it swamps the signal.
> > > > > > > >
> > > > > > > > David Lang
> > > > > > > >
> > > > > > > > On Fri, 9 Jul 2021, David P. Reed wrote:
> > > > > > > >
> > > > > > > > > Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
> > > > > > > > > From: David P. Reed <dpreed@deepplum.com>
> > > > > > > > > To: starlink@lists.bufferbloat.net
> > > > > > > > > Subject: [Starlink] Starlink and bufferbloat status?
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Early measurements of performance of Starlink have shown significant
> > > > > > > bufferbloat, as Dave Taht has shown.
> > > > > > > > >
> > > > > > > > > But...  Starlink is a moving target. The bufferbloat isn't a hardware
> > > > > > > issue, it should be completely manageable, starting by simple firmware
> > > > > > > changes inside the Starlink system itself. For example, implementing
> > > > > > > fq_codel so that bottleneck links just drop packets according to the Best
> > > > > > > Practices RFC.
> > > > > > > > >
> > > > > > > > > So I'm hoping this has improved since Dave's measurements. How much has
> > > > > > > it improved? What's the current maximum packet latency under full
> > > > > > > load,  I've heard anecdotally that a friend of a friend gets 84 msec. *ping
> > > > > > > times under full load*, but he wasn't using flent or some other measurement
> > > > > > > tool of good quality that gives a true number.
> > > > > > > > >
> > > > > > > > > 84 msec is not great - it's marginal for Zoom quality experience (you
> > > > > > > want latencies significantly less than 100 msec. as a rule of thumb for
> > > > > > > teleconferencing quality). But it is better than Dave's measurements showed.
> > > > > > > > >
> > > > > > > > > Now Musk bragged that his network was "low latency" unlike other high
> > > > > > > speed services, which means low end-to-end latency.  That got him
> > > > > > > permission from the FCC to operate Starlink at all. His number was, I
> > > > > > > think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he
> > > > > > > probably meant just the time from the ground station to the terminal
> > > > > > > through the satellite. But I regularly get 17 msec. between California and
> > > > > > > Massachusetts over the public Internet)
> > > > > > > > >
> > > > > > > > > So 84 might be the current status. That would mean that someone at
> > > > > > > Starlink might be paying some attention, but it is a long way from what
> > > > > > > Musk implied.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > PS: I forget the number of the RFC, but the number of packets queued on
> > > > > > > an egress link should be chosen by taking the hardware bottleneck
> > > > > > > throughput of any path, combined with an end-to-end Internet underlying
> > > > > > > delay of about 10 msec. to account for hops between source and destination.
> > > > > > > Let's say Starlink allocates 50 Mb/sec to each customer, packets are limited
> > > > > > > to 10,000 bits (1500 * 8), so the outbound queues should be limited to
> > > > > > > about 0.01 * 50,000,000 / 10,000, which comes out to about 250 packets from
> > > > > > > each terminal of buffering, total, in the path from terminal to public
> > > > > > > Internet, assuming the connection to the public Internet is not a problem.
> > > > > > > > _______________________________________________
> > > > > > > > Starlink mailing list
> > > > > > > > Starlink@lists.bufferbloat.net
> > > > > > > >
> > > > > > > https://lists.bufferbloat.net/listinfo/starlink
> > > > > > > >
> > > > > > > > _______________________________________________
> > > > > > > Starlink mailing list
> > > > > > > Starlink@lists.bufferbloat.net
> > > > > > > https://lists.bufferbloat.net/listinfo/starlink
> > > > > > >
> > > > > >
> > > _______________________________________________
> > > Starlink mailing list
> > > Starlink@lists.bufferbloat.net
> > > https://lists.bufferbloat.net/listinfo/starlink

[-- Attachment #2: Type: text/html, Size: 13751 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:42                   ` Mike Puchol
@ 2021-07-16 18:48                     ` David Lang
  2021-07-16 20:57                       ` Mike Puchol
  0 siblings, 1 reply; 37+ messages in thread
From: David Lang @ 2021-07-16 18:48 UTC (permalink / raw)
  To: Mike Puchol; +Cc: David Lang, Nathan Owens, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 12719 bytes --]

I expect the lasers to have 2D gimbals, which let them track most things in 
their field of view. Remember that Starlink has compressed their orbital planes, 
they are going to be running almost everything in the 550km range (500-600km 
IIRC) and have almost entirely eliminated the ~1000km planes

David Lang

  On Fri, 16 Jul 2021, Mike Puchol wrote:

> Date: Fri, 16 Jul 2021 19:42:55 +0200
> From: Mike Puchol <mike@starlink.sx>
> To: David Lang <david@lang.hm>
> Cc: Nathan Owens <nathan@nathan.io>,
>     "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
>     David P. Reed <dpreed@deepplum.com>
> Subject: Re: [Starlink] Starlink and bufferbloat status?
> 
> True, but we are then assuming that the optical links form a mesh between satellites in the same plane, plus between planes. From an engineering point of view, keeping optical links in-plane only makes the system much simpler (no full-FOV gimbals with the optical train in them, for example), and it solves the issue, as it is highly likely that at least one satellite in any given plane will be within reach of a gateway.
>
> Routing to an arbitrary gateway may involve passing via intermediate gateways, ground segments, and even using terminals as a hopping point.
>
> Best,
>
> Mike
> On Jul 16, 2021, 19:38 +0200, David Lang <david@lang.hm>, wrote:
>> the speed of light in a vacuum is significantly better than the speed of light
>> in fiber, so if you are doing a cross country hop, terminal -> sat -> sat -> sat
>> -> ground station (especially if the ground station is in the target datacenter)
>> can be faster than terminal -> sat -> ground station -> cross-country fiber,
>> even accounting for the longer distance at 550km altitude than at ground level.
>>
>> This has interesting implications for supplementing/replacing undersea cables:
>> since the sats over the ocean are not going to be heavily used, dedicated
>> ground stations could be set up that use sats further offshore than normal
>> (and are shielded from sats over land) to leverage the system without
>> interfering significantly with more 'traditional' uses.
>>
>> David Lang
>>
>> On Fri, 16 Jul 2021, Mike Puchol wrote:
>>
>>> Date: Fri, 16 Jul 2021 19:31:37 +0200
>>> From: Mike Puchol <mike@starlink.sx>
>>> To: David Lang <david@lang.hm>, Nathan Owens <nathan@nathan.io>
>>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
>>> David P. Reed <dpreed@deepplum.com>
>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>
>>> Satellite optical links are useful to extend coverage to areas where you don’t have gateways - thus, they will introduce additional latency compared to two space segment hops (terminal to satellite -> satellite to gateway). If you have terminal to satellite, two optical hops, then final satellite to gateway, you will have more latency, not less.
>>>
>>> We are being “sold” optical links for what they are not IMHO.
>>>
>>> Best,
>>>
>>> Mike
>>> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>>>>> As there are more satellites, the up down time will get closer to 4-5ms rather than the ~7ms you list
>>>>
>>>> Possibly, if you do steering to always jump to the lowest latency satellite.
>>>>
>>>>> with laser relays in orbit, and terminal to terminal routing in orbit, there is the potential for the theoretical minimum to tend lower
>>>> Maybe for certain users really in the middle of nowhere, but I did the best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>>>>
>>>>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>>>>>> hey, it's a good attitude to have :-)
>>>>>>
>>>>>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come very
>>>>>> close to the goal, if not exceed it.
>>>>>>
>>>>>> As there are more satellites, the up down time will get closer to 4-5ms rather
>>>>>> than the ~7ms you list, and with laser relays in orbit, and terminal to terminal
>>>>>> routing in orbit, there is the potential for the theoretical minimum to tend
>>>>>> lower, giving some headroom for other overhead but still being in the 20ms
>>>>>> range.
>>>>>>
>>>>>> David Lang
>>>>>>
>>>>>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>>>>>
>>>>>>> Elon said "foolish packet routing" for things over 20ms! Which seems crazy
>>>>>>> if you do some basic math:
>>>>>>>
>>>>>>>    - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>>>>>    - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>>>>>    - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>>>>>>>    - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>>>>>>>    - Total one-way delay: 4.3 - 11.1ms
>>>>>>>    - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>>>>>>>
>>>>>>> This includes no transmission delay, queuing delay,
>>>>>>> processing/fragmentation/reassembly/etc, and no time-division multiplexing.
>>>>>>>
>>>>>>> On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>>>>>>>
>>>>>>>> I think it depends on if you are looking at datacenter-to-datacenter
>>>>>>>> latency or
>>>>>>>> home to remote datacenter latency :-)
>>>>>>>>
>>>>>>>> my rule of thumb for cross US ping time has been 80-100ms latency (but
>>>>>>>> it's been
>>>>>>>> a few years since I tested it).
>>>>>>>>
>>>>>>>> I note that an article I saw today said that Elon is saying that latency
>>>>>>>> will
>>>>>>>> improve significantly in the near future, that up/down latency is ~20ms
>>>>>>>> and the
>>>>>>>> additional delays pushing it to the 80ms range are 'stupid packet routing'
>>>>>>>> problems that they are working on.
>>>>>>>>
>>>>>>>> If they are still in that level of optimization, it doesn't surprise me
>>>>>>>> that
>>>>>>>> they haven't really focused on the bufferbloat issue, they have more
>>>>>>>> obvious
>>>>>>>> stuff to fix first.
>>>>>>>>
>>>>>>>> David Lang
>>>>>>>>
>>>>>>>>
>>>>>>>>    On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>>>>>>>
>>>>>>>>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>>>>>>>>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>>>>>>>>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>>>>>>>>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>>>>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>>>>>>>
>>>>>>>>> Hi David
>>>>>>>>> In terms of the Latency that David (Reed) mentioned for California to
>>>>>>>> Massachusetts of about 17ms over the public internet, it seems a bit faster
>>>>>>>> than what I would expect. My own traceroute via my VDSL link shows 14ms
>>>>>>>> just to get out of the operator network.
>>>>>>>>>
>>>>>>>>> https://www.wondernetwork.com  is a handy tool for checking geographic
>>>>>>>> ping perf between cities, and it shows a min of about 66ms for pings
>>>>>>>> between Boston and San Diego
>>>>>>>> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
>>>>>>>> 1-way transfer).
>>>>>>>>>
>>>>>>>>> Distance-wise this is about 4,100 km (2,500 mi), and @2/3 speed of light
>>>>>>>> (through a pure fibre link of that distance) the propagation time is just
>>>>>>>> over 20ms. If the network equipment between the Boston and San Diego is
>>>>>>>> factored in, with some buffering along the way, 33ms does seem quite
>>>>>>>> reasonable over the 20ms for speed of light in fibre for that 1-way transfer
>>>>>>>>>
>>>>>>>>> -Ian Wheelock
>>>>>>>>>
>>>>>>>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>>>>>>>> David Lang <david@lang.hm>
>>>>>>>>> Date: Friday 9 July 2021 at 23:59
>>>>>>>>> To: "David P. Reed" <dpreed@deepplum.com>
>>>>>>>>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>>>>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> IIRC, the definition of 'low latency' for the FCC was something like
>>>>>>>> 100ms, and
>>>>>>>>> Musk was predicting <40ms.
>>>>>>>>>
>>>>>>>>> roughly competitive with landlines, and worlds better than geostationary
>>>>>>>>> satellite (and many wireless ISPs)
>>>>>>>>>
>>>>>>>>> but when doing any serious testing of latency, you need to be wired to
>>>>>>>> the
>>>>>>>>> router, wifi introduces so much variability that it swamps the signal.
>>>>>>>>>
>>>>>>>>> David Lang
>>>>>>>>>
>>>>>>>>> On Fri, 9 Jul 2021, David P. Reed wrote:
>>>>>>>>>
>>>>>>>>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>>>>>>>>>> From: David P. Reed <dpreed@deepplum.com>
>>>>>>>>>> To: starlink@lists.bufferbloat.net
>>>>>>>>>> Subject: [Starlink] Starlink and bufferbloat status?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Early measurements of performance of Starlink have shown significant
>>>>>>>> bufferbloat, as Dave Taht has shown.
>>>>>>>>>>
>>>>>>>>>> But...  Starlink is a moving target. The bufferbloat isn't a hardware
>>>>>>>> issue, it should be completely manageable, starting by simple firmware
>>>>>>>> changes inside the Starlink system itself. For example, implementing
>>>>>>>> fq_codel so that bottleneck links just drop packets according to the Best
>>>>>>>> Practices RFC.
>>>>>>>>>>
>>>>>>>>>> So I'm hoping this has improved since Dave's measurements. How much has
>>>>>>>> it improved? What's the current maximum packet latency under full
>>>>>>>> load,  I've heard anecdotally that a friend of a friend gets 84 msec. *ping
>>>>>>>> times under full load*, but he wasn't using flent or some other measurement
>>>>>>>> tool of good quality that gives a true number.
>>>>>>>>>>
>>>>>>>>>> 84 msec is not great - it's marginal for Zoom quality experience (you
>>>>>>>> want latencies significantly less than 100 msec. as a rule of thumb for
>>>>>>>> teleconferencing quality). But it is better than Dave's measurements showed.
>>>>>>>>>>
>>>>>>>>>> Now Musk bragged that his network was "low latency" unlike other high
>>>>>>>> speed services, which means low end-to-end latency.  That got him
>>>>>>>> permission from the FCC to operate Starlink at all. His number was, I
>>>>>>>> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he
>>>>>>>> probably meant just the time from the ground station to the terminal
>>>>>>>> through the satellite. But I regularly get 17 msec. between California and
>>>>>>>> Massachusetts over the public Internet)
>>>>>>>>>>
>>>>>>>>>> So 84 might be the current status. That would mean that someone at
>>>>>>>> Starlink might be paying some attention, but it is a long way from what
>>>>>>>> Musk implied.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> PS: I forget the number of the RFC, but the number of packets queued on
>>>>>>>> an egress link should be chosen by taking the hardware bottleneck
>>>>>>>> throughput of any path, combined with an end-to-end Internet underlying
>>>>>>>> delay of about 10 msec. to account for hops between source and destination.
>>>>>>>> Let's say Starlink allocates 50 Mb/sec to each customer, packets are limited
>>>>>>>> to 10,000 bits (1500 * 8), so the outbound queues should be limited to
>>>>>>>> about 0.01 * 50,000,000 / 10,000, which comes out to about 250 packets from
>>>>>>>> each terminal of buffering, total, in the path from terminal to public
>>>>>>>> Internet, assuming the connection to the public Internet is not a problem.
>>>>>>>>> _______________________________________________
>>>>>>>>> Starlink mailing list
>>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>>>
>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>> Starlink mailing list
>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>
>>>>>>>
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink
>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:24           ` David Lang
  2021-07-16 17:29             ` Nathan Owens
@ 2021-07-16 20:51             ` Michael Richardson
  2021-07-18 19:17               ` David Lang
  1 sibling, 1 reply; 37+ messages in thread
From: Michael Richardson @ 2021-07-16 20:51 UTC (permalink / raw)
  To: David Lang, Nathan Owens, starlink, David P. Reed

David Lang <david@lang.hm> wrote:
    > As there are more satellites, the up down time will get closer to 4-5ms
    > rather than the ~7ms you list, and with laser relays in orbit, and terminal
    > to terminal routing in orbit, there is the potential for the theoretical
    > minimum to tend lower, giving some headroom for other overhead but still
    > being in the 20ms range.

I really want this to happen, but how will this get managed?
We really don't know shit, and I'm not convinced SpaceX knows either.

I'm scared that these paths will be centrally managed, and not based upon
longest-prefix (IPv6) match.

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 18:48                     ` David Lang
@ 2021-07-16 20:57                       ` Mike Puchol
  2021-07-16 21:30                         ` David Lang
  0 siblings, 1 reply; 37+ messages in thread
From: Mike Puchol @ 2021-07-16 20:57 UTC (permalink / raw)
  To: David Lang; +Cc: Nathan Owens, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 15535 bytes --]

Correct. A mirror tracking head that rotates about the axis perpendicular to the satellite's path allows you to track satellites in the same plane, in front or behind, when they change altitude by a few kilometers as part of orbital adjustments or collision avoidance. A fully gimbaled head that can track any satellite in any direction (and at any relative velocity!) is a totally different problem. I could see satellites linked to the next longitudinal plane in addition to those on the same plane, but cross-plane links when one satellite is ascending and the other descending are way harder. The next shells will be at lower altitudes, around 300-350 km, and they have also stated they want to go for higher shells at 1000+ km.

Best,

Mike
On Jul 16, 2021, 20:48 +0200, David Lang <david@lang.hm>, wrote:
> I expect the lasers to have 2d gimbals, which lets them track most things in
> their field of view. Remember that Starlink has compressed their orbital planes,
> they are going to be running almost everything in the 550km range (500-600km
> IIRC) and have almost entirely eliminated the ~1000km planes
>
> David Lang
>
> On Fri, 16 Jul 2021,
> Mike Puchol wrote:
>
> > Date: Fri, 16 Jul 2021 19:42:55 +0200
> > From: Mike Puchol <mike@starlink.sx>
> > To: David Lang <david@lang.hm>
> > Cc: Nathan Owens <nathan@nathan.io>,
> > "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> > David P. Reed <dpreed@deepplum.com>
> > Subject: Re: [Starlink] Starlink and bufferbloat status?
> >
> > True, but we are then assuming that the optical links are a mesh between satellites in the same plane, plus between planes. From an engineering point of view, keeping optical links in-plane only makes the system much simpler (no full-FOV gimbals with the optical train in them, for example), and it solves the issue, as it is highly likely that at least one satellite in any given plane will be within reach of a gateway.
> >
> > Routing to an arbitrary gateway may involve passing via intermediate gateways, ground segments, and even using terminals as a hopping point.
> >
> > Best,
> >
> > Mike
> > On Jul 16, 2021, 19:38 +0200, David Lang <david@lang.hm>, wrote:
> > > the speed of light in a vacuum is significantly better than the speed of light
> > > in fiber, so if you are doing a cross country hop, terminal -> sat -> sat -> sat
> > > -> ground station (especially if the ground station is in the target datacenter)
> > > can be faster than terminal -> sat -> ground station -> cross-country fiber,
> > > even accounting for the longer distance at 550km altitude than at ground level.
> > >
> > > This has interesting implications for supplementing/replacing undersea cables as
> > > the sats over the ocean are not going to be heavily used, dedicated ground
> > > stations could be set up that use sats further offshore than normal (and are
> > > shielded from sats over land) to leverage the system without interfering
> > > significantly with more 'traditional' uses
> > >
> > > David Lang
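[Editor's sketch putting rough numbers on the vacuum-vs-fiber argument above. The 4,000 km ground distance and the crude +2×550 km up/down penalty for the satellite path are illustrative assumptions, not Starlink specifics.]

```python
# Compare one-way propagation time: terrestrial fiber (light at ~2/3 c)
# versus a satellite path in vacuum (full c) with extra distance for the
# 550 km up and down legs.
C = 299_792.458           # speed of light in vacuum, km/s
C_FIBER = C * 2 / 3       # typical speed of light in optical fiber, km/s

ground_km = 4_000                 # direct cross-country fiber run (assumed)
sat_path_km = 4_000 + 2 * 550     # up, laser hops at ~550 km, down (crude)

fiber_ms = ground_km / C_FIBER * 1e3
space_ms = sat_path_km / C * 1e3
print(f"fiber: {fiber_ms:.1f} ms, via satellites: {space_ms:.1f} ms")
```

Even with ~1,100 km of extra path, the vacuum route comes out ahead, which is the point being made above.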
> > >
> > > On Fri, 16 Jul 2021, Mike Puchol wrote:
> > >
> > > > Date: Fri, 16 Jul 2021 19:31:37 +0200
> > > > From: Mike Puchol <mike@starlink.sx>
> > > > To: David Lang <david@lang.hm>, Nathan Owens <nathan@nathan.io>
> > > > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> > > > David P. Reed <dpreed@deepplum.com>
> > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > >
> > > > Satellite optical links are useful to extend coverage to areas where you don’t have gateways - thus, they will introduce additional latency compared to two space segment hops (terminal to satellite -> satellite to gateway). If you have terminal to satellite, two optical hops, then final satellite to gateway, you will have more latency, not less.
> > > >
> > > > We are being “sold” optical links for what they are not IMHO.
> > > >
> > > > Best,
> > > >
> > > > Mike
> > > > On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
> > > > > > As there are more satellites, the up down time will get closer to 4-5ms rather than the ~7ms you list
> > > > >
> > > > > Possibly, if you do steering to always jump to the lowest latency satellite.
> > > > >
> > > > > > with laser relays in orbit, and terminal to terminal routing in orbit, there is the potential for the theoretical minimum to tend lower
> > > > > Maybe for certain users really in the middle of nowhere, but I did the best-case math for "bent pipe" in Seattle area, which is as good as it gets.
> > > > >
> > > > > > On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
> > > > > > > hey, it's a good attitude to have :-)
> > > > > > >
> > > > > > > Elon tends to set 'impossible' goals, miss the timeline a bit, and come very
> > > > > > > close to the goal, if not exceed it.
> > > > > > >
> > > > > > > > As there are more satellites, the up down time will get closer to 4-5ms rather
> > > > > > > > than the ~7ms you list, and with laser relays in orbit, and terminal to terminal
> > > > > > > routing in orbit, there is the potential for the theoretical minimum to tend
> > > > > > > lower, giving some headroom for other overhead but still being in the 20ms
> > > > > > > range.
> > > > > > >
> > > > > > > David Lang
> > > > > > >
> > > > > > >   On Fri, 16 Jul 2021, Nathan Owens wrote:
> > > > > > >
> > > > > > > > Elon said "foolish packet routing" for things over 20ms! Which seems crazy
> > > > > > > > if you do some basic math:
> > > > > > > >
> > > > > > > >    - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
> > > > > > > >    - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
> > > > > > > >    - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
> > > > > > > >    - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
> > > > > > > >    - Total one-way delay: 4.3 - 11.1ms
> > > > > > > >    - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
> > > > > > > >
> > > > > > > > This includes no transmission delay, queuing delay,
> > > > > > > > processing/fragmentation/reassembly/etc, and no time-division multiplexing.
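[Editor's check of the component sums above; the per-leg values are taken straight from the mail's list.]

```python
# One-way delay legs as (min_ms, max_ms), summed to get the bounds quoted.
legs_ms = {
    "sat <-> user terminal": (1.9, 3.3),
    "sat <-> gateway":       (1.9, 3.3),
    "gateway -> PoP":        (0.25, 4.0),
    "PoP -> Internet":       (0.25, 0.5),
}
one_way_lo = sum(lo for lo, _ in legs_ms.values())
one_way_hi = sum(hi for _, hi in legs_ms.values())
print(round(one_way_lo, 1), round(one_way_hi, 1))          # 4.3 11.1
print(round(2 * one_way_lo, 1), round(2 * one_way_hi, 1))  # 8.6 22.2
```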
> > > > > > > >
> > > > > > > > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
> > > > > > > >
> > > > > > > > > I think it depends on if you are looking at datacenter-to-datacenter
> > > > > > > > > latency or
> > > > > > > > > home to remote datacenter latency :-)
> > > > > > > > >
> > > > > > > > > my rule of thumb for cross US ping time has been 80-100ms latency (but
> > > > > > > > > it's been
> > > > > > > > > a few years since I tested it).
> > > > > > > > >
> > > > > > > > > I note that an article I saw today said that Elon is saying that latency
> > > > > > > > > will
> > > > > > > > > improve significantly in the near future, that up/down latency is ~20ms
> > > > > > > > > and the
> > > > > > > > > additional delays pushing it to the 80ms range are 'stupid packet routing'
> > > > > > > > > problems that they are working on.
> > > > > > > > >
> > > > > > > > > If they are still in that level of optimization, it doesn't surprise me
> > > > > > > > > that
> > > > > > > > > they haven't really focused on the bufferbloat issue, they have more
> > > > > > > > > obvious
> > > > > > > > > stuff to fix first.
> > > > > > > > >
> > > > > > > > > David Lang
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >    On Fri, 16 Jul 2021, Wheelock, Ian wrote:
> > > > > > > > >
> > > > > > > > > > Date: Fri, 16 Jul 2021 10:21:52 +0000
> > > > > > > > > > From: "Wheelock, Ian" <ian.wheelock@commscope.com>
> > > > > > > > > > To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
> > > > > > > > > > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > > > > > > > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > > > > > > > >
> > > > > > > > > > Hi David
> > > > > > > > > > In terms of the Latency that David (Reed) mentioned for California to
> > > > > > > > > Massachusetts of about 17ms over the public internet, it seems a bit faster
> > > > > > > > > than what I would expect. My own traceroute via my VDSL link shows 14ms
> > > > > > > > > just to get out of the operator network.
> > > > > > > > > >
> > > > > > > > > > https://www.wondernetwork.com  is a handy tool for checking geographic
> > > > > > > > > ping perf between cities, and it shows a min of about 66ms for pings
> > > > > > > > > between Boston and San Diego
> > > > > > > > > https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
> > > > > > > > > 1-way transfer).
> > > > > > > > > >
> > > > > > > > > > Distance wise this is about 4,100 km (2,500 mi), and @ 2/3 speed of light
> > > > > > > > > (through a pure fibre link of that distance) the propagation time is just
> > > > > > > > > over 20ms. If the network equipment between the Boston and San Diego is
> > > > > > > > > factored in, with some buffering along the way, 33ms does seem quite
> > > > > > > > > reasonable over the 20ms for speed of light in fibre for that 1-way transfer
> > > > > > > > > >
> > > > > > > > > > -Ian Wheelock
> > > > > > > > > >
> > > > > > > > > > From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
> > > > > > > > > David Lang <david@lang.hm>
> > > > > > > > > > Date: Friday 9 July 2021 at 23:59
> > > > > > > > > > To: "David P. Reed" <dpreed@deepplum.com>
> > > > > > > > > > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > > > > > > > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > > > > > > > >
> > > > > > > > > > IIRC, the definition of 'low latency' for the FCC was something like
> > > > > > > > > 100ms, and
> > > > > > > > > > Musk was predicting <40ms.
> > > > > > > > > >
> > > > > > > > > > roughly competitive with landlines, and worlds better than geostationary
> > > > > > > > > > satellite (and many wireless ISPs)
> > > > > > > > > >
> > > > > > > > > > but when doing any serious testing of latency, you need to be wired to
> > > > > > > > > the
> > > > > > > > > > router, wifi introduces so much variability that it swamps the signal.
> > > > > > > > > >
> > > > > > > > > > David Lang
> > > > > > > > > >

[-- Attachment #2: Type: text/html, Size: 15476 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 20:57                       ` Mike Puchol
@ 2021-07-16 21:30                         ` David Lang
  2021-07-16 21:40                           ` Mike Puchol
  0 siblings, 1 reply; 37+ messages in thread
From: David Lang @ 2021-07-16 21:30 UTC (permalink / raw)
  To: Mike Puchol; +Cc: David Lang, Nathan Owens, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 14691 bytes --]

at satellite distances, you need to adjust your vertical direction depending on 
how far away the satellite you are talking to is, even if it's at the same 
altitude

the difference between shells that are only a few km apart is less than the 
angles you would need to reach satellites in the same shell further away.

David Lang

  On Fri, 16 Jul 2021, Mike Puchol wrote:

> Date: Fri, 16 Jul 2021 22:57:14 +0200
> From: Mike Puchol <mike@starlink.sx>
> To: David Lang <david@lang.hm>
> Cc: Nathan Owens <nathan@nathan.io>,
>     "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
>     David P. Reed <dpreed@deepplum.com>
> Subject: Re: [Starlink] Starlink and bufferbloat status?
> 
> Correct. A mirror tracking head that turns around the perpendicular to the satellite path allows you to track satellites in the same plane, in front or behind, when they change altitude by a few kilometers as part of orbital adjustments or collision avoidance. To have a fully gimbaled head that can track any satellite in any direction (and at any relative velocity!) is a totally different problem. I could see satellites linked to the next longitudinal plane apart from those on the same plane, but cross-plane when one is ascending and the other descending is way harder. The next shells will be at lower altitudes, around 300-350km, and they have also stated they want to go for higher shells at 1000+ km.
>
> Best,
>
> Mike

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 21:30                         ` David Lang
@ 2021-07-16 21:40                           ` Mike Puchol
  2021-07-16 22:40                             ` Jeremy Austin
                                               ` (2 more replies)
  0 siblings, 3 replies; 37+ messages in thread
From: Mike Puchol @ 2021-07-16 21:40 UTC (permalink / raw)
  To: David Lang; +Cc: Nathan Owens, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 17988 bytes --]

If we understand “shell” as a group of satellites at a certain altitude range, there is not much point in linking between shells if you can link within one shell and orbital plane, and that plane has at least one satellite within range of a gateway. I could be proven wrong, but IMHO the first generation of links is meant for intra-plane use, and maybe, at a stretch, cross-plane to the next plane East or West.

Eventually, the only way to go is optical links to the ground as well, as RF will only get you so far. At that stage, every shell will have its own optical links to the ground, with gateways placed in areas with little average cloud cover.

Best,

Mike
On Jul 16, 2021, 23:30 +0200, David Lang <david@lang.hm>, wrote:
> at satellite distances, you need to adjust your vertical direction depending on
> how far away the satellite you are talking to is, even if it's at the same
> altitude
>
> the pointing difference between shells that are only a few km apart is less than the
> angles you need to reach satellites in the same shell farther away.
>
> David Lang
>
> On Fri, 16 Jul 2021, Mike Puchol wrote:
>
> > Date: Fri, 16 Jul 2021 22:57:14 +0200
> > From: Mike Puchol <mike@starlink.sx>
> > To: David Lang <david@lang.hm>
> > Cc: Nathan Owens <nathan@nathan.io>,
> > "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> > David P. Reed <dpreed@deepplum.com>
> > Subject: Re: [Starlink] Starlink and bufferbloat status?
> >
> > Correct. A mirror-tracking head that turns around the perpendicular to the satellite path allows you to track satellites in the same plane, in front or behind, when they change altitude by a few kilometers as part of orbital adjustments or collision avoidance. A fully gimbaled head that can track any satellite in any direction (and at any relative velocity!) is a totally different problem. I could see satellites linked to the next longitudinal plane in addition to those on the same plane, but cross-plane linking when one is ascending and the other descending is way harder. The next shells will be at lower altitudes, around 300-350 km, and they have also stated they want to go for higher shells at 1000+ km.
> >
> > Best,
> >
> > Mike
> > On Jul 16, 2021, 20:48 +0200, David Lang <david@lang.hm>, wrote:
> > > I expect the lasers to have 2-D gimbals, which lets them track most things in
> > > their field of view. Remember that Starlink has compressed their orbital planes:
> > > they are going to be running almost everything in the 550 km range (500-600 km
> > > IIRC) and have almost entirely eliminated the ~1000 km planes.
> > >
> > > David Lang
> > >
> > > On Fri, 16 Jul 2021,
> > > Mike Puchol wrote:
> > >
> > > > Date: Fri, 16 Jul 2021 19:42:55 +0200
> > > > From: Mike Puchol <mike@starlink.sx>
> > > > To: David Lang <david@lang.hm>
> > > > Cc: Nathan Owens <nathan@nathan.io>,
> > > > "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> > > > David P. Reed <dpreed@deepplum.com>
> > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > >
> > > > True, but we are then assuming that the optical links are a mesh between satellites in the same plane, plus between planes. From an engineering point of view, keeping optical links in-plane only makes the system much simpler (no full-FOV gimbals with the optical train in them, for example), and it solves the issue, as it is highly likely that at least one satellite in any given plane will be within reach of a gateway.
> > > >
> > > > Routing to an arbitrary gateway may involve passing via intermediate gateways, ground segments, and even using terminals as a hopping point.
> > > >
> > > > Best,
> > > >
> > > > Mike
> > > > On Jul 16, 2021, 19:38 +0200, David Lang <david@lang.hm>, wrote:
> > > > > the speed of light in a vacuum is significantly faster than the speed of light
> > > > > in fiber, so if you are doing a cross country hop, terminal -> sat -> sat -> sat
> > > > > -> ground station (especially if the ground station is in the target datacenter)
> > > > > can be faster than terminal -> sat -> ground station -> cross-country fiber,
> > > > > even accounting for the longer distance at 550km altitude than at ground level.
> > > > >
> > > > > This has interesting implications for supplementing/replacing undersea cables as
> > > > > the sats over the ocean are not going to be heavily used, dedicated ground
> > > > > stations could be set up that use sats further offshore than normal (and are
> > > > > shielded from sats over land) to leverage the system without interfering
> > > > > significantly with more 'traditional' uses
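The vacuum-vs-fiber argument above can be sketched with rough numbers (the 4,000 km and 4,600 km path lengths are illustrative assumptions, not Starlink specifics):

```python
# Light travels ~50% faster in vacuum than in optical fiber (~2/3 c).
C_VACUUM_KM_S = 299_792.458
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3

def one_way_ms(distance_km: float, speed_km_s: float) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_km / speed_km_s * 1_000

# Cross-country terrestrial fiber path vs. a longer over-the-top path
# (terminal -> sat -> sat -> sat -> ground station) entirely at c.
fiber_ms = one_way_ms(4_000, C_FIBER_KM_S)   # ~20.0 ms
sat_ms = one_way_ms(4_600, C_VACUUM_KM_S)    # ~15.3 ms: longer path, still faster
print(f"fiber: {fiber_ms:.1f} ms, via satellites: {sat_ms:.1f} ms")
```

With these assumed distances the longer satellite path still wins on propagation delay, which is the crux of the undersea-cable argument.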
> > > > >
> > > > > David Lang
> > > > >
> > > > > On Fri, 16 Jul 2021, Mike Puchol wrote:
> > > > >
> > > > > > Date: Fri, 16 Jul 2021 19:31:37 +0200
> > > > > > From: Mike Puchol <mike@starlink.sx>
> > > > > > To: David Lang <david@lang.hm>, Nathan Owens <nathan@nathan.io>
> > > > > > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> > > > > > David P. Reed <dpreed@deepplum.com>
> > > > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > > > >
> > > > > > Satellite optical links are useful to extend coverage to areas where you don’t have gateways - thus, they will introduce additional latency compared to two space segment hops (terminal to satellite -> satellite to gateway). If you have terminal to satellite, two optical hops, then final satellite to gateway, you will have more latency, not less.
> > > > > >
> > > > > > We are being “sold” optical links for what they are not IMHO.
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > > Mike
> > > > > > On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
> > > > > > > > As there are more satellites, the up/down time will get closer to 4-5ms rather than the ~7ms you list
> > > > > > >
> > > > > > > Possibly, if you do steering to always jump to the lowest latency satellite.
> > > > > > >
> > > > > > > > with laser relays in orbit, and terminal to terminal routing in orbit, there is the potential for the theoretical minimum to tend lower
> > > > > > > Maybe for certain users really in the middle of nowhere, but I did the best-case math for "bent pipe" in Seattle area, which is as good as it gets.
> > > > > > >
> > > > > > > > On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
> > > > > > > > > hey, it's a good attitude to have :-)
> > > > > > > > >
> > > > > > > > > Elon tends to set 'impossible' goals, miss the timeline a bit, and come very
> > > > > > > > > close to the goal, if not exceed it.
> > > > > > > > >
> > > > > > > > > As there are more satellites, the up/down time will get closer to 4-5ms rather
> > > > > > > > > than the ~7ms you list, and with laser relays in orbit, and terminal to terminal
> > > > > > > > > routing in orbit, there is the potential for the theoretical minimum to tend
> > > > > > > > > lower, giving some headroom for other overhead but still being in the 20ms
> > > > > > > > > range.
> > > > > > > > >
> > > > > > > > > David Lang
> > > > > > > > >
> > > > > > > > >   On Fri, 16 Jul 2021, Nathan Owens wrote:
> > > > > > > > >
> > > > > > > > > > Elon said "foolish packet routing" for things over 20ms! Which seems crazy
> > > > > > > > > > if you do some basic math:
> > > > > > > > > >
> > > > > > > > > >    - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
> > > > > > > > > >    - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
> > > > > > > > > >    - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
> > > > > > > > > >    - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
> > > > > > > > > >    - Total one-way delay: 4.3 - 11.1ms
> > > > > > > > > >    - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
> > > > > > > > > >
> > > > > > > > > > This includes no transmission delay, queuing delay,
> > > > > > > > > > processing/fragmentation/reassembly/etc, and no time-division multiplexing.
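Summing those components reproduces the totals (a quick Python check; the per-segment figures are the ones listed above):

```python
# (best, worst) one-way delay per segment, in milliseconds, from the list above
components_ms = {
    "sat to user terminal": (1.9, 3.3),
    "sat to gateway": (1.9, 3.3),
    "gateway to PoP": (0.25, 4.0),
    "PoP to Internet": (0.25, 0.5),
}
best = sum(lo for lo, _ in components_ms.values())
worst = sum(hi for _, hi in components_ms.values())
print(f"one-way {best:.1f}-{worst:.1f} ms, RTT {2 * best:.1f}-{2 * worst:.1f} ms")
# -> one-way 4.3-11.1 ms, RTT 8.6-22.2 ms
```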
> > > > > > > > > >
> > > > > > > > > > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
> > > > > > > > > >
> > > > > > > > > > > I think it depends on whether you are looking at datacenter-to-datacenter
> > > > > > > > > > > latency or
> > > > > > > > > > > home to remote datacenter latency :-)
> > > > > > > > > > >
> > > > > > > > > > > my rule of thumb for cross US ping time has been 80-100ms latency (but
> > > > > > > > > > > it's been
> > > > > > > > > > > a few years since I tested it).
> > > > > > > > > > >
> > > > > > > > > > > I note that an article I saw today said that Elon is saying that latency
> > > > > > > > > > > will
> > > > > > > > > > > improve significantly in the near future, that up/down latency is ~20ms
> > > > > > > > > > > and the
> > > > > > > > > > > additional delays pushing it to the 80ms range are 'stupid packet routing'
> > > > > > > > > > > problems that they are working on.
> > > > > > > > > > >
> > > > > > > > > > > If they are still in that level of optimization, it doesn't surprise me
> > > > > > > > > > > that
> > > > > > > > > > > they haven't really focused on the bufferbloat issue, they have more
> > > > > > > > > > > obvious
> > > > > > > > > > > stuff to fix first.
> > > > > > > > > > >
> > > > > > > > > > > David Lang
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >    On Fri, 16 Jul 2021, Wheelock, Ian wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Date: Fri, 16 Jul 2021 10:21:52 +0000
> > > > > > > > > > > > From: "Wheelock, Ian" <ian.wheelock@commscope.com>
> > > > > > > > > > > > To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
> > > > > > > > > > > > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > > > > > > > > > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > > > > > > > > > >
> > > > > > > > > > > > Hi David
> > > > > > > > > > > > In terms of the Latency that David (Reed) mentioned for California to
> > > > > > > > > > > Massachusetts of about 17ms over the public internet, it seems a bit faster
> > > > > > > > > > > than what I would expect. My own traceroute via my VDSL link shows 14ms
> > > > > > > > > > > just to get out of the operator network.
> > > > > > > > > > > >
> > > > > > > > > > > > https://www.wondernetwork.com  is a handy tool for checking geographic
> > > > > > > > > > > ping perf between cities, and it shows a min of about 66ms for pings
> > > > > > > > > > > between Boston and San Diego
> > > > > > > > > > > https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
> > > > > > > > > > > 1-way transfer).
> > > > > > > > > > > >
> > > > > > > > > > > > Distance-wise this is about 4,100 km (2,500 miles), and at 2/3 the speed of
> > > > > > > > > > > light (through a pure fibre link of that distance) the propagation time is just
> > > > > > > > > > > over 20ms. If the network equipment between Boston and San Diego is
> > > > > > > > > > > factored in, with some buffering along the way, 33ms does seem quite
> > > > > > > > > > > reasonable over the 20ms speed-of-light-in-fibre minimum for that 1-way transfer.
> > > > > > > > > > > >
> > > > > > > > > > > > -Ian Wheelock
> > > > > > > > > > > >
> > > > > > > > > > > > From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
> > > > > > > > > > > David Lang <david@lang.hm>
> > > > > > > > > > > > Date: Friday 9 July 2021 at 23:59
> > > > > > > > > > > > To: "David P. Reed" <dpreed@deepplum.com>
> > > > > > > > > > > > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > > > > > > > > > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > > > > > > > > > >
> > > > > > > > > > > > IIRC, the definition of 'low latency' for the FCC was something like
> > > > > > > > > > > 100ms, and
> > > > > > > > > > > > Musk was predicting <40ms.
> > > > > > > > > > > >
> > > > > > > > > > > > roughly competitive with landlines, and worlds better than geostationary
> > > > > > > > > > > > satellite (and many wireless ISPs)
> > > > > > > > > > > >
> > > > > > > > > > > > but when doing any serious testing of latency, you need to be wired to
> > > > > > > > > > > the
> > > > > > > > > > > > router, wifi introduces so much variability that it swamps the signal.
> > > > > > > > > > > >
> > > > > > > > > > > > David Lang
> > > > > > > > > > > >

[-- Attachment #2: Type: text/html, Size: 17174 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 21:40                           ` Mike Puchol
@ 2021-07-16 22:40                             ` Jeremy Austin
  2021-07-16 23:04                               ` Nathan Owens
  2021-07-17  1:12                             ` [Starlink] " David Lang
       [not found]                             ` <d86d6590b6f24dfa8f9775ed3bb3206c@DM6PR05MB5915.namprd05.prod.outlook.com>
  2 siblings, 1 reply; 37+ messages in thread
From: Jeremy Austin @ 2021-07-16 22:40 UTC (permalink / raw)
  To: Mike Puchol; +Cc: David Lang, David P. Reed, starlink

[-- Attachment #1: Type: text/plain, Size: 15607 bytes --]

I agree that RF is constrained in total capacity compared to optical
frequencies. At the risk of showing my ignorance by quoting from Wikipedia,
“Atmospheric and fog attenuation, which are exponential in nature, limit
practical range of FSO devices to several kilometres.”
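That exponential (Beer-Lambert) attenuation can be sketched to show why fog is the killer; the extinction coefficients below are rough order-of-magnitude assumptions, not a real link budget:

```python
import math

def fso_loss_db(extinction_per_km: float, path_km: float) -> float:
    """Attenuation in dB for a free-space optical path: I/I0 = exp(-sigma * L)."""
    transmitted_fraction = math.exp(-extinction_per_km * path_km)
    return -10 * math.log10(transmitted_fraction)

# Assumed clear air (~0.1/km) vs. dense fog (~100/km) over a 1 km horizontal path:
print(f"clear: {fso_loss_db(0.1, 1.0):.2f} dB, fog: {fso_loss_db(100.0, 1.0):.0f} dB")
```

With those assumed coefficients, fog adds hundreds of dB per kilometre, which is why gateways would chase low average cloud cover; a 550 km shell-to-ground path spends only its last few kilometres in dense atmosphere.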

Is there a safe and legal FSO mechanism that works over these distances
through atmosphere, shell-to-ground? And are the power requirements within
reach of a single, solar-powered Starlink-sized satellite?

Optics through a vacuum (or near vacuum) are a totally different story.
Inter-satellite links make perfect sense once the cost comes down.

Willing to be corrected but skeptical of the sky-to-ground link budget,

Jeremy Austin

On Fri, Jul 16, 2021 at 1:40 PM Mike Puchol <mike@starlink.sx> wrote:

> If we understand shell as “group of satellites at a certain altitude
> range”, there is not much point in linking between shells if you can link
> within one shell and orbital plane, and that plane has at least one
> satellite within range of a gateway. I could be proven wrong, but IMHO the
> first generation of links is meant for intra-plane use, and maybe, at a stretch,
> cross-plane to the next plane East or West.
>
> Eventually, the only way to go is optical links to the ground as well, as RF
> will only get you so far. At that stage, every shell will have its own
> optical links to the ground, with gateways placed in areas with little
> average cloud cover.
>
> Best,
>
> Mike
> On Jul 16, 2021, 23:30 +0200, David Lang <david@lang.hm>, wrote:
>
> at satellite distances, you need to adjust your vertical direction
> depending on
> how far away the satellite you are talking to is, even if it's at the same
> altitude
>
> the difference between shells that are only a few KM apart is less than the
> angles that you could need to satellites in the same shell further away.
>
> David Lang
>
> On Fri, 16 Jul 2021, Mike Puchol wrote:
>
> Date: Fri, 16 Jul 2021 22:57:14 +0200
> From: Mike Puchol <mike@starlink.sx>
> To: David Lang <david@lang.hm>
> Cc: Nathan Owens <nathan@nathan.io>,
> "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> David P. Reed <dpreed@deepplum.com>
> Subject: Re: [Starlink] Starlink and bufferbloat status?
>
> Correct. A mirror tracking head that turns around the perpendicular to the
> satellite path allows you to track satellites in the same plane, in front
> or behind, when they change altitude by a few kilometers as part of orbital
> adjustments or collision avoidance. To have a fully gimbaled head that can
> track any satellite in any direction (and at any relative velocity!) is a
> totally different problem. I could see satellites linked to the next
> longitudinal plane apart from those on the same plane, but cross-plane when
> one is ascending and the other descending is way harder. The next shells
> will be at lower altitudes, around 300-350km, and they have also stated
> they want to go for higher shells at 1000+ km.
>
> Best,
>
> Mike
> On Jul 16, 2021, 20:48 +0200, David Lang <david@lang.hm>, wrote:
>
> I expect the lasers to have 2d gimbles, which lets them track most things
> in
> their field of view. Remember that Starlink has compressed their orbital
> planes,
> they are going to be running almost everything in the 550km range
> (500-600km
> IIRC) and have almost entirely eliminated the ~1000km planes
>
> David Lang
>
> On Fri, 16 Jul 2021,
> Mike Puchol wrote:
>
> Date: Fri, 16 Jul 2021 19:42:55 +0200
> From: Mike Puchol <mike@starlink.sx>
> To: David Lang <david@lang.hm>
> Cc: Nathan Owens <nathan@nathan.io>,
> "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> David P. Reed <dpreed@deepplum.com>
> Subject: Re: [Starlink] Starlink and bufferbloat status?
>
> True, but we are then assuming that the optical links are a mesh between
> satellites in the same plane, plus between planes. From an engineering
> problem point of view, keeping optical links in-plane only makes the system
> extremely simpler (no full FOV gimbals with the optical train in them, for
> example), and it solves the issue, as it is highly likely that at least one
> satellite in any given plane will be within reach of a gateway.
>
> Routing to an arbitrary gateway may involve passing via intermediate
> gateways, ground segments, and even using terminals as a hopping point.
>
> Best,
>
> Mike
> On Jul 16, 2021, 19:38 +0200, David Lang <david@lang.hm>, wrote:
>
> the speed of light in a vaccum is significantly better than the speed of
> light
> in fiber, so if you are doing a cross country hop, terminal -> sat -> sat
> -> sat
> -> ground station (especially if the ground station is in the target
> datacenter)
> can be faster than terminal -> sat -> ground station -> cross-country
> fiber,
> even accounting for the longer distance at 550km altitude than at ground
> level.
>
> This has interesting implications for supplementing/replacing undersea
> cables as
> the sats over the ocean are not going to be heavily used, dedicated ground
> stations could be setup that use sats further offshore than normal (and are
> shielded from sats over land) to leverage the system without interfering
> significantly with more 'traditional' uses
>
> David Lang
>
> On Fri, 16 Jul 2021, Mike Puchol wrote:
>
> Date: Fri, 16 Jul 2021 19:31:37 +0200
> From: Mike Puchol <mike@starlink.sx>
> To: David Lang <david@lang.hm>, Nathan Owens <nathan@nathan.io>
> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> David P. Reed <dpreed@deepplum.com>
> Subject: Re: [Starlink] Starlink and bufferbloat status?
>
> Satellite optical links are useful to extend coverage to areas where you
> don’t have gateways - thus, they will introduce additional latency compared
> to two space segment hops (terminal to satellite -> satellite to gateway).
> If you have terminal to satellite, two optical hops, then final satellite
> to gateway, you will have more latency, not less.
>
> We are being “sold” optical links for what they are not IMHO.
>
> Best,
>
> Mike
> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>
> As there are more satellites, the up down time will get closer to 4-5ms
> rather then the ~7ms you list
>
>
> Possibly, if you do steering to always jump to the lowest latency
> satellite.
>
> with laser relays in orbit, and terminal to terminal routing in orbit,
> there is the potential for the theoretical minimum to tend lower
>
> Maybe for certain users really in the middle of nowhere, but I did the
> best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>
> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>
> hey, it's a good attitude to have :-)
>
> Elon tends to set 'impossible' goals, miss the timeline a bit, and come
> very
> close to the goal, if not exceed it.
>
> As there are more staellites, the up down time will get closer to 4-5ms
> rather
> then the ~7ms you list, and with laser relays in orbit, and terminal to
> terminal
> routing in orbit, there is the potential for the theoretical minimum to
> tend
> lower, giving some headroom for other overhead but still being in the 20ms
> range.
>
> David Lang
>
>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>
> Elon said "foolish packet routing" for things over 20ms! Which seems crazy
> if you do some basic math:
>
>    - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>    - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>    - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>    - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>    - Total one-way delay: 4.3 - 11.1ms
>    - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>
> This includes no transmission delay, queuing delay,
> processing/fragmentation/reassembly/etc, and no time-division multiplexing.
>
> On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>
> I think it depends on if you are looking at datacenter-to-datacenter
> latency of
> home to remote datacenter latency :-)
>
> my rule of thumb for cross US ping time has been 80-100ms latency (but
> it's been
> a few years since I tested it).
>
> I note that an article I saw today said that Elon is saying that latency
> will
> improve significantly in the near future, that up/down latency is ~20ms
> and the
> additional delays pushing it to the 80ms range are 'stupid packet routing'
> problems that they are working on.
>
> If they are still in that level of optimization, it doesn't surprise me
> that
> they haven't really focused on the bufferbloat issue, they have more
> obvious
> stuff to fix first.
>
> David Lang
>
>
>    On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>
> Date: Fri, 16 Jul 2021 10:21:52 +0000
> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] Starlink and bufferbloat status?
>
> Hi David
> In terms of the Latency that David (Reed) mentioned for California to
>
> Massachusetts of about 17ms over the public internet, it seems a bit faster
> than what I would expect. My own traceroute via my VDSL link shows 14ms
> just to get out of the operator network.
>
>
> https://www.wondernetwork.com  is a handy tool for checking geographic
>
> ping perf between cities, and it shows a min of about 66ms for pings
> between Boston and San Diego
> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
> 1-way transfer).
>
>
> Distance wise this is about 4,100 KM (2,500 M), and @2/3 speed of light
>
> (through a pure fibre link of that distance) the propagation time is just
> over 20ms. If the network equipment between the Boston and San Diego is
> factored in, with some buffering along the way, 33ms does seem quite
> reasonable over the 20ms for speed of light in fibre for that 1-way
> transfer
>
>
> -Ian Wheelock
>
> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>
> David Lang <david@lang.hm>
>
> Date: Friday 9 July 2021 at 23:59
> To: "David P. Reed" <dpreed@deepplum.com>
> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] Starlink and bufferbloat status?
>
> IIRC, the definition of 'low latency' for the FCC was something like
>
> 100ms, and Musk was predicting <40ms. roughly competitive with landlines,
> and worlds better than geostationary satellite (and many
>
> External (mailto:david@lang.hm)
>
>
> https://shared.outlook.inky.com/report?id=Y29tbXNjb3BlL2lhbi53aGVlbG9ja0Bjb21tc2NvcGUuY29tL2I1MzFjZDA4OTZmMWI0Yzc5NzdiOTIzNmY3MTAzM2MxLzE2MjU4NzE1NDkuNjU=#key=19e8545676e28e577c813de83a4cf1dc
>   https://www.inky.com/banner-faq/  https://www.inky.com
>
>
> IIRC, the definition of 'low latency' for the FCC was something like
>
> 100ms, and
>
> Musk was predicting <40ms.
>
> roughly competitive with landlines, and worlds better than geostationary
> satellite (and many wireless ISPs)
>
> but when doing any serious testing of latency, you need to be wired to
>
> the
>
> router, wifi introduces so much variability that it swamps the signal.
>
> David Lang
>
> On Fri, 9 Jul 2021, David P. Reed wrote:
>
> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
> From: David P. Reed <dpreed@deepplum.com>
> To: starlink@lists.bufferbloat.net
> Subject: [Starlink] Starlink and bufferbloat status?
>
>
> Early measurements of performance of Starlink have shown significant
>
> bufferbloat, as Dave Taht has shown.
>
>
> But...  Starlink is a moving target. The bufferbloat isn't a hardware
>
> issue, it should be completely manageable, starting by simple firmware
> changes inside the Starlink system itself. For example, implementing
> fq_codel so that bottleneck links just drop packets according to the Best
> Practices RFC,
>
>
> So I'm hoping this has improved since Dave's measurements. How much has
>
> it improved? What's the current maximum packet latency under full
> load,  Ive heard anecdotally that a friend of a friend gets 84 msec. *ping
> times under full load*, but he wasn't using flent or some other measurement
> tool of good quality that gives a true number.
>
>
> 84 msec is not great - it's marginal for Zoom quality experience (you
>
> want latencies significantly less than 100 msec. as a rule of thumb for
> teleconferencing quality). But it is better than Dave's measurements
> showed.
>
>
> Now Musk bragged that his network was "low latency" unlike other
> high-speed services, which means low end-to-end latency. That got him
> permission from the FCC to operate Starlink at all. His number was, I
> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he
> probably meant just the time from the ground station to the terminal
> through the satellite. But I regularly get 17 msec. between California and
> Massachusetts over the public Internet.)
>
>
> So 84 might be the current status. That would mean that someone at
> Starlink might be paying some attention, but it is a long way from what
> Musk implied.
>
>
>
> PS: I forget the number of the RFC, but the number of packets queued on
> an egress link should be chosen from the hardware bottleneck throughput
> of the path, combined with an underlying end-to-end Internet delay of
> about 10 msec. to account for hops between source and destination. Let's
> say Starlink allocates 50 Mb/sec to each customer and packets are limited
> to 12,000 bits (1500 * 8); then the outbound queues should be limited to
> about 0.01 * 50,000,000 / 12,000, which comes out to roughly 40 packets
> of buffering from each terminal, total, in the path from terminal to
> public Internet, assuming the connection to the public Internet is not a
> problem.
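That sizing rule is just a bandwidth-delay product. A quick check of the arithmetic with the figures given in the post (50 Mb/s per customer, ~10 ms path delay, 1500-byte packets, i.e. 1500 * 8 = 12,000 bits):

```python
# Back-of-the-envelope egress queue sizing: buffer roughly one
# bandwidth-delay product (BDP) at the bottleneck link.
def queue_limit_packets(rate_bps: float, delay_s: float, packet_bits: int) -> float:
    """Packets of buffering for ~one BDP at the given rate and delay."""
    return rate_bps * delay_s / packet_bits

# Figures assumed in the post: 50 Mb/s, 10 ms, 12,000-bit packets.
limit = queue_limit_packets(50_000_000, 0.010, 1500 * 8)
print(round(limit))  # 42
```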
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>
>
-- 
Jeremy Austin

Sr. Product Manager

*Preseem | Aterlo Networks*

jeremy@preseem.com
preseem.com

[-- Attachment #2: Type: text/html, Size: 29244 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 22:40                             ` Jeremy Austin
@ 2021-07-16 23:04                               ` Nathan Owens
  2021-07-17 10:02                                 ` [Starlink] Free Space Optics - was " Michiel Leenaars
  0 siblings, 1 reply; 37+ messages in thread
From: Nathan Owens @ 2021-07-16 23:04 UTC (permalink / raw)
  To: Jeremy Austin; +Cc: David P. Reed, Mike Puchol, starlink

[-- Attachment #1: Type: text/plain, Size: 16497 bytes --]

Check out the NASA TBIRD mission coming up: 2x 100GbE coherent optics and
an EDFA, beaming back to a 12-inch telescope on Earth — hopefully it works!
Basically all COTS components.

On Fri, Jul 16, 2021 at 3:40 PM Jeremy Austin <jeremy@aterlo.com> wrote:

> I agree that RF is constrained in total capacity compared to optical
> frequencies. At the risk of showing my ignorance by quoting from Wikipedia,
> “Atmospheric and fog attenuation, which are exponential in nature, limit
> practical range of FSO devices to several kilometres.”
>
> Is there a safe and legal FSO mechanism that works over these distances
> through atmosphere, shell-to-ground? And are the power requirements
> suitable for a single, solar-powered, Starlink-sized system?
>
> Optics through a vacuum (or near vacuum) are a totally different story.
> Inter-satellite links make perfect sense once the cost comes down.
>
> Willing to be corrected but skeptical of the sky-to-ground link budget,
>
> Jeremy Austin
>
> On Fri, Jul 16, 2021 at 1:40 PM Mike Puchol <mike@starlink.sx> wrote:
>
>> If we understand shell as “group of satellites at a certain altitude
>> range”, there is not much point in linking between shells if you can link
>> within one shell and orbital plane, and that plane has at least one
>> satellite within range of a gateway. I could be proven wrong, but IMHO the
>> first generation of links is meant for intra-plane use, and maybe at a
>> stretch cross-plane to the next plane East or West.
>>
>> The only way to go, eventually, is optical links to the ground too, as RF
>> will only get you so far. At that stage, every shell will have its own
>> optical links to the ground, with gateways placed in areas with little
>> average cloud cover.
>>
>> Best,
>>
>> Mike
>> On Jul 16, 2021, 23:30 +0200, David Lang <david@lang.hm>, wrote:
>>
>> at satellite distances, you need to adjust your vertical direction
>> depending on
>> how far away the satellite you are talking to is, even if it's at the same
>> altitude
>>
>> the difference between shells that are only a few km apart is less than
>> the angle changes you could need to reach satellites in the same shell
>> further away.
>>
>> David Lang
>>
>> On Fri, 16 Jul 2021, Mike Puchol wrote:
>>
>> Date: Fri, 16 Jul 2021 22:57:14 +0200
>> From: Mike Puchol <mike@starlink.sx>
>> To: David Lang <david@lang.hm>
>> Cc: Nathan Owens <nathan@nathan.io>,
>> "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
>> David P. Reed <dpreed@deepplum.com>
>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>
>> Correct. A mirror tracking head that turns around the perpendicular to
>> the satellite path allows you to track satellites in the same plane, in
>> front or behind, when they change altitude by a few kilometers as part of
>> orbital adjustments or collision avoidance. To have a fully gimbaled head
>> that can track any satellite in any direction (and at any relative
>> velocity!) is a totally different problem. I could see satellites linked to
>> the next longitudinal plane apart from those on the same plane, but
>> cross-plane when one is ascending and the other descending is way harder.
>> The next shells will be at lower altitudes, around 300-350km, and they have
>> also stated they want to go for higher shells at 1000+ km.
>>
>> Best,
>>
>> Mike
>> On Jul 16, 2021, 20:48 +0200, David Lang <david@lang.hm>, wrote:
>>
>> I expect the lasers to have 2d gimbals, which lets them track most things
>> in
>> their field of view. Remember that Starlink has compressed their orbital
>> planes,
>> they are going to be running almost everything in the 550km range
>> (500-600km
>> IIRC) and have almost entirely eliminated the ~1000km planes
>>
>> David Lang
>>
>> On Fri, 16 Jul 2021,
>> Mike Puchol wrote:
>>
>> Date: Fri, 16 Jul 2021 19:42:55 +0200
>> From: Mike Puchol <mike@starlink.sx>
>> To: David Lang <david@lang.hm>
>> Cc: Nathan Owens <nathan@nathan.io>,
>> "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
>> David P. Reed <dpreed@deepplum.com>
>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>
>> True, but we are then assuming that the optical links are a mesh between
>> satellites in the same plane, plus between planes. From an engineering
>> problem point of view, keeping optical links in-plane only makes the system
>> much simpler (no full FOV gimbals with the optical train in them, for
>> example), and it solves the issue, as it is highly likely that at least one
>> satellite in any given plane will be within reach of a gateway.
>>
>> Routing to an arbitrary gateway may involve passing via intermediate
>> gateways, ground segments, and even using terminals as a hopping point.
>>
>> Best,
>>
>> Mike
>> On Jul 16, 2021, 19:38 +0200, David Lang <david@lang.hm>, wrote:
>>
>> the speed of light in a vacuum is significantly better than the speed of
>> light
>> in fiber, so if you are doing a cross country hop, terminal -> sat -> sat
>> -> sat
>> -> ground station (especially if the ground station is in the target
>> datacenter)
>> can be faster than terminal -> sat -> ground station -> cross-country
>> fiber,
>> even accounting for the longer distance at 550km altitude than at ground
>> level.
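The claim above is easy to sanity-check with round numbers (a sketch: ~4,000 km ground path, fiber at ~2/3 c, and a crude 2 × 550 km penalty for the satellite detour; these figures are illustrative, not Starlink specs):

```python
# Rough one-way propagation delay, vacuum path via satellites vs. fiber,
# for a ~4,000 km cross-country hop. Illustrative figures only.
C = 299_792_458.0          # speed of light in vacuum, m/s
C_FIBER = C * 2 / 3        # typical speed of light in fiber

ground_km = 4_000          # terminal-to-destination distance
sat_detour_km = 2 * 550    # crude extra up/down distance via a 550 km shell

t_fiber_ms = ground_km * 1e3 / C_FIBER * 1e3
t_space_ms = (ground_km + sat_detour_km) * 1e3 / C * 1e3

print(f"fiber: {t_fiber_ms:.1f} ms, space: {t_space_ms:.1f} ms")
```

Even with the detour, the vacuum path comes out ahead, which is the point being made.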
>>
>> This has interesting implications for supplementing/replacing undersea
>> cables as
>> the sats over the ocean are not going to be heavily used, dedicated ground
>> stations could be set up that use sats further offshore than normal (and
>> are
>> shielded from sats over land) to leverage the system without interfering
>> significantly with more 'traditional' uses
>>
>> David Lang
>>
>> On Fri, 16 Jul 2021, Mike Puchol wrote:
>>
>> Date: Fri, 16 Jul 2021 19:31:37 +0200
>> From: Mike Puchol <mike@starlink.sx>
>> To: David Lang <david@lang.hm>, Nathan Owens <nathan@nathan.io>
>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
>> David P. Reed <dpreed@deepplum.com>
>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>
>> Satellite optical links are useful to extend coverage to areas where you
>> don’t have gateways - thus, they will introduce additional latency compared
>> to two space segment hops (terminal to satellite -> satellite to gateway).
>> If you have terminal to satellite, two optical hops, then final satellite
>> to gateway, you will have more latency, not less.
>>
>> We are being “sold” optical links for what they are not IMHO.
>>
>> Best,
>>
>> Mike
>> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>>
>> As there are more satellites, the up/down time will get closer to 4-5ms
>> rather than the ~7ms you list
>>
>>
>> Possibly, if you do steering to always jump to the lowest latency
>> satellite.
>>
>> with laser relays in orbit, and terminal to terminal routing in orbit,
>> there is the potential for the theoretical minimum to tend lower
>>
>> Maybe for certain users really in the middle of nowhere, but I did the
>> best-case math for "bent pipe" in the Seattle area, which is as good as it gets.
>>
>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>>
>> hey, it's a good attitude to have :-)
>>
>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come
>> very
>> close to the goal, if not exceed it.
>>
>> As there are more satellites, the up/down time will get closer to 4-5ms
>> rather
>> than the ~7ms you list, and with laser relays in orbit, and terminal to
>> terminal
>> routing in orbit, there is the potential for the theoretical minimum to
>> tend
>> lower, giving some headroom for other overhead but still being in the 20ms
>> range.
>>
>> David Lang
>>
>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>
>> Elon said "foolish packet routing" for things over 20ms! Which seems crazy
>> if you do some basic math:
>>
>>    - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>    - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>    - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>>    - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>>    - Total one-way delay: 4.3 - 11.1ms
>>    - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>>
>> This includes no transmission delay, queuing delay,
>> processing/fragmentation/reassembly/etc, and no time-division
>> multiplexing.
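The leg-by-leg arithmetic in the list above can be re-added mechanically (figures copied from the bullets):

```python
# Reproduce the one-way delay bounds from the bullet list above.
# Each entry is (min_ms, max_ms) for one leg of the path.
legs_ms = [
    (1.9, 3.3),   # sat to user terminal, 550-950 km, air/vacuum
    (1.9, 3.3),   # sat to gateway, 550-950 km, air/vacuum
    (0.25, 4.0),  # gateway to PoP, 50-800 km, fiber
    (0.25, 0.5),  # PoP to internet, ~50 km, fiber
]
one_way_min = sum(lo for lo, hi in legs_ms)           # 4.3 ms
one_way_max = sum(hi for lo, hi in legs_ms)           # 11.1 ms
rtt_min, rtt_max = 2 * one_way_min, 2 * one_way_max   # 8.6 - 22.2 ms
print(round(rtt_min, 1), round(rtt_max, 1))
```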
>>
>> On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>>
>> I think it depends on if you are looking at datacenter-to-datacenter
>> latency or
>> home to remote datacenter latency :-)
>>
>> my rule of thumb for cross US ping time has been 80-100ms latency (but
>> it's been
>> a few years since I tested it).
>>
>> I note that an article I saw today said that Elon is saying that latency
>> will
>> improve significantly in the near future, that up/down latency is ~20ms
>> and the
>> additional delays pushing it to the 80ms range are 'stupid packet routing'
>> problems that they are working on.
>>
>> If they are still in that level of optimization, it doesn't surprise me
>> that
>> they haven't really focused on the bufferbloat issue, they have more
>> obvious
>> stuff to fix first.
>>
>> David Lang
>>
>>
>>    On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>
>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>
>> Hi David
>> In terms of the latency that David (Reed) mentioned for California to
>>
>> Massachusetts of about 17ms over the public internet, it seems a bit
>> faster
>> than what I would expect. My own traceroute via my VDSL link shows 14ms
>> just to get out of the operator network.
>>
>>
>> https://www.wondernetwork.com is a handy tool for checking geographic
>>
>> ping perf between cities, and it shows a min of about 66ms for pings
>> between Boston and San Diego
>> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
>> 1-way transfer).
>>
>>
>> Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of light
>>
>> (through a pure fibre link of that distance) the propagation time is just
>> over 20ms. If the network equipment between Boston and San Diego is
>> factored in, with some buffering along the way, 33ms does seem quite
>> reasonable over the 20ms for speed of light in fibre for that 1-way
>> transfer
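That 20ms figure checks out directly (fiber carries light at roughly 2/3 of its vacuum speed; distance from the paragraph above):

```python
# One-way propagation time over ~4,100 km of fiber at ~2/3 c.
C = 299_792_458.0      # m/s, speed of light in vacuum
v_fiber = C * 2 / 3    # ~200,000 km/s in fiber

distance_m = 4_100e3   # Boston - San Diego, roughly
t_ms = distance_m / v_fiber * 1e3
print(f"{t_ms:.1f} ms one-way")  # just over 20 ms
```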
>>
>>
>> -Ian Wheelock
>>
>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>

[-- Attachment #2: Type: text/html, Size: 31951 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 21:40                           ` Mike Puchol
  2021-07-16 22:40                             ` Jeremy Austin
@ 2021-07-17  1:12                             ` David Lang
       [not found]                             ` <d86d6590b6f24dfa8f9775ed3bb3206c@DM6PR05MB5915.namprd05.prod.outlook.com>
  2 siblings, 0 replies; 37+ messages in thread
From: David Lang @ 2021-07-17  1:12 UTC (permalink / raw)
  To: Mike Puchol; +Cc: David Lang, Nathan Owens, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 17924 bytes --]

If you limit your ground stations to areas with little cloud cover, you then 
have to use the regular Internet to get to the servers, which will add far 
more latency.

Starlink is based on the idea of lots of cheap devices at close range rather 
than a few expensive devices that (because they are few) are at longer range.

satellites in one shell can be far further apart than the distance from one 
shell to another, and each shell is a sphere, so the angle from one point on 
the sphere to another is going to vary in both dimensions (remember, the 
satellites will do collision avoidance), so you cannot count on them being at 
the perfect angle in one dimension.

If you need to talk from Antarctica to a system in an AWS datacenter 
in Virginia, your first hop is going to be a satellite in one of the polar 
shells, from there it can send the signal via laser to a satellite at a 
different altitude in one of the main shells (which may need to send it to 
another satellite, which may be at a different altitude/different shell ...) 
which then can send the signal down to a ground station on the roof of the AWS 
datacenter.

If they do have shells at higher altitudes, those satellites can send the 
signal further without having the laser go through air, so a hop from a 
low-altitude shell to a higher-altitude shell can save several hops through 
the low-altitude shells (more significant as load goes up).

the Wikipedia page lists the details of the different shells that are planned:

https://en.wikipedia.org/wiki/Starlink

a shell is all at roughly the same altitude and inclination

they have populated the first shell and started on a second shell; in phase 2 
they are scheduled to start populating shells at lower altitudes (lower 
latency, better handling of dense areas, but the satellites won't last as 
long and won't have as long a horizon, so more hops would be needed).
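The "horizon" point is simple geometry: the maximum slant range from a satellite to the ground shrinks with altitude. A sketch (altitudes taken from the shell figures mentioned in the thread; refraction and terrain ignored):

```python
# Maximum slant range to the horizon for a satellite at altitude h:
#   d = sqrt((R + h)^2 - R^2), with R the Earth's radius.
import math

R_EARTH_KM = 6371.0

def horizon_km(alt_km: float) -> float:
    return math.sqrt((R_EARTH_KM + alt_km) ** 2 - R_EARTH_KM ** 2)

# Lower shells see less ground, so more hops are needed to reach a gateway.
for h in (350, 550, 1100):
    print(h, round(horizon_km(h)))
```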

David Lang


On Fri, 16 Jul 2021, Mike Puchol wrote:

> If we understand shell as “group of satellites at a certain altitude range”, there is not much point in linking between shells if you can link within one shell and orbital plane, and that plane has at least one satellite within range of a gateway. I could be proven wrong, but IMHO the first generation of links is meant for intra-plane use, and maybe at a stretch cross-plane to the next plane East or West.
>
> The only way to eventually go is optical links to the ground too, as RF will only get you so far. At that stage, every shell will have its own optical links to the ground, with gateways placed in areas with little average cloud cover.
>
> Best,
>
> Mike
> On Jul 16, 2021, 23:30 +0200, David Lang <david@lang.hm>, wrote:
>> at satellite distances, you need to adjust your vertical direction depending on
>> how far away the satellite you are talking to is, even if it's at the same
>> altitude
>>
>> the difference between shells that are only a few KM apart is less than the
>> angles that you could need to satellites in the same shell further away.
>>
>> David Lang
>>
>> On Fri, 16 Jul 2021, Mike Puchol wrote:
>>
>>> Date: Fri, 16 Jul 2021 22:57:14 +0200
>>> From: Mike Puchol <mike@starlink.sx>
>>> To: David Lang <david@lang.hm>
>>> Cc: Nathan Owens <nathan@nathan.io>,
>>> "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
>>> David P. Reed <dpreed@deepplum.com>
>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>
>>> Correct. A mirror tracking head that turns around the perpendicular to the satellite path allows you to track satellites in the same plane, in front or behind, when they change altitude by a few kilometers as part of orbital adjustments or collision avoidance. To have a fully gimbaled head that can track any satellite in any direction (and at any relative velocity!) is a totally different problem. I could see satellites linked to the next longitudinal plane apart from those on the same plane, but cross-plane when one is ascending and the other descending is way harder. The next shells will be at lower altitudes, around 300-350km, and they have also stated they want to go for higher shells at 1000+ km.
>>>
>>> Best,
>>>
>>> Mike
>>> On Jul 16, 2021, 20:48 +0200, David Lang <david@lang.hm>, wrote:
>>>> I expect the lasers to have 2d gimbles, which lets them track most things in
>>>> their field of view. Remember that Starlink has compressed their orbital planes,
>>>> they are going to be running almost everything in the 550km range (500-600km
>>>> IIRC) and have almost entirely eliminated the ~1000km planes
>>>>
>>>> David Lang
>>>>
>>>> On Fri, 16 Jul 2021,
>>>> Mike Puchol wrote:
>>>>
>>>>> Date: Fri, 16 Jul 2021 19:42:55 +0200
>>>>> From: Mike Puchol <mike@starlink.sx>
>>>>> To: David Lang <david@lang.hm>
>>>>> Cc: Nathan Owens <nathan@nathan.io>,
>>>>> "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
>>>>> David P. Reed <dpreed@deepplum.com>
>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>>>
>>>>> True, but we are then assuming that the optical links are a mesh between satellites in the same plane, plus between planes. From an engineering problem point of view, keeping optical links in-plane only makes the system extremely simpler (no full FOV gimbals with the optical train in them, for example), and it solves the issue, as it is highly likely that at least one satellite in any given plane will be within reach of a gateway.
>>>>>
>>>>> Routing to an arbitrary gateway may involve passing via intermediate gateways, ground segments, and even using terminals as a hopping point.
>>>>>
>>>>> Best,
>>>>>
>>>>> Mike
>>>>> On Jul 16, 2021, 19:38 +0200, David Lang <david@lang.hm>, wrote:
>>>>>> the speed of light in a vaccum is significantly better than the speed of light
>>>>>> in fiber, so if you are doing a cross country hop, terminal -> sat -> sat -> sat
>>>>>> -> ground station (especially if the ground station is in the target datacenter)
>>>>>> can be faster than terminal -> sat -> ground station -> cross-country fiber,
>>>>>> even accounting for the longer distance at 550km altitude than at ground level.
>>>>>>
>>>>>> This has interesting implications for supplementing/replacing undersea cables as
>>>>>> the sats over the ocean are not going to be heavily used, dedicated ground
>>>>>> stations could be setup that use sats further offshore than normal (and are
>>>>>> shielded from sats over land) to leverage the system without interfering
>>>>>> significantly with more 'traditional' uses
>>>>>>
>>>>>> David Lang
>>>>>>
>>>>>> On Fri, 16 Jul 2021, Mike Puchol wrote:
>>>>>>
>>>>>>> Date: Fri, 16 Jul 2021 19:31:37 +0200
>>>>>>> From: Mike Puchol <mike@starlink.sx>
>>>>>>> To: David Lang <david@lang.hm>, Nathan Owens <nathan@nathan.io>
>>>>>>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
>>>>>>> David P. Reed <dpreed@deepplum.com>
>>>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>>>>>
>>>>>>> Satellite optical links are useful to extend coverage to areas where you don’t have gateways - thus, they will introduce additional latency compared to two space segment hops (terminal to satellite -> satellite to gateway). If you have terminal to satellite, two optical hops, then final satellite to gateway, you will have more latency, not less.
>>>>>>>
>>>>>>> We are being “sold” optical links for what they are not IMHO.
>>>>>>>
>>>>>>> Best,
>>>>>>>
>>>>>>> Mike
>>>>>>> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>>>>>>>>> As there are more satellites, the up down time will get closer to 4-5ms rather then the ~7ms you list
>>>>>>>>
>>>>>>>> Possibly, if you do steering to always jump to the lowest latency satellite.
>>>>>>>>
>>>>>>>>> with laser relays in orbit, and terminal to terminal routing in orbit, there is the potential for the theoretical minimum to tend lower
>>>>>>>> Maybe for certain users really in the middle of nowhere, but I did the best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>>>>>>>>
>>>>>>>>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>>>>>>>>>> hey, it's a good attitude to have :-)
>>>>>>>>>>
>>>>>>>>>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come very
>>>>>>>>>> close to the goal, if not exceed it.
>>>>>>>>>>
>>>>>>>>>> As there are more staellites, the up down time will get closer to 4-5ms rather
>>>>>>>>>> then the ~7ms you list, and with laser relays in orbit, and terminal to terminal
>>>>>>>>>> routing in orbit, there is the potential for the theoretical minimum to tend
>>>>>>>>>> lower, giving some headroom for other overhead but still being in the 20ms
>>>>>>>>>> range.
>>>>>>>>>>
>>>>>>>>>> David Lang
>>>>>>>>>>
>>>>>>>>>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>>>>>>>>>
>>>>>>>>>>> Elon said "foolish packet routing" for things over 20ms! Which seems crazy
>>>>>>>>>>> if you do some basic math:
>>>>>>>>>>>
>>>>>>>>>>>    - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>>>>>>>>>    - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>>>>>>>>>    - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>>>>>>>>>>>    - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>>>>>>>>>>>    - Total one-way delay: 4.3 - 11.1ms
>>>>>>>>>>>    - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>>>>>>>>>>>
>>>>>>>>>>> This includes no transmission delay, queuing delay,
>>>>>>>>>>> processing/fragmentation/reassembly/etc, and no time-division multiplexing.
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I think it depends on if you are looking at datacenter-to-datacenter
>>>>>>>>>>>> latency of
>>>>>>>>>>>> home to remote datacenter latency :-)
>>>>>>>>>>>>
>>>>>>>>>>>> my rule of thumb for cross US ping time has been 80-100ms latency (but
>>>>>>>>>>>> it's been
>>>>>>>>>>>> a few years since I tested it).
>>>>>>>>>>>>
>>>>>>>>>>>> I note that an article I saw today said that Elon is saying that latency
>>>>>>>>>>>> will
>>>>>>>>>>>> improve significantly in the near future, that up/down latency is ~20ms
>>>>>>>>>>>> and the
>>>>>>>>>>>> additional delays pushing it to the 80ms range are 'stupid packet routing'
>>>>>>>>>>>> problems that they are working on.
>>>>>>>>>>>>
>>>>>>>>>>>> If they are still in that level of optimization, it doesn't surprise me
>>>>>>>>>>>> that
>>>>>>>>>>>> they haven't really focused on the bufferbloat issue, they have more
>>>>>>>>>>>> obvious
>>>>>>>>>>>> stuff to fix first.
>>>>>>>>>>>>
>>>>>>>>>>>> David Lang
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>    On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>>>>>>>>>>>>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>>>>>>>>>>>>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>>>>>>>>>>>>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>>>>>>>>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hi David
>>>>>>>>>>>>> In terms of the Latency that David (Reed) mentioned for California to
>>>>>>>>>>>> Massachusetts of about 17ms over the public internet, it seems a bit faster
>>>>>>>>>>>> than what I would expect. My own traceroute via my VDSL link shows 14ms
>>>>>>>>>>>> just to get out of the operator network.
>>>>>>>>>>>>>
>>>>>>>>>>>>> https://www.wondernetwork.com  is a handy tool for checking geographic
>>>>>>>>>>>> ping perf between cities, and it shows a min of about 66ms for pings
>>>>>>>>>>>> between Boston and San Diego
>>>>>>>>>>>> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
>>>>>>>>>>>> 1-way transfer).
>>>>>>>>>>>>>
>>>>>>>>>>>>> Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of light
>>>>>>>>>>>> (through a pure fibre link of that distance) the propagation time is just
>>>>>>>>>>>> over 20ms. If the network equipment between Boston and San Diego is
>>>>>>>>>>>> factored in, with some buffering along the way, 33ms does seem quite
>>>>>>>>>>>> reasonable over the 20ms for speed of light in fibre for that 1-way transfer.
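The propagation arithmetic above can be sketched directly; the 4,100 km distance and the 2/3-of-c fibre velocity factor are the figures stated in the message, nothing Starlink-specific is assumed:

```python
# One-way propagation delay through fibre, using the figures quoted above.
C_VACUUM_KM_S = 299_792.458    # speed of light in vacuum, km/s
FIBER_VELOCITY_FACTOR = 2 / 3  # typical velocity factor for silica fibre

def fiber_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds through fibre."""
    return distance_km / (C_VACUUM_KM_S * FIBER_VELOCITY_FACTOR) * 1000.0

boston_san_diego_ms = fiber_delay_ms(4100)  # ~20.5 ms, i.e. "just over 20ms"
```

The observed ~33ms one-way figure then leaves roughly 13ms for queuing and equipment delay along the path.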
>>>>>>>>>>>>>
>>>>>>>>>>>>> -Ian Wheelock
>>>>>>>>>>>>>
>>>>>>>>>>>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>>>>>>>>>>>> David Lang <david@lang.hm>
>>>>>>>>>>>>> Date: Friday 9 July 2021 at 23:59
>>>>>>>>>>>>> To: "David P. Reed" <dpreed@deepplum.com>
>>>>>>>>>>>>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>>>>>>>>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> IIRC, the definition of 'low latency' for the FCC was something like
>>>>>>>>>>>> 100ms, and
>>>>>>>>>>>>> Musk was predicting <40ms.
>>>>>>>>>>>>>
>>>>>>>>>>>>> roughly competitive with landlines, and worlds better than geostationary
>>>>>>>>>>>>> satellite (and many wireless ISPs)
>>>>>>>>>>>>>
>>>>>>>>>>>>> but when doing any serious testing of latency, you need to be wired to
>>>>>>>>>>>> the
>>>>>>>>>>>>> router, wifi introduces so much variability that it swamps the signal.
>>>>>>>>>>>>>
>>>>>>>>>>>>> David Lang
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, 9 Jul 2021, David P. Reed wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>>>>>>>>>>>>>> From: David P. Reed <dpreed@deepplum.com>
>>>>>>>>>>>>>> To: starlink@lists.bufferbloat.net
>>>>>>>>>>>>>> Subject: [Starlink] Starlink and bufferbloat status?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Early measurements of performance of Starlink have shown significant
>>>>>>>>>>>> bufferbloat, as Dave Taht has shown.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> But...  Starlink is a moving target. The bufferbloat isn't a hardware
>>>>>>>>>>>> issue; it should be completely manageable, starting with simple firmware
>>>>>>>>>>>> changes inside the Starlink system itself. For example, implementing
>>>>>>>>>>>> fq_codel so that bottleneck links just drop packets according to the Best
>>>>>>>>>>>> Practices RFC.
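On Linux, the change being proposed amounts to something like the following tc configuration. The interface name and the 50 Mb/s shaping rate are illustrative placeholders (Starlink's firmware internals are not public), so treat this as a sketch of the mechanism, not their implementation:

```shell
# Sketch: shape a bottleneck link to a known rate, then let fq_codel manage
# the queue (per-flow fair queuing plus CoDel drop/marking). "eth0" and
# "50mbit" are placeholders, not Starlink parameters.
tc qdisc replace dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 50mbit
tc qdisc add dev eth0 parent 1:10 fq_codel
```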
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So I'm hoping this has improved since Dave's measurements. How much has
>>>>>>>>>>>> it improved? What's the current maximum packet latency under full
>>>>>>>>>>>> load? I've heard anecdotally that a friend of a friend gets 84 msec. *ping
>>>>>>>>>>>> times under full load*, but he wasn't using flent or some other measurement
>>>>>>>>>>>> tool of good quality that gives a true number.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 84 msec is not great - it's marginal for Zoom quality experience (you
>>>>>>>>>>>> want latencies significantly less than 100 msec. as a rule of thumb for
>>>>>>>>>>>> teleconferencing quality). But it is better than Dave's measurements showed.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Now Musk bragged that his network was "low latency" unlike other high
>>>>>>>>>>>> speed services, which means low end-to-end latency.  That got him
>>>>>>>>>>>> permission from the FCC to operate Starlink at all. His number was, I
>>>>>>>>>>>> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he
>>>>>>>>>>>> probably meant just the time from the ground station to the terminal
>>>>>>>>>>>> through the satellite. But I regularly get 17 msec. between California and
>>>>>>>>>>>> Massachusetts over the public Internet)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So 84 might be the current status. That would mean that someone at
>>>>>>>>>>>> Starlink might be paying some attention, but it is a long way from what
>>>>>>>>>>>> Musk implied.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> PS: I forget the number of the RFC, but the number of packets queued on
>>>>>>>>>>>> an egress link should be chosen by taking the hardware bottleneck
>>>>>>>>>>>> throughput of any path, combined with an end-to-end Internet underlying
>>>>>>>>>>>> delay of about 10 msec. to account for hops between source and destination.
>>>>>>>>>>>> Let's say Starlink allocates 50 Mb/sec to each customer and packets are limited
>>>>>>>>>>>> to 12,000 bits (1500 * 8); then the outbound queues should be limited to
>>>>>>>>>>>> about 0.01 * 50,000,000 / 12,000, which comes out to about 40 packets of
>>>>>>>>>>>> buffering, total, from each terminal in the path from terminal to public
>>>>>>>>>>>> Internet, assuming the connection to the public Internet is not a problem.
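As a sketch, the rule of thumb in the PS works out like this, using the message's stated 50 Mb/s rate, 10 msec delay budget, and full-size 1500-byte packets (note that 1500 * 8 is 12,000 bits):

```python
# Buffer roughly one bandwidth-delay product at the bottleneck link.
def queue_limit_packets(rate_bps: float, delay_s: float, packet_bits: int) -> int:
    """Packets of buffering equal to one bandwidth-delay product."""
    return round(rate_bps * delay_s / packet_bits)

# 50 Mb/s per customer, ~10 ms end-to-end budget, 1500-byte (12,000-bit) packets
limit = queue_limit_packets(50_000_000, 0.01, 1500 * 8)  # ~42 packets
```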
>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>> Starlink mailing list
>>>>>>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>>>>>>>
>>>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>>>>>
>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Starlink mailing list
>>>>>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Starlink mailing list
>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* [Starlink] Free Space Optics - was Starlink and bufferbloat status?
  2021-07-16 23:04                               ` Nathan Owens
@ 2021-07-17 10:02                                 ` Michiel Leenaars
  0 siblings, 0 replies; 37+ messages in thread
From: Michiel Leenaars @ 2021-07-17 10:02 UTC (permalink / raw)
  To: starlink

[-- Attachment #1: Type: text/plain, Size: 604 bytes --]

Hi Mike,

>> Is there a safe and legal FSO mechanism that works over these distances
>> through atmosphere shell-to-ground? And the power requirements suitable for
>> a StarLink-sized single, solar-powered system?

I don't know if you are aware of the Koruza open hardware project:

http://www.koruza.net/specs/
http://www.koruza.net/scientific/

I'm not sure that is what you are looking for, but it might be a good
starting point. I can connect you to Luka Mustafa, who is the lead of
this project - we have provided some grants to the project in its early
days.

Best,
Michiel Leenaars
NLnet foundation


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 228 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
       [not found]                             ` <d86d6590b6f24dfa8f9775ed3bb3206c@DM6PR05MB5915.namprd05.prod.outlook.com>
@ 2021-07-17 15:55                               ` Fabian E. Bustamante
  0 siblings, 0 replies; 37+ messages in thread
From: Fabian E. Bustamante @ 2021-07-17 15:55 UTC (permalink / raw)
  To: Mike Puchol, David Lang; +Cc: starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 23187 bytes --]

Maybe already mentioned, but just in case: there are two HotNets papers, from 2019 and 2020, looking at the idea of bent-pipe connectivity (up and down from satellite to ground) as an alternative (third?) solution. The 2019 paper by Handley suggests that with a dense-enough deployment of GTs you could achieve latencies comparable to constellations with ISLs. The 2020 paper, with additional detail in its analysis, shows this would come with higher, more variable latencies due to obvious things like weather, the need for GTs in unfriendly locations, etc.

fabian

---
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fabian E. Bustamante, Professor
Department of Computer Science | Northwestern U.
Lead Scientist @ PhenixRTS
http://www.aqualab.cs.northwestern.edu
twitter: @bustamantefe
On Jul 16, 2021, 8:13 PM -0500, David Lang <david@lang.hm>, wrote:
> If you limit your ground stations to areas with little cloud cover, you then
> have to use regular Internet to get to the servers, which will add far more
> latency.
>
> Starlink is based on the idea of lots of cheap devices at close range rather
> than a few expensive devices that (because they are few) are at longer range
>
> satellites in one shell can be far further apart than the distance from one shell
> to another, and each shell is a sphere, so the angle from one point on the
> sphere to another is going to vary in both dimensions (remember, the satellites
> will do collision avoidance) so you cannot count on them being at the perfect
> angle in one dimension.
>
> If you need to talk from Antarctica to a system in an AWS datacenter
> in Virginia, your first hop is going to be a satellite in one of the polar
> shells, from there it can send the signal via laser to a satellite at a
> different altitude in one of the main shells (which may need to send it to
> another satellite, which may be at a different altitude/different shell ...)
> which then can send the signal down to a ground station on the roof of the AWS
> datacenter.
>
> If they do have shells at higher altitudes, the satellites at higher altitudes
> can send the signal further without having the laser go through air, so a hop
> from a low-altitude shell to a higher-altitude shell can save several hops
> through the low altitude shells (more significant as load goes up)
>
> the wikipedia page lists the details of the different shells that are planned
>
> https://en.wikipedia.org/wiki/Starlink
>
> a shell is all at roughly the same altitude and inclination
>
> they have populated the first shell, and started on a second shell, in phase 2
> they are scheduled to start populating shells at lower altitudes (lower latency,
> better handling of dense areas, but the satellites won't last as long and won't
> have as long a horizon so more hops would be needed)
>
> David Lang
>
>
> On Fri, 16 Jul 2021, Mike Puchol wrote:
>
> > If we understand a shell as “a group of satellites at a certain altitude range”, there is not much point in linking between shells if you can link within one shell and orbital plane, and that plane has at least one satellite within range of a gateway. I could be proven wrong, but IMHO the first generation of links is meant for intra-plane use, and maybe, at a stretch, cross-plane to the next plane east or west.
> >
> > The only way to eventually go is optical links to the ground too, as RF will only get you so far. At that stage, every shell will have its own optical links to the ground, with gateways placed in areas with little average cloud cover.
> >
> > Best,
> >
> > Mike
> > On Jul 16, 2021, 23:30 +0200, David Lang <david@lang.hm>, wrote:
> > > at satellite distances, you need to adjust your vertical direction depending on
> > > how far away the satellite you are talking to is, even if it's at the same
> > > altitude
> > >
> > > the difference between shells that are only a few km apart is smaller than the
> > > angles you could need to reach satellites in the same shell that are further away.
> > >
> > > David Lang
> > >
> > > On Fri, 16 Jul 2021, Mike Puchol wrote:
> > >
> > > > Date: Fri, 16 Jul 2021 22:57:14 +0200
> > > > From: Mike Puchol <mike@starlink.sx>
> > > > To: David Lang <david@lang.hm>
> > > > Cc: Nathan Owens <nathan@nathan.io>,
> > > > "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> > > > David P. Reed <dpreed@deepplum.com>
> > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > >
> > > > Correct. A mirror tracking head that turns around the perpendicular to the satellite path allows you to track satellites in the same plane, in front or behind, when they change altitude by a few kilometers as part of orbital adjustments or collision avoidance. To have a fully gimbaled head that can track any satellite in any direction (and at any relative velocity!) is a totally different problem. I could see satellites linked to the next longitudinal plane apart from those on the same plane, but cross-plane when one is ascending and the other descending is way harder. The next shells will be at lower altitudes, around 300-350km, and they have also stated they want to go for higher shells at 1000+ km.
> > > >
> > > > Best,
> > > >
> > > > Mike
> > > > On Jul 16, 2021, 20:48 +0200, David Lang <david@lang.hm>, wrote:
> > > > > I expect the lasers to have 2D gimbals, which lets them track most things in
> > > > > their field of view. Remember that Starlink has compressed their orbital planes,
> > > > > they are going to be running almost everything in the 550km range (500-600km
> > > > > IIRC) and have almost entirely eliminated the ~1000km planes
> > > > >
> > > > > David Lang
> > > > >
> > > > > On Fri, 16 Jul 2021,
> > > > > Mike Puchol wrote:
> > > > >
> > > > > > Date: Fri, 16 Jul 2021 19:42:55 +0200
> > > > > > From: Mike Puchol <mike@starlink.sx>
> > > > > > To: David Lang <david@lang.hm>
> > > > > > Cc: Nathan Owens <nathan@nathan.io>,
> > > > > > "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> > > > > > David P. Reed <dpreed@deepplum.com>
> > > > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > > > >
> > > > > > True, but we are then assuming that the optical links are a mesh between satellites in the same plane, plus between planes. From an engineering point of view, keeping optical links in-plane only makes the system dramatically simpler (no full-FOV gimbals with the optical train in them, for example), and it solves the issue, as it is highly likely that at least one satellite in any given plane will be within reach of a gateway.
> > > > > >
> > > > > > Routing to an arbitrary gateway may involve passing via intermediate gateways, ground segments, and even using terminals as a hopping point.
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > > Mike
> > > > > > On Jul 16, 2021, 19:38 +0200, David Lang <david@lang.hm>, wrote:
> > > > > > > the speed of light in a vacuum is significantly better than the speed of light
> > > > > > > in fiber, so if you are doing a cross country hop, terminal -> sat -> sat -> sat
> > > > > > > -> ground station (especially if the ground station is in the target datacenter)
> > > > > > > can be faster than terminal -> sat -> ground station -> cross-country fiber,
> > > > > > > even accounting for the longer distance at 550km altitude than at ground level.
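The claim can be sanity-checked with rough numbers; the 4,000 km ground run and the 10% path penalty for the inter-satellite hops below are illustrative assumptions, not Starlink figures:

```python
# Compare a cross-country fibre path against a terminal->sat->...->ground path.
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_ms(distance_km: float, velocity_factor: float) -> float:
    """One-way propagation delay in ms at the given fraction of c."""
    return distance_km / (C_KM_S * velocity_factor) * 1000.0

GROUND_FIBER_KM = 4000.0                           # assumed coast-to-coast fibre run
SPACE_PATH_KM = 550 + GROUND_FIBER_KM * 1.1 + 550  # up, laser hops (~10% longer), down

fiber_ms = one_way_ms(GROUND_FIBER_KM, 2 / 3)  # ~20 ms in glass
space_ms = one_way_ms(SPACE_PATH_KM, 1.0)      # ~18 ms in vacuum, despite the extra km
```

Under these assumptions the vacuum path wins even though it is ~1,100 km longer, which is the point being made above.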
> > > > > > >
> > > > > > > This has interesting implications for supplementing/replacing undersea cables as
> > > > > > > the sats over the ocean are not going to be heavily used, dedicated ground
> > > > > > > stations could be setup that use sats further offshore than normal (and are
> > > > > > > shielded from sats over land) to leverage the system without interfering
> > > > > > > significantly with more 'traditional' uses
> > > > > > >
> > > > > > > David Lang
> > > > > > >
> > > > > > > On Fri, 16 Jul 2021, Mike Puchol wrote:
> > > > > > >
> > > > > > > > Date: Fri, 16 Jul 2021 19:31:37 +0200
> > > > > > > > From: Mike Puchol <mike@starlink.sx>
> > > > > > > > To: David Lang <david@lang.hm>, Nathan Owens <nathan@nathan.io>
> > > > > > > > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>,
> > > > > > > > David P. Reed <dpreed@deepplum.com>
> > > > > > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > > > > > >
> > > > > > > > Satellite optical links are useful to extend coverage to areas where you don’t have gateways - thus, they will introduce additional latency compared to two space segment hops (terminal to satellite -> satellite to gateway). If you have terminal to satellite, two optical hops, then final satellite to gateway, you will have more latency, not less.
> > > > > > > >
> > > > > > > > We are being “sold” optical links for what they are not IMHO.
> > > > > > > >
> > > > > > > > Best,
> > > > > > > >
> > > > > > > > Mike
> > > > > > > > On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
> > > > > > > > > > As there are more satellites, the up/down time will get closer to 4-5ms rather than the ~7ms you list
> > > > > > > > >
> > > > > > > > > Possibly, if you do steering to always jump to the lowest latency satellite.
> > > > > > > > >
> > > > > > > > > > with laser relays in orbit, and terminal to terminal routing in orbit, there is the potential for the theoretical minimum to tend lower
> > > > > > > > > Maybe for certain users really in the middle of nowhere, but I did the best-case math for "bent pipe" in Seattle area, which is as good as it gets.
> > > > > > > > >
> > > > > > > > > > On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
> > > > > > > > > > > hey, it's a good attitude to have :-)
> > > > > > > > > > >
> > > > > > > > > > > Elon tends to set 'impossible' goals, miss the timeline a bit, and come very
> > > > > > > > > > > close to the goal, if not exceed it.
> > > > > > > > > > >
> > > > > > > > > > > As there are more satellites, the up/down time will get closer to 4-5ms rather
> > > > > > > > > > > than the ~7ms you list, and with laser relays in orbit, and terminal to terminal
> > > > > > > > > > > routing in orbit, there is the potential for the theoretical minimum to tend
> > > > > > > > > > > lower, giving some headroom for other overhead but still being in the 20ms
> > > > > > > > > > > range.
> > > > > > > > > > >
> > > > > > > > > > > David Lang
> > > > > > > > > > >
> > > > > > > > > > >   On Fri, 16 Jul 2021, Nathan Owens wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Elon said "foolish packet routing" for things over 20ms! Which seems crazy
> > > > > > > > > > > > if you do some basic math:
> > > > > > > > > > > >
> > > > > > > > > > > >    - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
> > > > > > > > > > > >    - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
> > > > > > > > > > > >    - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
> > > > > > > > > > > >    - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
> > > > > > > > > > > >    - Total one-way delay: 4.3 - 11.1ms
> > > > > > > > > > > >    - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
> > > > > > > > > > > >
> > > > > > > > > > > > This includes no transmission delay, queuing delay,
> > > > > > > > > > > > processing/fragmentation/reassembly/etc, and no time-division multiplexing.
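The arithmetic in that list can be reproduced directly (all figures are the ones given above):

```python
# Best-case one-way delay components in ms (min, max), summed to an RTT range.
components = {
    "sat_to_terminal": (1.9, 3.3),   # 550-950 km through air/vacuum
    "sat_to_gateway": (1.9, 3.3),    # 550-950 km through air/vacuum
    "gateway_to_pop": (0.25, 4.0),   # 50-800 km of fibre
    "pop_to_internet": (0.25, 0.5),  # ~50 km of fibre
}
one_way_min = sum(lo for lo, _ in components.values())  # 4.3 ms
one_way_max = sum(hi for _, hi in components.values())  # 11.1 ms
rtt_min, rtt_max = 2 * one_way_min, 2 * one_way_max     # 8.6 ms, 22.2 ms
```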
> > > > > > > > > > > >
> > > > > > > > > > > > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > I think it depends on if you are looking at datacenter-to-datacenter
> > > > > > > > > > > > > latency or
> > > > > > > > > > > > > home to remote datacenter latency :-)
> > > > > > > > > > > > >
> > > > > > > > > > > > > my rule of thumb for cross US ping time has been 80-100ms latency (but
> > > > > > > > > > > > > it's been
> > > > > > > > > > > > > a few years since I tested it).
> > > > > > > > > > > > >
> > > > > > > > > > > > > I note that an article I saw today said that Elon is saying that latency
> > > > > > > > > > > > > will
> > > > > > > > > > > > > improve significantly in the near future, that up/down latency is ~20ms
> > > > > > > > > > > > > and the
> > > > > > > > > > > > > additional delays pushing it to the 80ms range are 'stupid packet routing'
> > > > > > > > > > > > > problems that they are working on.
> > > > > > > > > > > > >
> > > > > > > > > > > > > If they are still in that level of optimization, it doesn't surprise me
> > > > > > > > > > > > > that
> > > > > > > > > > > > > they haven't really focused on the bufferbloat issue, they have more
> > > > > > > > > > > > > obvious
> > > > > > > > > > > > > stuff to fix first.
> > > > > > > > > > > > >
> > > > > > > > > > > > > David Lang
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > >    On Fri, 16 Jul 2021, Wheelock, Ian wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Date: Fri, 16 Jul 2021 10:21:52 +0000
> > > > > > > > > > > > > > From: "Wheelock, Ian" <ian.wheelock@commscope.com>
> > > > > > > > > > > > > > To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
> > > > > > > > > > > > > > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > > > > > > > > > > > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Hi David
> > > > > > > > > > > > > > In terms of the Latency that David (Reed) mentioned for California to
> > > > > > > > > > > > > Massachusetts of about 17ms over the public internet, it seems a bit faster
> > > > > > > > > > > > > than what I would expect. My own traceroute via my VDSL link shows 14ms
> > > > > > > > > > > > > just to get out of the operator network.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > https://www.wondernetwork.com is a handy tool for checking geographic
> > > > > > > > > > > > > ping perf between cities, and it shows a min of about 66ms for pings
> > > > > > > > > > > > > between Boston and San Diego
> > > > > > > > > > > > > https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
> > > > > > > > > > > > > 1-way transfer).
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of light
> > > > > > > > > > > > > (through a pure fibre link of that distance) the propagation time is just
> > > > > > > > > > > > > over 20ms. If the network equipment between Boston and San Diego is
> > > > > > > > > > > > > factored in, with some buffering along the way, 33ms does seem quite
> > > > > > > > > > > > > reasonable over the 20ms for speed of light in fibre for that 1-way transfer.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > -Ian Wheelock
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
> > > > > > > > > > > > > David Lang <david@lang.hm>
> > > > > > > > > > > > > > Date: Friday 9 July 2021 at 23:59
> > > > > > > > > > > > > > To: "David P. Reed" <dpreed@deepplum.com>
> > > > > > > > > > > > > > Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
> > > > > > > > > > > > > > Subject: Re: [Starlink] Starlink and bufferbloat status?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > IIRC, the definition of 'low latency' for the FCC was something like
> > > > > > > > > > > > > 100ms, and
> > > > > > > > > > > > > > Musk was predicting <40ms.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > roughly competitive with landlines, and worlds better than geostationary
> > > > > > > > > > > > > > satellite (and many wireless ISPs)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > but when doing any serious testing of latency, you need to be wired to
> > > > > > > > > > > > > the
> > > > > > > > > > > > > > router, wifi introduces so much variability that it swamps the signal.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > David Lang
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Fri, 9 Jul 2021, David P. Reed wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
> > > > > > > > > > > > > > > From: David P. Reed <dpreed@deepplum.com>
> > > > > > > > > > > > > > > To: starlink@lists.bufferbloat.net
> > > > > > > > > > > > > > > Subject: [Starlink] Starlink and bufferbloat status?
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Early measurements of performance of Starlink have shown significant
> > > > > > > > > > > > > bufferbloat, as Dave Taht has shown.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > But...  Starlink is a moving target. The bufferbloat isn't a hardware
> > > > > > > > > > > > > issue; it should be completely manageable, starting with simple firmware
> > > > > > > > > > > > > changes inside the Starlink system itself. For example, implementing
> > > > > > > > > > > > > fq_codel so that bottleneck links just drop packets according to the Best
> > > > > > > > > > > > > Practices RFC.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > So I'm hoping this has improved since Dave's measurements. How much has
> > > > > > > > > > > > > it improved? What's the current maximum packet latency under full
> > > > > > > > > > > > > load? I've heard anecdotally that a friend of a friend gets 84 msec. *ping
> > > > > > > > > > > > > times under full load*, but he wasn't using flent or some other measurement
> > > > > > > > > > > > > tool of good quality that gives a true number.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > 84 msec is not great - it's marginal for Zoom quality experience (you
> > > > > > > > > > > > > want latencies significantly less than 100 msec. as a rule of thumb for
> > > > > > > > > > > > > teleconferencing quality). But it is better than Dave's measurements showed.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Now Musk bragged that his network was "low latency" unlike other high
> > > > > > > > > > > > > speed services, which means low end-to-end latency.  That got him
> > > > > > > > > > > > > permission from the FCC to operate Starlink at all. His number was, I
> > > > > > > > > > > > > think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because he
> > > > > > > > > > > > > probably meant just the time from the ground station to the terminal
> > > > > > > > > > > > > through the satellite. But I regularly get 17 msec. between California and
> > > > > > > > > > > > > Massachusetts over the public Internet)
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > So 84 might be the current status. That would mean that someone at
> > > > > > > > > > > > > Starlink might be paying some attention, but it is a long way from what
> > > > > > > > > > > > > Musk implied.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > PS: I forget the number of the RFC, but the number of packets queued on
> > > > > > > > > > > > > an egress link should be chosen by taking the hardware bottleneck
> > > > > > > > > > > > > throughput of any path, combined with an end-to-end Internet underlying
> > > > > > > > > > > > > delay of about 10 msec. to account for hops between source and destination.
> > > > > > > > > > > > > Let's say Starlink allocates 50 Mb/sec to each customer and packets are limited
> > > > > > > > > > > > > to 12,000 bits (1500 * 8); then the outbound queues should be limited to
> > > > > > > > > > > > > about 0.01 * 50,000,000 / 12,000, which comes out to about 40 packets of
> > > > > > > > > > > > > buffering, total, from each terminal in the path from terminal to public
> > > > > > > > > > > > > Internet, assuming the connection to the public Internet is not a problem.
> > > > > > > > > > > > > > _______________________________________________
> > > > > > > > > > > > > > Starlink mailing list
> > > > > > > > > > > > > > Starlink@lists.bufferbloat.net
> > > > > > > > > > > > > >
> > > > > > > > > > > > > https://lists.bufferbloat.net/listinfo/starlink
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > _______________________________________________
> > > > > > > > > > > > > Starlink mailing list
> > > > > > > > > > > > > Starlink@lists.bufferbloat.net
> > > > > > > > > > > > > https://lists.bufferbloat.net/listinfo/starlink
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > _______________________________________________
> > > > > > > > > Starlink mailing list
> > > > > > > > > Starlink@lists.bufferbloat.net
> > > > > > > > > https://lists.bufferbloat.net/listinfo/starlink

[-- Attachment #2: Type: text/html, Size: 22216 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:35                 ` Nathan Owens
  2021-07-16 17:39                   ` Jonathan Bennett
@ 2021-07-17 18:36                   ` David P. Reed
  2021-07-17 18:42                     ` David Lang
  2021-07-18 19:05                     ` David Lang
  1 sibling, 2 replies; 37+ messages in thread
From: David P. Reed @ 2021-07-17 18:36 UTC (permalink / raw)
  To: Nathan Owens; +Cc: Mike Puchol, David Lang, starlink

Just to clarify, Starlink does NOT do inter-satellite data transfer. Each satellite is a "bent-pipe" which just reflects data transmitted to it back down.

NY-Tokyo isn't line of sight, even at LEO. This is why GEO satellites are used.

Iridium had the ability to do inter-satellite routing. However, multiple hops are still needed, tracking all the other satellites in view is not easy, and stable routing management would be non-trivial indeed (not counting the much lower power per bit available between battery-powered satellites that aren't in the sun half the time; that was Iridium's basic technical issue: battery storage). 60 satellites were barely tractable for routing.

There seems to be a lot of imagination by the credulous technical community going into dreaming about what Starlink actually can achieve. Those who have actually worked with satellite systems know there is no magical genius that Musk has. He just launches lots of simple orbiting "mirrors" (active repeaters) close to the Earth's surface. Pragmatic engineering, exploiting better digital signal processing, lower power digital systems, better antenna systems that use MIMO (or phased array, or whatever you want to call it).


The reason it is all so hush-hush, to this old engineer, anyway, is the usual attempt to exploit the Wizard of Oz effect. (Trade secrets don't work well, so Musk would have filed patents worldwide if he had any novel special engineering magic). Musk is great at exploiting publicity to get people to think there's an all-powerful wizard behind the scenes. There isn't. This is like the belief that there is lots of highly classified technology that scientists don't understand unless cleared. There really isn't that much - the classification is about hiding all the weaknesses in military systems. A good thing, but not proof of mysterious magical science beyond commercial practice. Star Wars was this kind of exploitation of Magical Thinking in the public. A marketing blitz to con people who projected their imagination on a fairly mundane engineering project with lots of weaknesses.

Now I'm happy to see Musk lose money hand over fist to show us how bent-pipe LEO systems can work somewhat reliably. What else should a billionaire do to make his money useful? We learn stuff that way. And issues like bufferbloat get rediscovered, and fixed, if his team pays attention. This is good in the long run.

But customers who bought Teslas expecting Autopilot to become self-driving? Or people who bought Tesla stock thinking no other companies could compete? They are waiting for something that is not real.

On Friday, July 16, 2021 1:35pm, "Nathan Owens" <nathan@nathan.io> said:

> The other case where they could provide benefit is very long distance paths
> --- NY to Tokyo, Johannesburg to London, etc... but presumably at high
> cost, as the capacity will likely be much lower than submarine cables.
> 
> On Fri, Jul 16, 2021 at 10:31 AM Mike Puchol <mike@starlink.sx> wrote:
> 
>> Satellite optical links are useful to extend coverage to areas where you
>> don’t have gateways - thus, they will introduce additional latency
>> compared
>> to two space segment hops (terminal to satellite -> satellite to gateway).
>> If you have terminal to satellite, two optical hops, then final satellite
>> to gateway, you will have more latency, not less.
>>
>> We are being “sold” optical links for what they are not IMHO.
>>
>> Best,
>>
>> Mike
>> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>>
>> > As there are more satellites, the up down time will get closer to 4-5ms
>> rather than the ~7ms you list
>>
>> Possibly, if you do steering to always jump to the lowest latency
>> satellite.
>>
>> > with laser relays in orbit, and terminal to terminal routing in orbit,
>> there is the potential for the theoretical minimum to tend lower
>> Maybe for certain users really in the middle of nowhere, but I did the
>> best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>>
>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>>
>>> hey, it's a good attitude to have :-)
>>>
>>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come
>>> very
>>> close to the goal, if not exceed it.
>>>
>>> As there are more satellites, the up down time will get closer to 4-5ms
>>> rather
>>> than the ~7ms you list, and with laser relays in orbit, and terminal to
>>> terminal
>>> routing in orbit, there is the potential for the theoretical minimum to
>>> tend
>>> lower, giving some headroom for other overhead but still being in the 20ms
>>> range.
>>>
>>> David Lang
>>>
>>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>>
>>> > Elon said "foolish packet routing" for things over 20ms! Which seems
>>> crazy
>>> > if you do some basic math:
>>> >
>>> >   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>> >   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>> >   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>>> >   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>>> >   - Total one-way delay: 4.3 - 11.1ms
>>> >   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>>> >
>>> > This includes no transmission delay, queuing delay,
>>> > processing/fragmentation/reassembly/etc, and no time-division
>>> multiplexing.
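Nathan's budget above can be re-derived from first principles. A quick sketch (assuming straight-line paths at the listed distances, light at c in vacuum and roughly 2c/3 in fiber; slant paths would add a little, which is presumably why the list's per-leg figures run slightly higher):

```python
# Propagation-only latency budget for a Starlink "bent pipe" path,
# using the leg distances from the list above.
C_VACUUM = 299_792.458          # km/s, speed of light in vacuum
C_FIBER = C_VACUUM * 2 / 3      # km/s, ~2/3 c in optical fiber

def leg_ms(km: float, speed_km_s: float) -> float:
    """One-way propagation delay in milliseconds."""
    return km / speed_km_s * 1000

best = (leg_ms(550, C_VACUUM)    # sat -> user terminal
        + leg_ms(550, C_VACUUM)  # sat -> gateway
        + leg_ms(50, C_FIBER)    # gateway -> PoP
        + leg_ms(50, C_FIBER))   # PoP -> internet
worst = (leg_ms(950, C_VACUUM)
         + leg_ms(950, C_VACUUM)
         + leg_ms(800, C_FIBER)
         + leg_ms(50, C_FIBER))

print(f"one-way: {best:.1f}-{worst:.1f} ms, min RTT: {2*best:.1f}-{2*worst:.1f} ms")
```

As with the list, this counts no transmission, queuing, or scheduling delay, so real RTTs can only be higher.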
>>> >
>>> > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>>> >
>>> >> I think it depends on if you are looking at datacenter-to-datacenter
>>> >> latency or
>>> >> home to remote datacenter latency :-)
>>> >>
>>> >> my rule of thumb for cross US ping time has been 80-100ms latency (but
>>> >> it's been
>>> >> a few years since I tested it).
>>> >>
>>> >> I note that an article I saw today said that Elon is saying that
>>> latency
>>> >> will
>>> >> improve significantly in the near future, that up/down latency is ~20ms
>>> >> and the
>>> >> additional delays pushing it to the 80ms range are 'stupid packet
>>> routing'
>>> >> problems that they are working on.
>>> >>
>>> >> If they are still in that level of optimization, it doesn't surprise me
>>> >> that
>>> >> they haven't really focused on the bufferbloat issue, they have more
>>> >> obvious
>>> >> stuff to fix first.
>>> >>
>>> >> David Lang
>>> >>
>>> >>
>>> >>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>> >>
>>> >>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>>> >>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>>> >>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>>> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>> >>>
>>> >>> Hi David
>>> >>> In terms of the Latency that David (Reed) mentioned for California to
>>> >> Massachusetts of about 17ms over the public internet, it seems a bit
>>> faster
>>> >> than what I would expect. My own traceroute via my VDSL link shows 14ms
>>> >> just to get out of the operator network.
>>> >>>
>>> >>> https://www.wondernetwork.com  is a handy tool for checking
>>> geographic
>>> >> ping perf between cities, and it shows a min of about 66ms for pings
>>> >> between Boston and San Diego
>>> >> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
>>> >> 1-way transfer).
>>> >>>
>>> >>> Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of
>>> light
>>> >> (through a pure fibre link of that distance) the propagation time is
>>> just
>>> >> over 20ms. If the network equipment between Boston and San Diego is
>>> >> factored in, with some buffering along the way, 33ms does seem quite
>>> >> reasonable over the 20ms for speed of light in fibre for that 1-way
>>> transfer
>>> >>>
>>> >>> -Ian Wheelock
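Ian's 20 ms figure is easy to verify (a sketch; 4,100 km through fiber at roughly two-thirds the vacuum speed of light):

```python
# One-way propagation delay for Boston -> San Diego over pure fiber.
C_VACUUM = 299_792.458        # km/s, speed of light in vacuum
C_FIBER = C_VACUUM * 2 / 3    # km/s, ~2/3 c in fiber
distance_km = 4100

delay_ms = distance_km / C_FIBER * 1000
print(f"{delay_ms:.1f} ms")   # just over 20 ms one way
```

The observed ~33 ms one-way then leaves roughly 13 ms for routing detours, equipment, and buffering along the path.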
>>> >>>
>>> >>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>>> >> David Lang <david@lang.hm>
>>> >>> Date: Friday 9 July 2021 at 23:59
>>> >>> To: "David P. Reed" <dpreed@deepplum.com>
>>> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>> >>>
>>> >>>
>>> >>> IIRC, the definition of 'low latency' for the FCC was something like
>>> >> 100ms, and
>>> >>> Musk was predicting <40ms.
>>> >>>
>>> >>> roughly competitive with landlines, and worlds better than
>>> geostationary
>>> >>> satellite (and many wireless ISPs)
>>> >>>
>>> >>> but when doing any serious testing of latency, you need to be wired to
>>> >> the
>>> >>> router, wifi introduces so much variability that it swamps the signal.
>>> >>>
>>> >>> David Lang
>>> >>>
>>> >>> On Fri, 9 Jul 2021, David P. Reed wrote:
>>> >>>
>>> >>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>>> >>>> From: David P. Reed <dpreed@deepplum.com>
>>> >>>> To: starlink@lists.bufferbloat.net
>>> >>>> Subject: [Starlink] Starlink and bufferbloat status?
>>> >>>>
>>> >>>>
>>> >>>> Early measurements of performance of Starlink have shown significant
>>> >> bufferbloat, as Dave Taht has shown.
>>> >>>>
>>> >>>> But...  Starlink is a moving target. The bufferbloat isn't a hardware
>>> >> issue, it should be completely manageable, starting by simple firmware
>>> >> changes inside the Starlink system itself. For example, implementing
>>> >> fq_codel so that bottleneck links just drop packets according to the
>>> Best
>>> >> Practices RFC.
>>> >>>>
>>> >>>> So I'm hoping this has improved since Dave's measurements. How much
>>> has
>>> >> it improved? What's the current maximum packet latency under full
>>> >> load? I've heard anecdotally that a friend of a friend gets 84 msec.
>>> *ping
>>> >> times under full load*, but he wasn't using flent or some other
>>> measurement
>>> >> tool of good quality that gives a true number.
>>> >>>>
>>> >>>> 84 msec is not great - it's marginal for Zoom quality experience (you
>>> >> want latencies significantly less than 100 msec. as a rule of thumb for
>>> >> teleconferencing quality). But it is better than Dave's measurements
>>> showed.
>>> >>>>
>>> >>>> Now Musk bragged that his network was "low latency" unlike other high
>>> >> speed services, which means low end-to-end latency.  That got him
>>> >> permission from the FCC to operate Starlink at all. His number was, I
>>> >> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because
>>> he
>>> >> probably meant just the time from the ground station to the terminal
>>> >> through the satellite. But I regularly get 17 msec. between California
>>> and
>>> >> Massachusetts over the public Internet)
>>> >>>>
>>> >>>> So 84 might be the current status. That would mean that someone at
>>> >> Starlink might be paying some attention, but it is a long way from what
>>> >> Musk implied.
>>> >>>>
>>> >>>>
>>> >>>> PS: I forget the number of the RFC, but the number of packets queued
>>> on
>>> >> an egress link should be chosen by taking the hardware bottleneck
>>> >> throughput of any path, combined with an end-to-end Internet underlying
>>> >> delay of about 10 msec. to account for hops between source and
>>> destination.
>>> >> Let's say Starlink allocates 50 Mb/sec to each customer, packets are
>>> limited
>>> >> to 10,000 bits (1500 * 8), so the outbound queues should be limited to
>>> >> about 0.01 * 50,000,000 / 10,000, which comes out to about 250 packets
>>> from
>>> >> each terminal of buffering, total, in the path from terminal to public
>>> >> Internet, assuming the connection to the public Internet is not a
>>> problem.
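The PS's sizing rule is just a bandwidth-delay product divided by packet size. Worked through directly (note a 1500-byte packet is 12,000 bits, not 10,000, and with the stated 50 Mb/s and 10 ms the quotient comes to roughly 42 packets rather than 250):

```python
# Egress queue sizing: bandwidth-delay product over packet size.
RATE_BPS = 50_000_000      # assumed 50 Mb/s per-customer allocation
DELAY_S = 0.010            # ~10 ms end-to-end delay budget
PACKET_BITS = 1500 * 8     # 12,000 bits per full-size packet

bdp_bits = RATE_BPS * DELAY_S           # 500,000 bits in flight
queue_packets = bdp_bits / PACKET_BITS  # ~41.7 packets
print(round(queue_packets))             # -> 42
```

The RFC the PS alludes to is likely RFC 7567 (IETF recommendations on active queue management); either way, the modern fix is an AQM like fq_codel rather than a hand-tuned fixed queue depth.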
>>> >>> _______________________________________________
>>> >>> Starlink mailing list
>>> >>> Starlink@lists.bufferbloat.net
>>> >>>
>>> >>
>>> https://lists.bufferbloat.net/listinfo/starlink
>>> >>>
>>> >>> _______________________________________________
>>> >> Starlink mailing list
>>> >> Starlink@lists.bufferbloat.net
>>> >> https://lists.bufferbloat.net/listinfo/starlink
>>> >>
>>> >
>>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>>
> 



^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-17 18:36                   ` David P. Reed
@ 2021-07-17 18:42                     ` David Lang
  2021-07-18 19:05                     ` David Lang
  1 sibling, 0 replies; 37+ messages in thread
From: David Lang @ 2021-07-17 18:42 UTC (permalink / raw)
  To: David P. Reed; +Cc: Nathan Owens, Mike Puchol, David Lang, starlink

[-- Attachment #1: Type: text/plain, Size: 13593 bytes --]

On Sat, 17 Jul 2021, David P. Reed wrote:

> Just to clarify, Starlink does NOT do inter-satellite data transfer. Each satellite is a "bent-pipe" which just reflects data transmitted to it back down.

this may be the case for the first satellites, but the currently launched 
satellites have lasers for inter-satellite data transfer (started with the polar 
launches IIRC, and Elon has said that all satellites launched past mid-2021 will 
have lasers).

I believe that even the non-laser versions have radio links not aimed at the 
ground.

David Lang

> NY-Tokyo isn't line of sight, even at LEO. This is why GEO satellites are used.
>
> Iridium had the ability to do inter-satellite routing. Still, multiple hops are needed, tracking all the other satellites in view is not easy, and stable routing management would be non-trivial indeed (not counting the much lower power/bit available between battery-powered satellites that aren't in the sun half the time; that was Iridium's basic technical issue, battery storage). 60 satellites were barely tractable for routing.
>
> There seems to be a lot of imagination by the credulous technical community going into dreaming about what Starlink actually can achieve. Those who have actually worked with satellite systems know there is no magical genius that Musk has. He just launches lots of simple orbiting "mirrors" (active repeaters) close to the Earth's surface. Pragmatic engineering, exploiting better digital signal processing, lower power digital systems, better antenna systems that use MIMO (or phased array, or whatever you want to call it).
>
>
> The reason it is all so hush-hush, to this old engineer, anyway, is the usual attempt to exploit the Wizard of Oz effect. (Trade secrets don't work well, so Musk would have filed patents worldwide if he had any novel special engineering magic). Musk is great at exploiting publicity to get people to think there's an all-powerful wizard behind the scenes. There isn't. This is like the belief that there is lots of highly classified technology that scientists don't understand unless cleared. There really isn't that much - the classification is about hiding all the weaknesses in military systems. A good thing, but not proof of mysterious magical science beyond commercial practice. Star Wars was this kind of exploitation of Magical Thinking in the public. A marketing blitz to con people who projected their imagination on a fairly mundane engineering project with lots of weaknesses.
>
> Now I'm happy to see Musk lose money hand over fist to show us how bent-pipe LEO systems can work somewhat reliably. What else should a billionaire do to make his money useful? We learn stuff that way. And issues like bufferbloat get rediscovered, and fixed, if his team pays attention. This is good in the long run.
>
> But customers who bought Teslas expecting Autopilot to become self-driving? Or people who bought Tesla stock thinking no other companies could compete? They are waiting for something that is not real.
>
> On Friday, July 16, 2021 1:35pm, "Nathan Owens" <nathan@nathan.io> said:
>
>> The other case where they could provide benefit is very long distance paths
>> --- NY to Tokyo, Johannesburg to London, etc... but presumably at high
>> cost, as the capacity will likely be much lower than submarine cables.
>> 
>> On Fri, Jul 16, 2021 at 10:31 AM Mike Puchol <mike@starlink.sx> wrote:
>> 
>>> Satellite optical links are useful to extend coverage to areas where you
>>> don’t have gateways - thus, they will introduce additional latency
>>> compared
>>> to two space segment hops (terminal to satellite -> satellite to gateway).
>>> If you have terminal to satellite, two optical hops, then final satellite
>>> to gateway, you will have more latency, not less.
>>>
>>> We are being “sold” optical links for what they are not IMHO.
>>>
>>> Best,
>>>
>>> Mike
>>> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>>>
>>> > As there are more satellites, the up down time will get closer to 4-5ms
>>> rather than the ~7ms you list
>>>
>>> Possibly, if you do steering to always jump to the lowest latency
>>> satellite.
>>>
>>> > with laser relays in orbit, and terminal to terminal routing in orbit,
>>> there is the potential for the theoretical minimum to tend lower
>>> Maybe for certain users really in the middle of nowhere, but I did the
>>> best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>>>
>>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>>>
>>>> hey, it's a good attitude to have :-)
>>>>
>>>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come
>>>> very
>>>> close to the goal, if not exceed it.
>>>>
>>>> As there are more satellites, the up down time will get closer to 4-5ms
>>>> rather
>>>> than the ~7ms you list, and with laser relays in orbit, and terminal to
>>>> terminal
>>>> routing in orbit, there is the potential for the theoretical minimum to
>>>> tend
>>>> lower, giving some headroom for other overhead but still being in the 20ms
>>>> range.
>>>>
>>>> David Lang
>>>>
>>>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>>>
>>>> > Elon said "foolish packet routing" for things over 20ms! Which seems
>>>> crazy
>>>> > if you do some basic math:
>>>> >
>>>> >   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>> >   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>> >   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>>>> >   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>>>> >   - Total one-way delay: 4.3 - 11.1ms
>>>> >   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>>>> >
>>>> > This includes no transmission delay, queuing delay,
>>>> > processing/fragmentation/reassembly/etc, and no time-division
>>>> multiplexing.
>>>> >
>>>> > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>>>> >
>>>> >> I think it depends on if you are looking at datacenter-to-datacenter
>>>> >> latency or
>>>> >> home to remote datacenter latency :-)
>>>> >>
>>>> >> my rule of thumb for cross US ping time has been 80-100ms latency (but
>>>> >> it's been
>>>> >> a few years since I tested it).
>>>> >>
>>>> >> I note that an article I saw today said that Elon is saying that
>>>> latency
>>>> >> will
>>>> >> improve significantly in the near future, that up/down latency is ~20ms
>>>> >> and the
>>>> >> additional delays pushing it to the 80ms range are 'stupid packet
>>>> routing'
>>>> >> problems that they are working on.
>>>> >>
>>>> >> If they are still in that level of optimization, it doesn't surprise me
>>>> >> that
>>>> >> they haven't really focused on the bufferbloat issue, they have more
>>>> >> obvious
>>>> >> stuff to fix first.
>>>> >>
>>>> >> David Lang
>>>> >>
>>>> >>
>>>> >>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>>> >>
>>>> >>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>>>> >>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>>>> >>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>>>> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>> >>>
>>>> >>> Hi David
>>>> >>> In terms of the Latency that David (Reed) mentioned for California to
>>>> >> Massachusetts of about 17ms over the public internet, it seems a bit
>>>> faster
>>>> >> than what I would expect. My own traceroute via my VDSL link shows 14ms
>>>> >> just to get out of the operator network.
>>>> >>>
>>>> >>> https://www.wondernetwork.com  is a handy tool for checking
>>>> geographic
>>>> >> ping perf between cities, and it shows a min of about 66ms for pings
>>>> >> between Boston and San Diego
>>>> >> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
>>>> >> 1-way transfer).
>>>> >>>
>>>> >>> Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed of
>>>> light
>>>> >> (through a pure fibre link of that distance) the propagation time is
>>>> just
>>>> >> over 20ms. If the network equipment between Boston and San Diego is
>>>> >> factored in, with some buffering along the way, 33ms does seem quite
>>>> >> reasonable over the 20ms for speed of light in fibre for that 1-way
>>>> transfer
>>>> >>>
>>>> >>> -Ian Wheelock
>>>> >>>
>>>> >>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>>>> >> David Lang <david@lang.hm>
>>>> >>> Date: Friday 9 July 2021 at 23:59
>>>> >>> To: "David P. Reed" <dpreed@deepplum.com>
>>>> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>> >>>
>>>> >>>
>>>> >>> IIRC, the definition of 'low latency' for the FCC was something like
>>>> >> 100ms, and
>>>> >>> Musk was predicting <40ms.
>>>> >>>
>>>> >>> roughly competitive with landlines, and worlds better than
>>>> geostationary
>>>> >>> satellite (and many wireless ISPs)
>>>> >>>
>>>> >>> but when doing any serious testing of latency, you need to be wired to
>>>> >> the
>>>> >>> router, wifi introduces so much variability that it swamps the signal.
>>>> >>>
>>>> >>> David Lang
>>>> >>>
>>>> >>> On Fri, 9 Jul 2021, David P. Reed wrote:
>>>> >>>
>>>> >>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>>>> >>>> From: David P. Reed <dpreed@deepplum.com>
>>>> >>>> To: starlink@lists.bufferbloat.net
>>>> >>>> Subject: [Starlink] Starlink and bufferbloat status?
>>>> >>>>
>>>> >>>>
>>>> >>>> Early measurements of performance of Starlink have shown significant
>>>> >> bufferbloat, as Dave Taht has shown.
>>>> >>>>
>>>> >>>> But...  Starlink is a moving target. The bufferbloat isn't a hardware
>>>> >> issue, it should be completely manageable, starting by simple firmware
>>>> >> changes inside the Starlink system itself. For example, implementing
>>>> >> fq_codel so that bottleneck links just drop packets according to the
>>>> Best
>>>> >> Practices RFC.
>>>> >>>>
>>>> >>>> So I'm hoping this has improved since Dave's measurements. How much
>>>> has
>>>> >> it improved? What's the current maximum packet latency under full
>>>> >> load? I've heard anecdotally that a friend of a friend gets 84 msec.
>>>> *ping
>>>> >> times under full load*, but he wasn't using flent or some other
>>>> measurement
>>>> >> tool of good quality that gives a true number.
>>>> >>>>
>>>> >>>> 84 msec is not great - it's marginal for Zoom quality experience (you
>>>> >> want latencies significantly less than 100 msec. as a rule of thumb for
>>>> >> teleconferencing quality). But it is better than Dave's measurements
>>>> showed.
>>>> >>>>
>>>> >>>> Now Musk bragged that his network was "low latency" unlike other high
>>>> >> speed services, which means low end-to-end latency.  That got him
>>>> >> permission from the FCC to operate Starlink at all. His number was, I
>>>> >> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because
>>>> he
>>>> >> probably meant just the time from the ground station to the terminal
>>>> >> through the satellite. But I regularly get 17 msec. between California
>>>> and
>>>> >> Massachusetts over the public Internet)
>>>> >>>>
>>>> >>>> So 84 might be the current status. That would mean that someone at
>>>> >> Starlink might be paying some attention, but it is a long way from what
>>>> >> Musk implied.
>>>> >>>>
>>>> >>>>
>>>> >>>> PS: I forget the number of the RFC, but the number of packets queued
>>>> on
>>>> >> an egress link should be chosen by taking the hardware bottleneck
>>>> >> throughput of any path, combined with an end-to-end Internet underlying
>>>> >> delay of about 10 msec. to account for hops between source and
>>>> destination.
>>>> >> Let's say Starlink allocates 50 Mb/sec to each customer, packets are
>>>> limited
>>>> >> to 10,000 bits (1500 * 8), so the outbound queues should be limited to
>>>> >> about 0.01 * 50,000,000 / 10,000, which comes out to about 250 packets
>>>> from
>>>> >> each terminal of buffering, total, in the path from terminal to public
>>>> >> Internet, assuming the connection to the public Internet is not a
>>>> problem.
>>>> >>> _______________________________________________
>>>> >>> Starlink mailing list
>>>> >>> Starlink@lists.bufferbloat.net
>>>> >>>
>>>> >>
>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>> >>>
>>>> >>> _______________________________________________
>>>> >> Starlink mailing list
>>>> >> Starlink@lists.bufferbloat.net
>>>> >> https://lists.bufferbloat.net/listinfo/starlink
>>>> >>
>>>> >
>>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>>
>>
>
>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-17 18:36                   ` David P. Reed
  2021-07-17 18:42                     ` David Lang
@ 2021-07-18 19:05                     ` David Lang
  1 sibling, 0 replies; 37+ messages in thread
From: David Lang @ 2021-07-18 19:05 UTC (permalink / raw)
  To: David P. Reed; +Cc: Nathan Owens, Mike Puchol, David Lang, starlink

[-- Attachment #1: Type: text/plain, Size: 13659 bytes --]

a short (6 min) video that shows Elon's tweets on the topic this week
https://www.youtube.com/watch?v=FJBslid3aeI
much better info than the third-hand report I linked to earlier this week.

from this, starlink is not just bent pipes, and sat-sat relays are in the works.

he says that at 550km, average up/down time is ~10ms, so he says stupid things 
are happening to the packets to drive latency >20ms

David Lang

On Sat, 17 Jul 2021, David P. Reed wrote:

> Just to clarify, Starlink does NOT do inter-satellite data transfer. Each satellite is a "bent-pipe" which just reflects data transmitted to it back down.
>
> NY-Tokyo isn't line of sight, even at LEO. This is why GEO satellites are used.
>
> Iridium had the ability to do inter-satellite routing. Still, multiple hops are needed, tracking all the other satellites in view is not easy, and stable routing management would be non-trivial indeed (not counting the much lower power/bit available between battery-powered satellites that aren't in the sun half the time; that was Iridium's basic technical issue, battery storage). 60 satellites were barely tractable for routing.
>
> There seems to be a lot of imagination by the credulous technical community going into dreaming about what Starlink actually can achieve. Those who have actually worked with satellite systems know there is no magical genius that Musk has. He just launches lots of simple orbiting "mirrors" (active repeaters) close to the Earth's surface. Pragmatic engineering, exploiting better digital signal processing, lower power digital systems, better antenna systems that use MIMO (or phased array, or whatever you want to call it).
>
>
> The reason it is all so hush-hush, to this old engineer, anyway, is the usual attempt to exploit the Wizard of Oz effect. (Trade secrets don't work well, so Musk would have filed patents worldwide if he had any novel special engineering magic). Musk is great at exploiting publicity to get people to think there's an all-powerful wizard behind the scenes. There isn't. This is like the belief that there is lots of highly classified technology that scientists don't understand unless cleared. There really isn't that much - the classification is about hiding all the weaknesses in military systems. A good thing, but not proof of mysterious magical science beyond commercial practice. Star Wars was this kind of exploitation of Magical Thinking in the public. A marketing blitz to con people who projected their imagination on a fairly mundane engineering project with lots of weaknesses.
>
> Now I'm happy to see Musk lose money hand over fist to show us how bent-pipe LEO systems can work somewhat reliably. What else should a billionaire do to make his money useful? We learn stuff that way. And issues like bufferbloat get rediscovered, and fixed, if his team pays attention. This is good in the long run.
>
> But customers who bought Teslas expecting Autopilot to become self-driving? Or people who bought Tesla stock thinking no other companies could compete? They are waiting for something that is not real.
>
> On Friday, July 16, 2021 1:35pm, "Nathan Owens" <nathan@nathan.io> said:
>
>> The other case where they could provide benefit is very long distance paths
>> --- NY to Tokyo, Johannesburg to London, etc... but presumably at high
>> cost, as the capacity will likely be much lower than submarine cables.
>> 
>> On Fri, Jul 16, 2021 at 10:31 AM Mike Puchol <mike@starlink.sx> wrote:
>> 
>>> Satellite optical links are useful to extend coverage to areas where you
>>> don’t have gateways - thus, they will introduce additional latency
>>> compared
>>> to two space segment hops (terminal to satellite -> satellite to gateway).
>>> If you have terminal to satellite, two optical hops, then final satellite
>>> to gateway, you will have more latency, not less.
>>>
>>> We are being “sold” optical links for what they are not IMHO.
>>>
>>> Best,
>>>
>>> Mike
>>> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>>>
>>> > As there are more satellites, the up down time will get closer to 4-5ms
>>> rather than the ~7ms you list
>>>
>>> Possibly, if you do steering to always jump to the lowest latency
>>> satellite.
>>>
>>> > with laser relays in orbit, and terminal to terminal routing in orbit,
>>> there is the potential for the theoretical minimum to tend lower
>>> Maybe for certain users really in the middle of nowhere, but I did the
>>> best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>>>
>>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>>>
>>>> hey, it's a good attitude to have :-)
>>>>
>>>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come
>>>> very
>>>> close to the goal, if not exceed it.
>>>>
>>>> As there are more satellites, the up down time will get closer to 4-5ms
>>>> rather
>>>> than the ~7ms you list, and with laser relays in orbit, and terminal to
>>>> terminal
>>>> routing in orbit, there is the potential for the theoretical minimum to
>>>> tend
>>>> lower, giving some headroom for other overhead but still being in the 20ms
>>>> range.
>>>>
>>>> David Lang
>>>>
>>>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>>>
>>>> > Elon said "foolish packet routing" for things over 20ms! Which seems
>>>> crazy
>>>> > if you do some basic math:
>>>> >
>>>> >   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>> >   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>> >   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>>>> >   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>>>> >   - Total one-way delay: 4.3 - 11.1ms
>>>> >   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>>>> >
>>>> > This includes no transmission delay, queuing delay,
>>>> > processing/fragmentation/reassembly/etc, and no time-division
>>>> multiplexing.
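The budget above can be checked with a quick back-of-envelope script (the distances and media are the thread's assumptions, not published Starlink figures):

```python
# Propagation-only latency budget for a bent-pipe path, using the
# thread's assumed segment distances.
C_VACUUM_KM_S = 299_792.458            # speed of light in vacuum/air, km/s
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3   # ~2/3 c in fiber

def delay_ms(km, speed_km_s):
    """One-way propagation delay in milliseconds for a path of `km` length."""
    return km / speed_km_s * 1000.0

segments = [
    # (name, min km, max km, propagation speed)
    ("sat to user terminal", 550, 950, C_VACUUM_KM_S),
    ("sat to gateway",       550, 950, C_VACUUM_KM_S),
    ("gateway to PoP",        50, 800, C_FIBER_KM_S),
    ("PoP to internet",       50,  50, C_FIBER_KM_S),
]

one_way_min = sum(delay_ms(lo, c) for _, lo, _, c in segments)
one_way_max = sum(delay_ms(hi, c) for _, _, hi, c in segments)
print(f"one-way: {one_way_min:.1f}-{one_way_max:.1f} ms, "
      f"min RTT: {2 * one_way_min:.1f}-{2 * one_way_max:.1f} ms")
```

This lands close to the 4.3-11.1 ms one-way figures quoted above; the small differences come from rounding each leg up.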
>>>> >
>>>> > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>>>> >
>>>> >> I think it depends on whether you are looking at datacenter-to-datacenter
>>>> >> latency or
>>>> >> home to remote datacenter latency :-)
>>>> >>
>>>> >> my rule of thumb for cross US ping time has been 80-100ms latency (but
>>>> >> it's been
>>>> >> a few years since I tested it).
>>>> >>
>>>> >> I note that an article I saw today said that Elon is saying that
>>>> latency
>>>> >> will
>>>> >> improve significantly in the near future, that up/down latency is ~20ms
>>>> >> and the
>>>> >> additional delays pushing it to the 80ms range are 'stupid packet
>>>> routing'
>>>> >> problems that they are working on.
>>>> >>
>>>> >> If they are still in that level of optimization, it doesn't surprise me
>>>> >> that
>>>> >> they haven't really focused on the bufferbloat issue, they have more
>>>> >> obvious
>>>> >> stuff to fix first.
>>>> >>
>>>> >> David Lang
>>>> >>
>>>> >>
>>>> >>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>>> >>
>>>> >>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>>>> >>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>>>> >>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>>>> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>> >>>
>>>> >>> Hi David
>>>> >>> In terms of the Latency that David (Reed) mentioned for California to
>>>> >> Massachusetts of about 17ms over the public internet, it seems a bit
>>>> faster
>>>> >> than what I would expect. My own traceroute via my VDSL link shows 14ms
>>>> >> just to get out of the operator network.
>>>> >>>
>>>> >>> https://www.wondernetwork.com  is a handy tool for checking
>>>> geographic
>>>> >> ping perf between cities, and it shows a min of about 66ms for pings
>>>> >> between Boston and San Diego
>>>> >> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
>>>> >> 1-way transfer).
>>>> >>>
>>>> >>> Distance-wise this is about 4,100 km (2,500 miles), and at 2/3 the speed of
>>>> light
>>>> >> (through a pure fibre link of that distance) the propagation time is
>>>> just
>>>> >> over 20ms. If the network equipment between the Boston and San Diego is
>>>> >> factored in, with some buffering along the way, 33ms does seem quite
>>>> >> reasonable over the 20ms for speed of light in fibre for that 1-way
>>>> transfer
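Ian's propagation figure is easy to verify (assuming ~2/3 c in fiber, as he states):

```python
# One-way propagation over ~4,100 km of fiber at roughly 2/3 the speed of light.
C_KM_S = 299_792.458  # speed of light in vacuum, km/s
fiber_one_way_ms = 4_100 / (C_KM_S * 2 / 3) * 1000
print(f"{fiber_one_way_ms:.1f} ms")  # just over 20 ms, as stated
```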
>>>> >>>
>>>> >>> -Ian Wheelock
>>>> >>>
>>>> >>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf of
>>>> >> David Lang <david@lang.hm>
>>>> >>> Date: Friday 9 July 2021 at 23:59
>>>> >>> To: "David P. Reed" <dpreed@deepplum.com>
>>>> >>> Cc: "starlink@lists.bufferbloat.net" <starlink@lists.bufferbloat.net>
>>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>> >>>
>>>> >>>
>>>> >>> IIRC, the definition of 'low latency' for the FCC was something like
>>>> >> 100ms, and
>>>> >>> Musk was predicting <40ms.
>>>> >>>
>>>> >>> roughly competitive with landlines, and worlds better than
>>>> geostationary
>>>> >>> satellite (and many wireless ISPs)
>>>> >>>
>>>> >>> but when doing any serious testing of latency, you need to be wired to
>>>> >> the
>>>> >>> router, wifi introduces so much variability that it swamps the signal.
>>>> >>>
>>>> >>> David Lang
>>>> >>>
>>>> >>> On Fri, 9 Jul 2021, David P. Reed wrote:
>>>> >>>
>>>> >>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>>>> >>>> From: David P. Reed <dpreed@deepplum.com>
>>>> >>>> To: starlink@lists.bufferbloat.net
>>>> >>>> Subject: [Starlink] Starlink and bufferbloat status?
>>>> >>>>
>>>> >>>>
>>>> >>>> Early measurements of performance of Starlink have shown significant
>>>> >> bufferbloat, as Dave Taht has shown.
>>>> >>>>
>>>> >>>> But...  Starlink is a moving target. The bufferbloat isn't a hardware
>>>> >> issue, it should be completely manageable, starting with simple firmware
>>>> >> changes inside the Starlink system itself. For example, implementing
>>>> >> fq_codel so that bottleneck links just drop packets according to the
>>>> Best
>>>> Practices RFC.
>>>> >>>>
>>>> >>>> So I'm hoping this has improved since Dave's measurements. How much
>>>> has
>>>> >> it improved? What's the current maximum packet latency under full
>>>> >> load? I've heard anecdotally that a friend of a friend gets 84 msec.
>>>> *ping
>>>> >> times under full load*, but he wasn't using flent or some other
>>>> measurement
>>>> >> tool of good quality that gives a true number.
>>>> >>>>
>>>> >>>> 84 msec is not great - it's marginal for Zoom quality experience (you
>>>> >> want latencies significantly less than 100 msec. as a rule of thumb for
>>>> >> teleconferencing quality). But it is better than Dave's measurements
>>>> showed.
>>>> >>>>
>>>> >>>> Now Musk bragged that his network was "low latency" unlike other high
>>>> >> speed services, which means low end-to-end latency.  That got him
>>>> >> permission from the FCC to operate Starlink at all. His number was, I
>>>> >> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because
>>>> he
>>>> >> probably meant just the time from the ground station to the terminal
>>>> >> through the satellite. But I regularly get 17 msec. between California
>>>> and
>>>> >> Massachusetts over the public Internet)
>>>> >>>>
>>>> >>>> So 84 might be the current status. That would mean that someone at
>>>> >> Starlink might be paying some attention, but it is a long way from what
>>>> >> Musk implied.
>>>> >>>>
>>>> >>>>
>>>> >>>> PS: I forget the number of the RFC, but the number of packets queued
>>>> on
>>>> >> an egress link should be chosen by taking the hardware bottleneck
>>>> >> throughput of any path, combined with an end-to-end Internet underlying
>>>> >> delay of about 10 msec. to account for hops between source and
>>>> destination.
>>>> >> Let's say Starlink allocates 50 Mb/sec to each customer, and packets are
>>>> >> limited to 12,000 bits (1500 * 8), so the outbound queues should be
>>>> >> limited to about 0.01 * 50,000,000 / 12,000, which comes out to about 42
>>>> >> packets of buffering from each terminal, total, in the path from terminal
>>>> >> to the public Internet, assuming the connection to the public Internet is
>>>> >> not a problem.
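The PS's sizing rule works out as follows (a 1500-byte packet is 12,000 bits; the 50 Mb/s per-customer rate and ~10 ms target delay are the email's assumptions):

```python
# BDP-style egress queue sizing: one target delay's worth of bits at the
# bottleneck rate, expressed in full-size packets.
target_delay_s = 0.010        # ~10 ms assumed end-to-end underlying delay
bottleneck_bps = 50_000_000   # 50 Mb/s assumed per-terminal allocation
packet_bits = 1500 * 8        # 12,000 bits per 1500-byte packet

queue_limit_packets = target_delay_s * bottleneck_bps / packet_bits
print(round(queue_limit_packets))  # about 42 full-size packets
```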
>>>> >>> _______________________________________________
>>>> >>> Starlink mailing list
>>>> >>> Starlink@lists.bufferbloat.net
>>>> >>>
>>>> >>
>>>> >>> https://lists.bufferbloat.net/listinfo/starlink
>>>> >>>
>>>> >>> _______________________________________________
>>>> >> Starlink mailing list
>>>> >> Starlink@lists.bufferbloat.net
>>>> >> https://lists.bufferbloat.net/listinfo/starlink
>>>> >>
>>>> >
>>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>>
>>
>
>

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 20:51             ` Michael Richardson
@ 2021-07-18 19:17               ` David Lang
  2021-07-18 22:29                 ` Dave Taht
  0 siblings, 1 reply; 37+ messages in thread
From: David Lang @ 2021-07-18 19:17 UTC (permalink / raw)
  To: Michael Richardson; +Cc: David Lang, Nathan Owens, starlink, David P. Reed

On Fri, 16 Jul 2021, Michael Richardson wrote:

> David Lang <david@lang.hm> wrote:
>    > As there are more satellites, the up/down time will get closer to 4-5ms
>    > rather than the ~7ms you list, and with laser relays in orbit, and terminal
>    > to terminal routing in orbit, there is the potential for the theoretical
>    > minimum to tend lower, giving some headroom for other overhead but still
>    > being in the 20ms range.
>
> I really want this to happen, but how will this get managed?
> We really don't know shit, and I'm not convinced SpaceX knows either.
>
> I'm scared that these paths will be centrally managed, and not based upon
> longest prefix (IPv6) match.

unless you are going to have stations changing their IPv6 address frequently, I 
don't see how you would route based on their address. The system is extremely 
dynamic, and propagating routing tables would be a huge overhead. Remember, 
stations are not in fixed locations, they move (and if on an airliner or rocket, 
they move pretty quickly).

I expect that initially it's going to be centrally managed, but over time I 
would expect that it would become more decentralized.

David Lang

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-18 19:17               ` David Lang
@ 2021-07-18 22:29                 ` Dave Taht
  2021-07-19  1:30                   ` David Lang
  0 siblings, 1 reply; 37+ messages in thread
From: Dave Taht @ 2021-07-18 22:29 UTC (permalink / raw)
  To: David Lang; +Cc: Michael Richardson, starlink, David P. Reed

On Sun, Jul 18, 2021 at 12:17 PM David Lang <david@lang.hm> wrote:
>
> On Fri, 16 Jul 2021, Michael Richardson wrote:
>
> > David Lang <david@lang.hm> wrote:
> >    > As there are more satellites, the up/down time will get closer to 4-5ms
> >    > rather than the ~7ms you list, and with laser relays in orbit, and terminal
> >    > to terminal routing in orbit, there is the potential for the theoretical
> >    > minimum to tend lower, giving some headroom for other overhead but still
> >    > being in the 20ms range.

I also used the 20ms figure in my podcast.
https://twit.tv/shows/floss-weekly/episodes/638?autostart=false

I like to think with FQ in the loop, they can do even better for
"ping" and gaming, but for a change I'd like to see Elon actually
underpromise and overdeliver.

> >
> > I really want this to happen, but how will this get managed?
> > We really don't know shit, and I'm not convinced SpaceX knows either.
> >
> > I'm scared that these paths will be centrally managed, and not based upon
> > longest prefix (IPv6) match.
>
> unless you are going to have stations changing their IPv6 address frequently,

I'd really, really hope for a dedicated /56 per station, no changes,
EVER, unless the user requests it or it falls under attack. Perhaps two
/56s for failover reasons. Really silly
to make ipv6 dynamic in this environment.

Sure wish some ipv4/12 was available. Dynamic ipv4 doesn't make much
sense anymore either.

> don't see how you would route based on their address. The system is extremely
> dynamic, and propgating routing tables would be a huge overhead. Remember,
> stations are not in fixed locations, they move (and if on an airliner or rocket,
> they move pretty quickly)

IMHO:

IF you were to go about designing a sane and fast planetary “mac”
layer that could carry ipv4 and ipv6 (and hopefully ICN) traffic on
stuff in orbit...

( some relevant recent complaints over on this thread:
https://www.linkedin.com/feed/update/urn:li:activity:6817609260348911616/
)

… and you were for !@#! sure that you would never have more than 2^32
exit nodes, a bunch of things get simpler. All you need to do on the
ground is pick the exit node, with a couple other fields and the
payload. How something gets there is the network’s problem. All the L3
stuff “up there” just vanishes. You end up with a very, very simple
and fast global routing table "up there" that is GPS-clock based, and
only needs to be updated in case of a sat failure or to route around
congestion (roughly the same thing).

You do the complicated ipv4 or ipv6 route matching on the ground
before translating it to L2 ... just to pick the *exit node*. Choosing
nexthops on the ground happens on a schedule, and “nexthop” behavior on
the sats in orbit or the solar system is also extremely predictable.
If you yourself are moving, that too is extremely predictable. The other
truly needed fields carried from L3 to the L2 layer should be kind of
obvious if you look at the flaws in the ethernet and mpls “mac”,
relative to what ipv6 tried to (over)do, what we've learned from
history, and think about what a globally sync’d clock can do for you,
as well as a hashed flowid.
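One hypothetical reading of that "mac" layer, sketched in Python (all field names and widths are my guesses, not anything SpaceX has published): the ground segment resolves L3 addresses to a 32-bit exit-node id, attaches a hashed flow id for FQ and a globally synced timestamp, and satellites forward on the exit id alone.

```python
import struct
import zlib

# Hypothetical on-orbit L2 header: 32-bit exit-node id, 32-bit flow hash,
# 64-bit globally-synced timestamp in nanoseconds. Purely illustrative.
L2_HEADER = struct.Struct("!IIQ")  # network byte order, 16 bytes total

def encapsulate(exit_node: int, five_tuple: bytes, payload: bytes, now_ns: int) -> bytes:
    """Ground-side encapsulation: all L3 route matching has already happened;
    the frame carries only what a satellite needs to forward it."""
    flow_id = zlib.crc32(five_tuple)  # stable per-flow hash, usable for FQ
    return L2_HEADER.pack(exit_node, flow_id, now_ns) + payload

frame = encapsulate(0x2A2A, b"198.51.100.7:443>203.0.113.9:50000", b"hello", 1_000_000)
exit_node, flow_id, ts_ns = L2_HEADER.unpack_from(frame)
```

With the exit node fixed at encapsulation time, the in-orbit "routing table" reduces to a schedule keyed on the synced clock, updated only for satellite failure or congestion, as described above.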

I worked a bunch of this out in the period 1989-92 when I was working
on a hard SF book “centered” on the asteroid Toutatis[1] and IPng was
starting to bake (and I was still involved in ISO, I didn't really
become an ipv4 convert until I got fed up with ISO in 1991 or so). If
you spend enough time flying around the solar system from inside an
epicycle (try it! Download celestia or try https://www.asterank.com/
and take a few rides on a rock and note the timescales at which any
immediate routing change might need to be made) some things become
obvious...

Doesn't mean that I was right, but starlink's layer two design hasn't
been published so I figure I should publish what I think I already
knew before they tell me....

[1] Some info on "the toutatis way":
https://web.archive.org/web/20040415014525/http://www.taht.net/~mtaht/asteroid/toutatis.html

Sadly I never finished writing the book. I’d had too much fun
designing the network and plumbing, and spent too little on the characters
and conflict, and Vernor Vinge’s *awesome* “A Fire Upon the Deep” came out
in '93, centered on one of “my” core ideas (leveraging netnews), and
had a better plot. And characters. And other crap happened to me. Ah,
well. I am thinking if I ever get the time I will try to pull at least
a short story out of it, someday.

I keep meaning to update that spreadsheet of fast rotators. Been a long time.

>
> I expect that initially it's going to be centrally managed, but over time I
> would expect that it would become more decentralized.

I am looking forward to getting a few more nodes that are closer together to
monitor with the cosmic background bufferbloat detector. Certainly precise
sync is probably a bad idea...


>
> David Lang
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink



-- 
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-16 17:39                   ` Jonathan Bennett
@ 2021-07-19  1:05                     ` Nick Buraglio
  2021-07-19  1:20                       ` David Lang
  0 siblings, 1 reply; 37+ messages in thread
From: Nick Buraglio @ 2021-07-19  1:05 UTC (permalink / raw)
  To: Jonathan Bennett; +Cc: David P. Reed, Nathan Owens, starlink

[-- Attachment #1: Type: text/plain, Size: 12263 bytes --]

We keep saying “route”. What do we actually mean from a network stack
perspective? Are we talking about relaying light / frames / electrical
signals, or do we mean actual packet routing? There are obviously a lot of
important distinctions there.

I’m willing to bet that there is no routing (as in layer 3 packet routing)
at all, except the Dish NAT, all the way into their peering data center. The
ground stations are very likely RF-to-fiber wave division back to a carrier
hotel with no L3 buffering at all. That keeps latency very low (think O-E-O
and E-O transitions), moves L3 buffering to two locations, and keeps the
terrestrial network very easy to make redundant (optical protection, etc.).

nb

On Fri, Jul 16, 2021 at 12:39 PM Jonathan Bennett <
jonathanbennett@hackaday.com> wrote:

>
>
> On Fri, Jul 16, 2021, 12:35 PM Nathan Owens <nathan@nathan.io> wrote:
>
>> The other case where they could provide benefit is very long distance
>> paths --- NY to Tokyo, Johannesburg to London, etc... but presumably at
>> high cost, as the capacity will likely be much lower than submarine cables.
>>
>>>
> Or traffic between Starlink customers. A video call between me and someone
> else on the Starlink network is going to be drastically better if it can
> route over the sats.
>
>>
>>> On Fri, Jul 16, 2021 at 10:31 AM Mike Puchol <mike@starlink.sx> wrote:
>>
>>> Satellite optical links are useful to extend coverage to areas where you
>>> don’t have gateways - thus, they will introduce additional latency compared
>>> to two space segment hops (terminal to satellite -> satellite to gateway).
>>> If you have terminal to satellite, two optical hops, then final satellite
>>> to gateway, you will have more latency, not less.
>>>
>>> We are being “sold” optical links for what they are not IMHO.
>>>
>>> Best,
>>>
>>> Mike
>>> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>>>
>>> > As there are more satellites, the up down time will get closer to
>>> 4-5ms rather than the ~7ms you list
>>>
>>> Possibly, if you do steering to always jump to the lowest latency
>>> satellite.
>>>
>>> > with laser relays in orbit, and terminal to terminal routing in orbit,
>>> there is the potential for the theoretical minimum to tend lower
>>> Maybe for certain users really in the middle of nowhere, but I did the
>>> best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>>>
>>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>>>
>>>> hey, it's a good attitude to have :-)
>>>>
>>>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come
>>>> very
>>>> close to the goal, if not exceed it.
>>>>
>>>> As there are more satellites, the up/down time will get closer to 4-5ms
>>>> rather
>>>> than the ~7ms you list, and with laser relays in orbit, and terminal to
>>>> terminal
>>>> routing in orbit, there is the potential for the theoretical minimum to
>>>> tend
>>>> lower, giving some headroom for other overhead but still being in the
>>>> 20ms
>>>> range.
>>>>
>>>> David Lang
>>>>
>>>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>>>
>>>> > Elon said "foolish packet routing" for things over 20ms! Which seems
>>>> crazy
>>>> > if you do some basic math:
>>>> >
>>>> >   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>> >   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>> >   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>>>> >   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>>>> >   - Total one-way delay: 4.3 - 11.1ms
>>>> >   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>>>> >
>>>> > This includes no transmission delay, queuing delay,
>>>> > processing/fragmentation/reassembly/etc, and no time-division
>>>> multiplexing.
>>>> >
>>>> > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>>>> >
>>>> >> I think it depends on whether you are looking at datacenter-to-datacenter
>>>> >> latency or
>>>> >> home to remote datacenter latency :-)
>>>> >>
>>>> >> my rule of thumb for cross US ping time has been 80-100ms latency
>>>> (but
>>>> >> it's been
>>>> >> a few years since I tested it).
>>>> >>
>>>> >> I note that an article I saw today said that Elon is saying that
>>>> latency
>>>> >> will
>>>> >> improve significantly in the near future, that up/down latency is
>>>> ~20ms
>>>> >> and the
>>>> >> additional delays pushing it to the 80ms range are 'stupid packet
>>>> routing'
>>>> >> problems that they are working on.
>>>> >>
>>>> >> If they are still in that level of optimization, it doesn't surprise
>>>> me
>>>> >> that
>>>> >> they haven't really focused on the bufferbloat issue, they have more
>>>> >> obvious
>>>> >> stuff to fix first.
>>>> >>
>>>> >> David Lang
>>>> >>
>>>> >>
>>>> >>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>>> >>
>>>> >>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>>>> >>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>>>> >>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>>>> >>> Cc: "starlink@lists.bufferbloat.net" <
>>>> starlink@lists.bufferbloat.net>
>>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>> >>>
>>>> >>> Hi David
>>>> >>> In terms of the Latency that David (Reed) mentioned for California
>>>> to
>>>> >> Massachusetts of about 17ms over the public internet, it seems a bit
>>>> faster
>>>> >> than what I would expect. My own traceroute via my VDSL link shows
>>>> 14ms
>>>> >> just to get out of the operator network.
>>>> >>>
>>>> >>> https://www.wondernetwork.com  is a handy tool for checking
>>>> geographic
>>>> >> ping perf between cities, and it shows a min of about 66ms for pings
>>>> >> between Boston and San Diego
>>>> >> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms
>>>> for
>>>> >> 1-way transfer).
>>>> >>>
>>>> >>> Distance-wise this is about 4,100 km (2,500 miles), and at 2/3 the speed of
>>>> light
>>>> >> (through a pure fibre link of that distance) the propagation time is
>>>> just
>>>> >> over 20ms. If the network equipment between the Boston and San Diego
>>>> is
>>>> >> factored in, with some buffering along the way, 33ms does seem quite
>>>> >> reasonable over the 20ms for speed of light in fibre for that 1-way
>>>> transfer
>>>> >>>
>>>> >>> -Ian Wheelock
>>>> >>>
>>>> >>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf
>>>> of
>>>> >> David Lang <david@lang.hm>
>>>> >>> Date: Friday 9 July 2021 at 23:59
>>>> >>> To: "David P. Reed" <dpreed@deepplum.com>
>>>> >>> Cc: "starlink@lists.bufferbloat.net" <
>>>> starlink@lists.bufferbloat.net>
>>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>> >>>
>>>> >>>
>>>> >>> IIRC, the definition of 'low latency' for the FCC was something like
>>>> >> 100ms, and
>>>> >>> Musk was predicting <40ms.
>>>> >>>
>>>> >>> roughly competitive with landlines, and worlds better than
>>>> geostationary
>>>> >>> satellite (and many wireless ISPs)
>>>> >>>
>>>> >>> but when doing any serious testing of latency, you need to be wired
>>>> to
>>>> >> the
>>>> >>> router, wifi introduces so much variability that it swamps the
>>>> signal.
>>>> >>>
>>>> >>> David Lang
>>>> >>>
>>>> >>> On Fri, 9 Jul 2021, David P. Reed wrote:
>>>> >>>
>>>> >>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>>>> >>>> From: David P. Reed <dpreed@deepplum.com>
>>>> >>>> To: starlink@lists.bufferbloat.net
>>>> >>>> Subject: [Starlink] Starlink and bufferbloat status?
>>>> >>>>
>>>> >>>>
>>>> >>>> Early measurements of performance of Starlink have shown
>>>> significant
>>>> >> bufferbloat, as Dave Taht has shown.
>>>> >>>>
>>>> >>>> But...  Starlink is a moving target. The bufferbloat isn't a
>>>> hardware
>>>> >> issue, it should be completely manageable, starting with simple
>>>> firmware
>>>> >> changes inside the Starlink system itself. For example, implementing
>>>> >> fq_codel so that bottleneck links just drop packets according to the
>>>> Best
>>>> Practices RFC.
>>>> >>>>
>>>> >>>> So I'm hoping this has improved since Dave's measurements. How
>>>> much has
>>>> >> it improved? What's the current maximum packet latency under full
>>>> >> load? I've heard anecdotally that a friend of a friend gets 84 msec.
>>>> *ping
>>>> >> times under full load*, but he wasn't using flent or some other
>>>> measurement
>>>> >> tool of good quality that gives a true number.
>>>> >>>>
>>>> >>>> 84 msec is not great - it's marginal for Zoom quality experience
>>>> (you
>>>> >> want latencies significantly less than 100 msec. as a rule of thumb
>>>> for
>>>> >> teleconferencing quality). But it is better than Dave's measurements
>>>> showed.
>>>> >>>>
>>>> >>>> Now Musk bragged that his network was "low latency" unlike other
>>>> high
>>>> >> speed services, which means low end-to-end latency.  That got him
>>>> >> permission from the FCC to operate Starlink at all. His number was, I
>>>> >> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5,
>>>> because he
>>>> >> probably meant just the time from the ground station to the terminal
>>>> >> through the satellite. But I regularly get 17 msec. between
>>>> California and
>>>> >> Massachusetts over the public Internet)
>>>> >>>>
>>>> >>>> So 84 might be the current status. That would mean that someone at
>>>> >> Starlink might be paying some attention, but it is a long way from
>>>> what
>>>> >> Musk implied.
>>>> >>>>
>>>> >>>>
>>>> >>>> PS: I forget the number of the RFC, but the number of packets
>>>> queued on
>>>> >> an egress link should be chosen by taking the hardware bottleneck
>>>> >> throughput of any path, combined with an end-to-end Internet
>>>> underlying
>>>> >> delay of about 10 msec. to account for hops between source and
>>>> destination.
>>>> >> Let's say Starlink allocates 50 Mb/sec to each customer, and packets are
>>>> >> limited to 12,000 bits (1500 * 8), so the outbound queues should be
>>>> >> limited to about 0.01 * 50,000,000 / 12,000, which comes out to about 42
>>>> >> packets of buffering from each terminal, total, in the path from terminal
>>>> >> to the public Internet, assuming the connection to the public Internet is
>>>> >> not a problem.
>>>> >>> _______________________________________________
>>>> >>> Starlink mailing list
>>>> >>> Starlink@lists.bufferbloat.net
>>>> >>>
>>>> >>
>>>> >>> https://lists.bufferbloat.net/listinfo/starlink
>>>> >>>
>>>> >>> _______________________________________________
>>>> >> Starlink mailing list
>>>> >> Starlink@lists.bufferbloat.net
>>>> >> https://lists.bufferbloat.net/listinfo/starlink
>>>> >>
>>>> >
>>>>
>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>

[-- Attachment #2: Type: text/html, Size: 18633 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-19  1:05                     ` Nick Buraglio
@ 2021-07-19  1:20                       ` David Lang
  2021-07-19  1:34                         ` Nick Buraglio
  0 siblings, 1 reply; 37+ messages in thread
From: David Lang @ 2021-07-19  1:20 UTC (permalink / raw)
  To: Nick Buraglio; +Cc: Jonathan Bennett, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 12315 bytes --]

Elon is talking about a viable path in the future being dishy - sat - sat - 
dishy

They aren't there yet, but they are sure planning on it.

David Lang

On Sun, 18 Jul 2021, Nick Buraglio wrote:

> We keep saying “route”. What do we actually mean from a network stack
> perspective? Are we talking about relaying light / frames / electrical
> signals, or do we mean actual packet routing? There are obviously a lot of
> important distinctions there.
> I’m willing to bet that there is no routing (as in layer 3 packet routing)
> at all except the Dish NAT all the way into their peering data center. The
> ground stations are very likely RF to fiber wave division back to a carrier
> hotel with no L3 buffering at all. That keeps latency very low (think O-E-O
> and E-O transitions) and moves L3 buffering to two locations and keeps the
> terrestrial network very easy to make redundant (optical protection, etc.).
>
> nb
>
> On Fri, Jul 16, 2021 at 12:39 PM Jonathan Bennett <
> jonathanbennett@hackaday.com> wrote:
>
>>
>>
>> On Fri, Jul 16, 2021, 12:35 PM Nathan Owens <nathan@nathan.io> wrote:
>>
>>> The other case where they could provide benefit is very long distance
>>> paths --- NY to Tokyo, Johannesburg to London, etc... but presumably at
>>> high cost, as the capacity will likely be much lower than submarine cables.
>>>
>>>>
>> Or traffic between Starlink customers. A video call between me and someone
>> else on the Starlink network is going to be drastically better if it can
>> route over the sats.
>>
>>>
>>>> On Fri, Jul 16, 2021 at 10:31 AM Mike Puchol <mike@starlink.sx> wrote:
>>>
>>>> Satellite optical links are useful to extend coverage to areas where you
>>>> don’t have gateways - thus, they will introduce additional latency compared
>>>> to two space segment hops (terminal to satellite -> satellite to gateway).
>>>> If you have terminal to satellite, two optical hops, then final satellite
>>>> to gateway, you will have more latency, not less.
>>>>
>>>> We are being “sold” optical links for what they are not IMHO.
>>>>
>>>> Best,
>>>>
>>>> Mike
>>>> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
>>>>
>>>>> As there are more satellites, the up down time will get closer to
>>>> 4-5ms rather than the ~7ms you list
>>>>
>>>> Possibly, if you do steering to always jump to the lowest latency
>>>> satellite.
>>>>
>>>>> with laser relays in orbit, and terminal to terminal routing in orbit,
>>>> there is the potential for the theoretical minimum to tend lower
>>>> Maybe for certain users really in the middle of nowhere, but I did the
>>>> best-case math for "bent pipe" in Seattle area, which is as good as it gets.
>>>>
>>>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
>>>>
>>>>> hey, it's a good attitude to have :-)
>>>>>
>>>>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come
>>>>> very
>>>>> close to the goal, if not exceed it.
>>>>>
>>>>> As there are more satellites, the up/down time will get closer to 4-5ms
>>>>> rather
>>>>> than the ~7ms you list, and with laser relays in orbit, and terminal to
>>>>> terminal
>>>>> routing in orbit, there is the potential for the theoretical minimum to
>>>>> tend
>>>>> lower, giving some headroom for other overhead but still being in the
>>>>> 20ms
>>>>> range.
>>>>>
>>>>> David Lang
>>>>>
>>>>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>>>>
>>>>>> Elon said "foolish packet routing" for things over 20ms! Which seems
>>>>> crazy
>>>>>> if you do some basic math:
>>>>>>
>>>>>>   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>>>>   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>>>>   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>>>>>>   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>>>>>>   - Total one-way delay: 4.3 - 11.1ms
>>>>>>   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>>>>>>
>>>>>> This includes no transmission delay, queuing delay,
>>>>>> processing/fragmentation/reassembly/etc, and no time-division
>>>>> multiplexing.
>>>>>>
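[Editor's note: Nathan's segment-by-segment math above can be reproduced directly. The distances and the ~2c/3 fiber speed are the email's stated assumptions, not measurements.]

```python
# Best-case latency sketch using the figures from the email above.
# Distances and medium speeds are the email's assumptions, not measurements.
C_KM_PER_MS = 299.792  # speed of light in vacuum, km per millisecond

def delay_ms(distance_km, speed_fraction):
    """One-way propagation delay over a path at a fraction of c."""
    return distance_km / (C_KM_PER_MS * speed_fraction)

# (distance range in km, fraction of c for that medium)
segments = [
    ((550, 950), 1.0),   # satellite <-> user terminal (vacuum/air)
    ((550, 950), 1.0),   # satellite <-> gateway (vacuum/air)
    ((50, 800), 2 / 3),  # gateway -> PoP (fiber, ~2c/3)
    ((50, 50), 2 / 3),   # PoP -> internet (fiber)
]

best = sum(delay_ms(lo, f) for (lo, _hi), f in segments)
worst = sum(delay_ms(hi, f) for (_lo, hi), f in segments)
print(f"one-way {best:.1f}-{worst:.1f} ms, minimum RTT {2*best:.1f}-{2*worst:.1f} ms")
```

The totals land slightly below the email's 4.3 - 11.1 ms because the email rounds each segment up; either way, the floor is well under Musk's 20 ms claim only at the best-case end.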
>>>>>> On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
>>>>>>
>>>>>>> I think it depends on if you are looking at datacenter-to-datacenter
>>>>>>> latency or
>>>>>>> home to remote datacenter latency :-)
>>>>>>>
>>>>>>> my rule of thumb for cross US ping time has been 80-100ms latency
>>>>> (but
>>>>>>> it's been
>>>>>>> a few years since I tested it).
>>>>>>>
>>>>>>> I note that an article I saw today said that Elon is saying that
>>>>> latency
>>>>>>> will
>>>>>>> improve significantly in the near future, that up/down latency is
>>>>> ~20ms
>>>>>>> and the
>>>>>>> additional delays pushing it to the 80ms range are 'stupid packet
>>>>> routing'
>>>>>>> problems that they are working on.
>>>>>>>
>>>>>>> If they are still in that level of optimization, it doesn't surprise
>>>>> me
>>>>>>> that
>>>>>>> they haven't really focused on the bufferbloat issue, they have more
>>>>>>> obvious
>>>>>>> stuff to fix first.
>>>>>>>
>>>>>>> David Lang
>>>>>>>
>>>>>>>
>>>>>>>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>>>>>>
>>>>>>>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>>>>>>>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
>>>>>>>> To: David Lang <david@lang.hm>, David P. Reed <dpreed@deepplum.com>
>>>>>>>> Cc: "starlink@lists.bufferbloat.net" <
>>>>> starlink@lists.bufferbloat.net>
>>>>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>>>>>>
>>>>>>>> Hi David
>>>>>>>> In terms of the latency that David (Reed) mentioned for California
>>>>> to
>>>>>>> Massachusetts of about 17ms over the public internet, it seems a bit
>>>>> faster
>>>>>>> than what I would expect. My own traceroute via my VDSL link shows
>>>>> 14ms
>>>>>>> just to get out of the operator network.
>>>>>>>>
>>>>>>>> https://www.wondernetwork.com  is a handy tool for checking
>>>>> geographic
>>>>>>> ping perf between cities, and it shows a min of about 66ms for pings
>>>>>>> between Boston and San Diego
>>>>>>> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms
>>>>> for
>>>>>>> 1-way transfer).
>>>>>>>>
>>>>>>>> Distance-wise this is about 4,100 km (2,500 miles), and at 2/3 the speed of
>>>>> light
>>>>>>> (through a pure fibre link of that distance) the propagation time is
>>>>> just
>>>>>>> over 20ms. If the network equipment between the Boston and San Diego
>>>>> is
>>>>>>> factored in, with some buffering along the way, 33ms does seem quite
>>>>>>> reasonable over the 20ms for speed of light in fibre for that 1-way
>>>>> transfer
>>>>>>>>
>>>>>>>> -Ian Wheelock
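[Editor's note: Ian's propagation floor is easy to reproduce; the ~4,100 km distance and 2c/3 fiber speed are the assumptions stated above.]

```python
# Propagation floor for Boston <-> San Diego over ideal fiber, per the
# figures above (assumptions from the email, not a measurement).
C_KM_PER_MS = 299.792                   # speed of light in vacuum, km/ms
distance_km = 4100                      # rough great-circle distance
fiber_km_per_ms = C_KM_PER_MS * 2 / 3   # ~200 km/ms in glass

one_way_ms = distance_km / fiber_km_per_ms
print(f"{one_way_ms:.1f} ms one-way, {2 * one_way_ms:.1f} ms RTT floor")
# Observed ~66 ms RTT against a ~41 ms floor leaves ~25 ms for routing
# hops, queuing, and fiber runs longer than the great circle.
```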
>>>>>>>>
>>>>>>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf
>>>>> of
>>>>>>> David Lang <david@lang.hm>
>>>>>>>> Date: Friday 9 July 2021 at 23:59
>>>>>>>> To: "David P. Reed" <dpreed@deepplum.com>
>>>>>>>> Cc: "starlink@lists.bufferbloat.net" <
>>>>> starlink@lists.bufferbloat.net>
>>>>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>>>>>>
>>>>>>>> IIRC, the definition of 'low latency' for the FCC was something like
>>>>>>> 100ms, and
>>>>>>>> Musk was predicting <40ms.
>>>>>>>>
>>>>>>>> roughly competitive with landlines, and worlds better than
>>>>> geostationary
>>>>>>>> satellite (and many wireless ISPs)
>>>>>>>>
>>>>>>>> but when doing any serious testing of latency, you need to be wired
>>>>> to
>>>>>>> the
>>>>>>>> router, wifi introduces so much variability that it swamps the
>>>>> signal.
>>>>>>>>
>>>>>>>> David Lang
>>>>>>>>
>>>>>>>> On Fri, 9 Jul 2021, David P. Reed wrote:
>>>>>>>>
>>>>>>>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>>>>>>>>> From: David P. Reed <dpreed@deepplum.com>
>>>>>>>>> To: starlink@lists.bufferbloat.net
>>>>>>>>> Subject: [Starlink] Starlink and bufferbloat status?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Early measurements of performance of Starlink have shown
>>>>> significant
>>>>>>> bufferbloat, as Dave Taht has shown.
>>>>>>>>>
>>>>>>>>> But...  Starlink is a moving target. The bufferbloat isn't a
>>>>> hardware
>>>>>>> issue, it should be completely manageable, starting by simple
>>>>> firmware
>>>>>>> changes inside the Starlink system itself. For example, implementing
>>>>>>> fq_codel so that bottleneck links just drop packets according to the
>>>>> Best
>>>>>>> Practices RFC,
>>>>>>>>>
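[Editor's note: the fq_codel idea above — drop at the bottleneck once queue delay stays persistently high — can be illustrated with a toy version of CoDel's control law. This is a drastically simplified sketch of the RFC 8289 drop decision, not the real fq_codel scheduler.]

```python
# Highly simplified sketch of CoDel's drop decision: drop when queue
# sojourn time stays above TARGET for at least INTERVAL, then drop
# faster by the inverse-square-root control law.
from math import sqrt

TARGET = 0.005    # 5 ms acceptable standing queue delay
INTERVAL = 0.100  # 100 ms window to judge delay as "persistent"

class CoDelSketch:
    def __init__(self):
        self.next_drop = None  # earliest time a drop is allowed
        self.drop_count = 0

    def should_drop(self, sojourn_s, now_s):
        if sojourn_s < TARGET:
            self.next_drop = None
            self.drop_count = 0
            return False
        if self.next_drop is None:
            self.next_drop = now_s + INTERVAL
            return False
        if now_s >= self.next_drop:
            self.drop_count += 1
            # successive drops come sooner: INTERVAL / sqrt(count)
            self.next_drop = now_s + INTERVAL / sqrt(self.drop_count)
            return True
        return False

q = CoDelSketch()
# 250 ms of persistently bad (50 ms) sojourn times, one sample per ms:
drops = sum(q.should_drop(0.050, t / 1000) for t in range(250))
print(f"drops in 250 ms of persistent 50 ms delay: {drops}")
```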
>>>>>>>>> So I'm hoping this has improved since Dave's measurements. How
>>>>> much has
>>>>>>> it improved? What's the current maximum packet latency under full
>>>>>>> load? I've heard anecdotally that a friend of a friend gets 84 msec.
>>>>> *ping
>>>>>>> times under full load*, but he wasn't using flent or some other
>>>>> measurement
>>>>>>> tool of good quality that gives a true number.
>>>>>>>>>
>>>>>>>>> 84 msec is not great - it's marginal for Zoom quality experience
>>>>> (you
>>>>>>> want latencies significantly less than 100 msec. as a rule of thumb
>>>>> for
>>>>>>> teleconferencing quality). But it is better than Dave's measurements
>>>>> showed.
>>>>>>>>>
>>>>>>>>> Now Musk bragged that his network was "low latency" unlike other
>>>>> high
>>>>>>> speed services, which means low end-to-end latency.  That got him
>>>>>>> permission from the FCC to operate Starlink at all. His number was, I
>>>>>>> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5,
>>>>> because he
>>>>>>> probably meant just the time from the ground station to the terminal
>>>>>>> through the satellite. But I regularly get 17 msec. between
>>>>> California and
>>>>>>> Massachusetts over the public Internet)
>>>>>>>>>
>>>>>>>>> So 84 might be the current status. That would mean that someone at
>>>>>>> Starlink might be paying some attention, but it is a long way from
>>>>> what
>>>>>>> Musk implied.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> PS: I forget the number of the RFC, but the number of packets
>>>>> queued on
>>>>>>> an egress link should be chosen by taking the hardware bottleneck
>>>>>>> throughput of any path, combined with an end-to-end Internet
>>>>> underlying
>>>>>>> delay of about 10 msec. to account for hops between source and
>>>>> destination.
>>>>>>> Lets say Starlink allocates 50 Mb/sec to each customer, packets are
>>>>> limited
>>>>>>> to 12,000 bits (1500 * 8), so the outbound queues should be limited
>>>>> to
>>>>>>> about 0.01 * 50,000,000 / 12,000, which comes out to about 42
>>>>> packets from
>>>>>>> each terminal of buffering, total, in the path from terminal to
>>>>> public
>>>>>>> Internet, assuming the connection to the public Internet is not a
>>>>> problem.
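[Editor's note: the PS's sizing rule is the bandwidth-delay product. A 1500-byte packet is 12,000 bits, giving roughly 42 full-size packets for the stated 50 Mb/s allocation and ~10 ms delay budget, both of which are the email's assumptions.]

```python
# Queue sizing by bandwidth-delay product, per the PS above.
# 50 Mb/s and a 10 ms delay budget are the email's assumed inputs.
rate_bps = 50_000_000      # assumed per-customer allocation
delay_s = 0.010            # ~10 ms end-to-end queuing budget
packet_bits = 1500 * 8     # full-size 1500-byte packet = 12,000 bits

bdp_bits = rate_bps * delay_s
queue_pkts = bdp_bits / packet_bits
print(f"BDP = {bdp_bits:.0f} bits -> ~{queue_pkts:.0f} packets of buffering")
```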
>>>>>>>> _______________________________________________
>>>>>>>> Starlink mailing list
>>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>>>
>>>>>>>
>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>> Starlink mailing list
>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>>
>>>>>>
>>>>>
>>>> _______________________________________________
>>>> Starlink mailing list
>>>> Starlink@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>
>>>> _______________________________________________
>>> Starlink mailing list
>>> Starlink@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/starlink
>>>
>> _______________________________________________
>> Starlink mailing list
>> Starlink@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/starlink
>>
>

[-- Attachment #2: Type: text/plain, Size: 149 bytes --]

_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-18 22:29                 ` Dave Taht
@ 2021-07-19  1:30                   ` David Lang
  2021-07-19 12:14                     ` Michael Richardson
  0 siblings, 1 reply; 37+ messages in thread
From: David Lang @ 2021-07-19  1:30 UTC (permalink / raw)
  To: Dave Taht; +Cc: David Lang, Michael Richardson, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 3573 bytes --]

On Sun, 18 Jul 2021, Dave Taht wrote:

>>> I really want this to happen, but how will this get managed?
>>> We really don't know shit, and I'm not convinced SpaceX knows either.
>>>
>>> I'm scared that these paths will be centrally managed, and not based upon
>>> longest prefix (IPv6) match.
>>
>> unless you are going to have stations changing their IPv6 address frequently,
>
> I'd really, really, hope for a dedicated /56 per station, no changes,
> EVER, unless the user requests it or it falls under attack. Perhaps two
> /56s for failover reasons. Really silly
> to make ipv6 dynamic in this environment.

if you are going to route based on IPv6 prefixes, then the prefixes need to have 
a close relationship to location (which is what routing needs to care about). If 
you have fixed addresses and mobile stations, you can't route based on address 
prefixes.

>> don't see how you would route based on their address. The system is extremely
>> dynamic, and propagating routing tables would be a huge overhead. Remember,
>> stations are not in fixed locations, they move (and if on an airliner or rocket,
>> they move pretty quickly)
>
> IMHO:
>
> IF you were to go about designing a sane and fast planetary “mac”
> layer that could carry ipv4 and ipv6 (and hopefully ICN), traffic, on
> stuff in orbit...
>
> ( some relevant recent complaints over on this thread:
> https://www.linkedin.com/feed/update/urn:li:activity:6817609260348911616/
> )
>
> … and you were for !@#! sure that you would never have more than 2^32
> exit nodes, a bunch of things  get simpler. All you need to do on the
> ground is pick the exit node, with a couple other fields and the
> payload. How something gets there is the network’s problem. All the l3
> stuff “up there” just vanishes. You end up with a very, very simple
> and fast global routing table "up there" that is gps clock based, and
> only needs to be updated in case of a sat failure or to route around
> congestion (roughly the same thing)

remember, every terminal is potentially an exit node, and they are already 
licensed for 1m nodes in the US alone, so I would not bet on never exceeding 
2^32 nodes

> You do the complicated ipv4 or ipv6 route matching on the ground
> before translating it to L2 ... just to pick the *exit node*. Choosing
> nexthops on the ground happen on a schedule, and “nexthop” behavior on
> the sats in orbit or the solar system is also extremely predictable.
> If you yourself are moving, that too is extremely predictable. Other
> truly needed fields carried from l3 to the l2 layer should be kind of
> obvious if you look at the flaws in the ethernet and mpls “mac”,
> relative to what ipv6 tried to (over)do, what we've learned from
> history, and think about what a globally sync’d clock can do for you,
> as well as a hashed flowid.
>

IMHO, the sending station should not know or care what exit nodes exist, so how 
would they go about picking one? Or if it should care, how would it pick which 
one is best? Best is a combination of 'shortest path to destination' and 'avoid 
overly busy nodes'. How would you get that topology info to every station?

One of the wonderful things about the Internet is that no device needs to 
understand the entire network. They just need information about which next hop 
to use to get to different destinations. When the route changes, you don't have
to update every device on the network, just routers up to the point where you 
can aggregate routes (and again, you hit the static addressing vs dynamic 
stations problem)

David Lang


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-19  1:20                       ` David Lang
@ 2021-07-19  1:34                         ` Nick Buraglio
  0 siblings, 0 replies; 37+ messages in thread
From: Nick Buraglio @ 2021-07-19  1:34 UTC (permalink / raw)
  To: David Lang; +Cc: David P. Reed, Jonathan Bennett, starlink

[-- Attachment #1: Type: text/plain, Size: 13682 bytes --]

No requirement for layer3 for that. I’d bet money they’ll keep L3 out of
space.

nb

On Sun, Jul 18, 2021 at 8:20 PM David Lang <david@lang.hm> wrote:

> Elon is talking about a viable path in the future being dishy - sat - sat
> -
> dishy
>
> They aren't there yet, but they are sure planning on it
>
> David Lang
>
> On Sun, 18 Jul 2021, Nick Buraglio wrote:
>
> > We keep saying “route”. What do we actually mean from a network stack
> > perspective? Are we talking about relaying light / frames / electric or
> do
> > we mean actual packet routing, because there are obviously a lot of
> > important distinctions there.
> > I’m willing to bet that there is no routing (as in layer 3 packet
> routing)
> > at all except the Dish NAT all the way into their peering data center.
> The
> > ground stations are very likely RF to fiber wave division back to a
> carrier
> > hotel with no L3 buffering at all. That keeps latency very low (think
> O-E-O
> > and E-O transitions) and moves L3 buffering to two locations and keeps
> the
> > terrestrial network very easy to make redundant (optical protection,
> etc.).
> >
> > nb
> >
> > On Fri, Jul 16, 2021 at 12:39 PM Jonathan Bennett <
> > jonathanbennett@hackaday.com> wrote:
> >
> >>
> >>
> >> On Fri, Jul 16, 2021, 12:35 PM Nathan Owens <nathan@nathan.io> wrote:
> >>
> >>> The other case where they could provide benefit is very long distance
> >>> paths --- NY to Tokyo, Johannesburg to London, etc... but presumably at
> >>> high cost, as the capacity will likely be much lower than submarine
> cables.
> >>>
> >>>>
> >> Or traffic between Starlink customers. A video call between me and
> someone
> >> else on the Starlink network is going to be drastically better if it can
> >> route over the sats.
> >>
> >>>
> >>>> On Fri, Jul 16, 2021 at 10:31 AM Mike Puchol <mike@starlink.sx>
> wrote:
> >>>
> >>>> Satellite optical links are useful to extend coverage to areas where
> you
> >>>> don’t have gateways - thus, they will introduce additional latency
> compared
> >>>> to two space segment hops (terminal to satellite -> satellite to
> gateway).
> >>>> If you have terminal to satellite, two optical hops, then final
> satellite
> >>>> to gateway, you will have more latency, not less.
> >>>>
> >>>> We are being “sold” optical links for what they are not IMHO.
> >>>>
> >>>> Best,
> >>>>
> >>>> Mike
> >>>> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan@nathan.io>, wrote:
> >>>>
> >>>>> As there are more satellites, the up down time will get closer to
> >>>> 4-5ms rather than the ~7ms you list
> >>>>
> >>>> Possibly, if you do steering to always jump to the lowest latency
> >>>> satellite.
> >>>>
> >>>>> with laser relays in orbit, and terminal to terminal routing in
> orbit,
> >>>> there is the potential for the theoretical minimum to tend lower
> >>>> Maybe for certain users really in the middle of nowhere, but I did the
> >>>> best-case math for "bent pipe" in Seattle area, which is as good as
> it gets.
> >>>>
> >>>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david@lang.hm> wrote:
> >>>>
> >>>>> hey, it's a good attitude to have :-)
> >>>>>
> >>>>> Elon tends to set 'impossible' goals, miss the timeline a bit, and
> come
> >>>>> very
> >>>>> close to the goal, if not exceed it.
> >>>>>
> >>>>> As there are more satellites, the up down time will get closer to
> 4-5ms
> >>>>> rather
> >>>>> than the ~7ms you list, and with laser relays in orbit, and terminal
> to
> >>>>> terminal
> >>>>> routing in orbit, there is the potential for the theoretical minimum
> to
> >>>>> tend
> >>>>> lower, giving some headroom for other overhead but still being in the
> >>>>> 20ms
> >>>>> range.
> >>>>>
> >>>>> David Lang
> >>>>>
> >>>>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
> >>>>>
> >>>>>> Elon said "foolish packet routing" for things over 20ms! Which seems
> >>>>> crazy
> >>>>>> if you do some basic math:
> >>>>>>
> >>>>>>   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
> >>>>>>   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
> >>>>>>   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
> >>>>>>   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
> >>>>>>   - Total one-way delay: 4.3 - 11.1ms
> >>>>>>   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
> >>>>>>
> >>>>>> This includes no transmission delay, queuing delay,
> >>>>>> processing/fragmentation/reassembly/etc, and no time-division
> >>>>> multiplexing.
> >>>>>>
> >>>>>> On Fri, Jul 16, 2021 at 10:09 AM David Lang <david@lang.hm> wrote:
> >>>>>>
> >>>>>>> I think it depends on if you are looking at
> datacenter-to-datacenter
> >>>>>>> latency or
> >>>>>>> home to remote datacenter latency :-)
> >>>>>>>
> >>>>>>> my rule of thumb for cross US ping time has been 80-100ms latency
> >>>>> (but
> >>>>>>> it's been
> >>>>>>> a few years since I tested it).
> >>>>>>>
> >>>>>>> I note that an article I saw today said that Elon is saying that
> >>>>> latency
> >>>>>>> will
> >>>>>>> improve significantly in the near future, that up/down latency is
> >>>>> ~20ms
> >>>>>>> and the
> >>>>>>> additional delays pushing it to the 80ms range are 'stupid packet
> >>>>> routing'
> >>>>>>> problems that they are working on.
> >>>>>>>
> >>>>>>> If they are still in that level of optimization, it doesn't
> surprise
> >>>>> me
> >>>>>>> that
> >>>>>>> they haven't really focused on the bufferbloat issue, they have
> more
> >>>>>>> obvious
> >>>>>>> stuff to fix first.
> >>>>>>>
> >>>>>>> David Lang
> >>>>>>>
> >>>>>>>
> >>>>>>>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
> >>>>>>>
> >>>>>>>> Date: Fri, 16 Jul 2021 10:21:52 +0000
> >>>>>>>> From: "Wheelock, Ian" <ian.wheelock@commscope.com>
> >>>>>>>> To: David Lang <david@lang.hm>, David P. Reed <
> dpreed@deepplum.com>
> >>>>>>>> Cc: "starlink@lists.bufferbloat.net" <
> >>>>> starlink@lists.bufferbloat.net>
> >>>>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
> >>>>>>>>
> >>>>>>>> Hi David
> >>>>>>>> In terms of the latency that David (Reed) mentioned for California
> >>>>> to
> >>>>>>> Massachusetts of about 17ms over the public internet, it seems a
> bit
> >>>>> faster
> >>>>>>> than what I would expect. My own traceroute via my VDSL link shows
> >>>>> 14ms
> >>>>>>> just to get out of the operator network.
> >>>>>>>>
> >>>>>>>> https://www.wondernetwork.com  is a handy tool for checking
> >>>>> geographic
> >>>>>>> ping perf between cities, and it shows a min of about 66ms for
> pings
> >>>>>>> between Boston and San Diego
> >>>>>>> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms
> >>>>> for
> >>>>>>> 1-way transfer).
> >>>>>>>>
> >>>>>>>> Distance-wise this is about 4,100 km (2,500 miles), and at 2/3 the speed of
> >>>>> light
> >>>>>>> (through a pure fibre link of that distance) the propagation time
> is
> >>>>> just
> >>>>>>> over 20ms. If the network equipment between the Boston and San
> Diego
> >>>>> is
> >>>>>>> factored in, with some buffering along the way, 33ms does seem
> quite
> >>>>>>> reasonable over the 20ms for speed of light in fibre for that 1-way
> >>>>> transfer
> >>>>>>>>
> >>>>>>>> -Ian Wheelock
> >>>>>>>>
> >>>>>>>> From: Starlink <starlink-bounces@lists.bufferbloat.net> on behalf
> >>>>> of
> >>>>>>> David Lang <david@lang.hm>
> >>>>>>>> Date: Friday 9 July 2021 at 23:59
> >>>>>>>> To: "David P. Reed" <dpreed@deepplum.com>
> >>>>>>>> Cc: "starlink@lists.bufferbloat.net" <
> >>>>> starlink@lists.bufferbloat.net>
> >>>>>>>> Subject: Re: [Starlink] Starlink and bufferbloat status?
> >>>>>>>>
> >>>>>>>> IIRC, the definition of 'low latency' for the FCC was something
> like
> >>>>>>> 100ms, and
> >>>>>>>> Musk was predicting <40ms.
> >>>>>>>>
> >>>>>>>> roughly competitive with landlines, and worlds better than
> >>>>> geostationary
> >>>>>>>> satellite (and many wireless ISPs)
> >>>>>>>>
> >>>>>>>> but when doing any serious testing of latency, you need to be
> wired
> >>>>> to
> >>>>>>> the
> >>>>>>>> router, wifi introduces so much variability that it swamps the
> >>>>> signal.
> >>>>>>>>
> >>>>>>>> David Lang
> >>>>>>>>
> >>>>>>>> On Fri, 9 Jul 2021, David P. Reed wrote:
> >>>>>>>>
> >>>>>>>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
> >>>>>>>>> From: David P. Reed <dpreed@deepplum.com>
> >>>>>>>>> To: starlink@lists.bufferbloat.net
> >>>>>>>>> Subject: [Starlink] Starlink and bufferbloat status?
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Early measurements of performance of Starlink have shown
> >>>>> significant
> >>>>>>> bufferbloat, as Dave Taht has shown.
> >>>>>>>>>
> >>>>>>>>> But...  Starlink is a moving target. The bufferbloat isn't a
> >>>>> hardware
> >>>>>>> issue, it should be completely manageable, starting by simple
> >>>>> firmware
> >>>>>>> changes inside the Starlink system itself. For example,
> implementing
> >>>>>>> fq_codel so that bottleneck links just drop packets according to
> the
> >>>>> Best
> >>>>>>> Practices RFC,
> >>>>>>>>>
> >>>>>>>>> So I'm hoping this has improved since Dave's measurements. How
> >>>>> much has
> >>>>>>> it improved? What's the current maximum packet latency under full
> >>>>>>> load? I've heard anecdotally that a friend of a friend gets 84
> msec.
> >>>>> *ping
> >>>>>>> times under full load*, but he wasn't using flent or some other
> >>>>> measurement
> >>>>>>> tool of good quality that gives a true number.
> >>>>>>>>>
> >>>>>>>>> 84 msec is not great - it's marginal for Zoom quality experience
> >>>>> (you
> >>>>>>> want latencies significantly less than 100 msec. as a rule of thumb
> >>>>> for
> >>>>>>> teleconferencing quality). But it is better than Dave's
> measurements
> >>>>> showed.
> >>>>>>>>>
> >>>>>>>>> Now Musk bragged that his network was "low latency" unlike other
> >>>>> high
> >>>>>>> speed services, which means low end-to-end latency.  That got him
> >>>>>>> permission from the FCC to operate Starlink at all. His number
> was, I
> >>>>>>> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5,
> >>>>> because he
> >>>>>>> probably meant just the time from the ground station to the
> terminal
> >>>>>>> through the satellite. But I regularly get 17 msec. between
> >>>>> California and
> >>>>>>> Massachusetts over the public Internet)
> >>>>>>>>>
> >>>>>>>>> So 84 might be the current status. That would mean that someone
> at
> >>>>>>> Starlink might be paying some attention, but it is a long way from
> >>>>> what
> >>>>>>> Musk implied.
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> PS: I forget the number of the RFC, but the number of packets
> >>>>> queued on
> >>>>>>> an egress link should be chosen by taking the hardware bottleneck
> >>>>>>> throughput of any path, combined with an end-to-end Internet
> >>>>> underlying
> >>>>>>> delay of about 10 msec. to account for hops between source and
> >>>>> destination.
> >>>>>>> Lets say Starlink allocates 50 Mb/sec to each customer, packets are
> >>>>> limited
> >>>>>>> to 12,000 bits (1500 * 8), so the outbound queues should be limited
> >>>>> to
> >>>>>>> about 0.01 * 50,000,000 / 12,000, which comes out to about 42
> >>>>> packets from
> >>>>>>> each terminal of buffering, total, in the path from terminal to
> >>>>> public
> >>>>>>> Internet, assuming the connection to the public Internet is not a
> >>>>> problem.
> >>>>>>>> _______________________________________________
> >>>>>>>> Starlink mailing list
> >>>>>>>> Starlink@lists.bufferbloat.net
> >>>>>>>>
> >>>>>>>
> >>>>>
> https://lists.bufferbloat.net/listinfo/starlink
> >>>>>>>>
> >>>>>>>> _______________________________________________
> >>>>>>> Starlink mailing list
> >>>>>>> Starlink@lists.bufferbloat.net
> >>>>>>> https://lists.bufferbloat.net/listinfo/starlink
> >>>>>>>
> >>>>>>
> >>>>>
> >>>> _______________________________________________
> >>>> Starlink mailing list
> >>>> Starlink@lists.bufferbloat.net
> >>>> https://lists.bufferbloat.net/listinfo/starlink
> >>>>
> >>>> _______________________________________________
> >>> Starlink mailing list
> >>> Starlink@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/starlink
> >>>
> >> _______________________________________________
> >> Starlink mailing list
> >> Starlink@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/starlink
> >>
> >
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
>

[-- Attachment #2: Type: text/html, Size: 23021 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [Starlink] Starlink and bufferbloat status?
  2021-07-19  1:30                   ` David Lang
@ 2021-07-19 12:14                     ` Michael Richardson
  0 siblings, 0 replies; 37+ messages in thread
From: Michael Richardson @ 2021-07-19 12:14 UTC (permalink / raw)
  To: David Lang, Dave Taht, starlink, David P. Reed

[-- Attachment #1: Type: text/plain, Size: 1865 bytes --]


David Lang <david@lang.hm> wrote:
    mcr> I'm scared that these paths will be centrally managed, and not based upon
    mcr> longest prefix (IPv6) match.

    david> unless you are going to have stations changing their IPv6 address frequently,

    dt> I'd really, really, hope for a dedicated /56 per station, no changes,
    dt> EVER, unless the user requests it or it falls under attack. Perhaps two
    dt> /56s for failover reasons. Really silly
    dt> to make ipv6 dynamic in this environment.

    david> if you are going to route based on IPv6 prefixes, then the
    david> prefixes need to have a close relationship to location (which is
    david> what routing needs to care about). If you have fixed addresses and
    david> mobile stations, you can't route based on address prefixes.

Well, you could go all the way to https://datatracker.ietf.org/doc/html/draft-hain-ipv6-geo-addr-02
This makes most sense if you then have SHIM6 (being revised soon), and/or
MPTCP to move from this overlay network to the transport network.

    > One of the wonderful things about the Internet is that no device needs to
    > understand the entire network. They just need information about which next
    > hop to use to get to different destinations.

This is a really important thing to remember.
RFC6550 adds the RFC6553 header to deal with some kinds of loops.
RFC6550 (RPL) might work well among satellites, or maybe not.

One would have a multitude of DODAGs, but if they are non-storing DODAGs,
then individual satellites don't need to keep routing state.
I would run a DODAG of exit nodes, and run user traffic as an overlay.

--
]               Never tell me the odds!                 | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works        |    IoT architect   [
]     mcr@sandelman.ca  http://www.sandelman.ca/        |   ruby on rails    [


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 487 bytes --]

^ permalink raw reply	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2021-07-19 12:14 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <mailman.3.1625846401.13780.starlink@lists.bufferbloat.net>
2021-07-09 18:40 ` [Starlink] Starlink and bufferbloat status? David P. Reed
2021-07-09 18:45   ` Nathan Owens
2021-07-09 19:08   ` Ben Greear
2021-07-09 20:08   ` Dick Roy
2021-07-09 22:58   ` David Lang
2021-07-09 23:07     ` Daniel AJ Sokolov
2021-07-10 15:58       ` Dave Taht
2021-07-16 10:21     ` Wheelock, Ian
2021-07-16 17:08       ` David Lang
2021-07-16 17:13         ` Nathan Owens
2021-07-16 17:24           ` David Lang
2021-07-16 17:29             ` Nathan Owens
2021-07-16 17:31               ` Mike Puchol
2021-07-16 17:35                 ` Nathan Owens
2021-07-16 17:39                   ` Jonathan Bennett
2021-07-19  1:05                     ` Nick Buraglio
2021-07-19  1:20                       ` David Lang
2021-07-19  1:34                         ` Nick Buraglio
2021-07-17 18:36                   ` David P. Reed
2021-07-17 18:42                     ` David Lang
2021-07-18 19:05                     ` David Lang
2021-07-16 17:38                 ` David Lang
2021-07-16 17:42                   ` Mike Puchol
2021-07-16 18:48                     ` David Lang
2021-07-16 20:57                       ` Mike Puchol
2021-07-16 21:30                         ` David Lang
2021-07-16 21:40                           ` Mike Puchol
2021-07-16 22:40                             ` Jeremy Austin
2021-07-16 23:04                               ` Nathan Owens
2021-07-17 10:02                                 ` [Starlink] Free Space Optics - was " Michiel Leenaars
2021-07-17  1:12                             ` [Starlink] " David Lang
     [not found]                             ` <d86d6590b6f24dfa8f9775ed3bb3206c@DM6PR05MB5915.namprd05.prod.outlook.com>
2021-07-17 15:55                               ` Fabian E. Bustamante
2021-07-16 20:51             ` Michael Richardson
2021-07-18 19:17               ` David Lang
2021-07-18 22:29                 ` Dave Taht
2021-07-19  1:30                   ` David Lang
2021-07-19 12:14                     ` Michael Richardson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox