[Starlink] Starlink and bufferbloat status?

Nick Buraglio buraglio at forwardingplane.net
Sun Jul 18 21:05:39 EDT 2021


We keep saying “route”. What do we actually mean from a network stack
perspective? Are we talking about relaying light / frames / electrical
signals, or do we mean actual packet routing? There are obviously a lot of
important distinctions there.

I’m willing to bet that there is no routing (as in layer 3 packet routing)
at all, other than the NAT on the dish, all the way into their peering data
center. The ground stations are very likely RF to fiber wave-division
multiplexing back to a carrier hotel with no L3 buffering at all. That keeps
latency very low (think O-E-O and E-O transitions), concentrates L3
buffering in two locations, and keeps the terrestrial network very easy to
make redundant (optical protection, etc.).

nb

On Fri, Jul 16, 2021 at 12:39 PM Jonathan Bennett <
jonathanbennett at hackaday.com> wrote:

>
>
> On Fri, Jul 16, 2021, 12:35 PM Nathan Owens <nathan at nathan.io> wrote:
>
>> The other case where they could provide benefit is very long distance
>> paths --- NY to Tokyo, Johannesburg to London, etc... but presumably at
>> high cost, as the capacity will likely be much lower than submarine cables.
>>
>>>
> Or traffic between Starlink customers. A video call between me and someone
> else on the Starlink network is going to be drastically better if it can
> route over the sats.
>
>>
>>> On Fri, Jul 16, 2021 at 10:31 AM Mike Puchol <mike at starlink.sx> wrote:
>>
>>> Satellite optical links are useful to extend coverage to areas where you
>>> don’t have gateways - thus, they will introduce additional latency compared
>>> to two space segment hops (terminal to satellite -> satellite to gateway).
>>> If you have terminal to satellite, two optical hops, then final satellite
>>> to gateway, you will have more latency, not less.
>>>
>>> We are being “sold” optical links for what they are not IMHO.
>>>
>>> Best,
>>>
>>> Mike
>>> On Jul 16, 2021, 19:29 +0200, Nathan Owens <nathan at nathan.io>, wrote:
>>>
>>> > As there are more satellites, the up/down time will get closer to
>>> > 4-5ms rather than the ~7ms you list
>>>
>>> Possibly, if you do steering to always jump to the lowest latency
>>> satellite.
>>>
>>> > with laser relays in orbit, and terminal to terminal routing in orbit,
>>> > there is the potential for the theoretical minimum to tend lower
>>>
>>> Maybe for certain users really in the middle of nowhere, but I did the
>>> best-case math for "bent pipe" in the Seattle area, which is as good as
>>> it gets.
>>>
>>> On Fri, Jul 16, 2021 at 10:24 AM David Lang <david at lang.hm> wrote:
>>>
>>>> hey, it's a good attitude to have :-)
>>>>
>>>> Elon tends to set 'impossible' goals, miss the timeline a bit, and come
>>>> very
>>>> close to the goal, if not exceed it.
>>>>
>>>> As there are more satellites, the up/down time will get closer to 4-5ms
>>>> rather
>>>> than the ~7ms you list, and with laser relays in orbit, and terminal to
>>>> terminal
>>>> routing in orbit, there is the potential for the theoretical minimum to
>>>> tend
>>>> lower, giving some headroom for other overhead but still being in the
>>>> 20ms
>>>> range.
>>>>
>>>> David Lang
>>>>
>>>>   On Fri, 16 Jul 2021, Nathan Owens wrote:
>>>>
>>>> > Elon said "foolish packet routing" for things over 20ms! Which seems
>>>> crazy
>>>> > if you do some basic math:
>>>> >
>>>> >   - Sat to User Terminal distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>> >   - Sat to GW distance: 550-950km air/vacuum: 1.9 - 3.3ms
>>>> >   - GW to PoP Distance: 50-800km fiber: 0.25 - 4ms
>>>> >   - PoP to Internet Distance: 50km fiber: 0.25 - 0.5ms
>>>> >   - Total one-way delay: 4.3 - 11.1ms
>>>> >   - Theoretical minimum RTT: 8.6ms - 22.2ms, call it 15.4ms
>>>> >
>>>> > This includes no transmission delay, queuing delay,
>>>> > processing/fragmentation/reassembly/etc, and no time-division
>>>> multiplexing.
>>>> >
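As a rough sanity check, here is a small Python sketch that just re-adds the per-segment figures quoted above; the distances and per-hop delays are the assumptions from that message, not measurements:

# Re-adding the per-segment propagation figures quoted above (assumptions,
# not measurements). Rough guide: 1 ms is ~300 km in vacuum, ~200 km in fiber.
segments_ms = {
    "sat to user terminal, 550-950 km vacuum": (1.9, 3.3),
    "sat to gateway, 550-950 km vacuum":       (1.9, 3.3),
    "gateway to PoP, 50-800 km fiber":         (0.25, 4.0),
    "PoP to Internet, ~50 km fiber":           (0.25, 0.5),
}

one_way_low = sum(lo for lo, _ in segments_ms.values())
one_way_high = sum(hi for _, hi in segments_ms.values())

print(f"one-way delay:   {one_way_low:.1f} - {one_way_high:.1f} ms")        # 4.3 - 11.1
print(f"theoretical RTT: {2*one_way_low:.1f} - {2*one_way_high:.1f} ms")    # 8.6 - 22.2
print(f"midpoint RTT:    {one_way_low + one_way_high:.1f} ms")              # 15.4
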
>>>> > On Fri, Jul 16, 2021 at 10:09 AM David Lang <david at lang.hm> wrote:
>>>> >
>>>> >> I think it depends on whether you are looking at datacenter-to-datacenter
>>>> >> latency or
>>>> >> home to remote datacenter latency :-)
>>>> >>
>>>> >> My rule of thumb for cross-US ping time has been 80-100ms (but it's
>>>> >> been a few years since I tested it).
>>>> >>
>>>> >> I note that an article I saw today said that Elon is saying that
>>>> >> latency will improve significantly in the near future, that up/down
>>>> >> latency is ~20ms and the additional delays pushing it to the 80ms
>>>> >> range are 'stupid packet routing' problems that they are working on.
>>>> >>
>>>> >> If they are still at that level of optimization, it doesn't surprise
>>>> >> me that they haven't really focused on the bufferbloat issue; they
>>>> >> have more obvious stuff to fix first.
>>>> >>
>>>> >> David Lang
>>>> >>
>>>> >>
>>>> >>   On Fri, 16 Jul 2021, Wheelock, Ian wrote:
>>>> >>
>>>> >>> Date: Fri, 16 Jul 2021 10:21:52 +0000
>>>> >>> From: "Wheelock, Ian" <ian.wheelock at commscope.com>
>>>> >>> To: David Lang <david at lang.hm>, David P. Reed <dpreed at deepplum.com>
>>>> >>> Cc: "starlink at lists.bufferbloat.net" <starlink at lists.bufferbloat.net>
>>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>> >>>
>>>> >>> Hi David
>>>> >>> In terms of the latency of about 17ms that David (Reed) mentioned for
>>>> >>> California to Massachusetts over the public internet, it seems a bit
>>>> >>> faster than what I would expect. My own traceroute via my VDSL link
>>>> >>> shows 14ms just to get out of the operator network.
>>>> >>>
>>>> >>> https://www.wondernetwork.com is a handy tool for checking geographic
>>>> >>> ping perf between cities, and it shows a min of about 66ms for pings
>>>> >>> between Boston and San Diego
>>>> >>> https://wondernetwork.com/pings/boston/San%20Diego (so about 33ms for
>>>> >>> 1-way transfer).
>>>> >>>
>>>> >>> Distance-wise this is about 4,100 km (2,500 mi), and at 2/3 the speed
>>>> >>> of light (through a pure fibre link of that distance) the propagation
>>>> >>> time is just over 20ms. If the network equipment between Boston and
>>>> >>> San Diego is factored in, with some buffering along the way, 33ms does
>>>> >>> seem quite reasonable against the 20ms speed-of-light-in-fibre figure
>>>> >>> for that 1-way transfer.
>>>> >>>
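For reference, a small Python sketch of that back-of-the-envelope number; the 4,100 km distance and 2/3 c fibre speed are the assumptions above:

# Fibre propagation at roughly 2/3 the speed of light in vacuum.
# 4,100 km is the approximate Boston - San Diego distance quoted above.
C_VACUUM_KM_S = 299_792
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3   # ~200,000 km/s

distance_km = 4_100
one_way_ms = distance_km / C_FIBER_KM_S * 1000

print(f"fibre propagation, one-way: {one_way_ms:.1f} ms")      # ~20.5 ms
print(f"round trip:                 {2 * one_way_ms:.1f} ms")  # ~41 ms
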
>>>> >>> -Ian Wheelock
>>>> >>>
>>>> >>> From: Starlink <starlink-bounces at lists.bufferbloat.net> on behalf of
>>>> >>> David Lang <david at lang.hm>
>>>> >>> Date: Friday 9 July 2021 at 23:59
>>>> >>> To: "David P. Reed" <dpreed at deepplum.com>
>>>> >>> Cc: "starlink at lists.bufferbloat.net" <starlink at lists.bufferbloat.net>
>>>> >>> Subject: Re: [Starlink] Starlink and bufferbloat status?
>>>> >>>
>>>> >>>
>>>> >>> IIRC, the definition of 'low latency' for the FCC was something like
>>>> >>> 100ms, and Musk was predicting <40ms.
>>>> >>>
>>>> >>> Roughly competitive with landlines, and worlds better than
>>>> >>> geostationary satellite (and many wireless ISPs).
>>>> >>>
>>>> >>> But when doing any serious testing of latency, you need to be wired
>>>> >>> to the router; wifi introduces so much variability that it swamps the
>>>> >>> signal.
>>>> >>>
>>>> >>> David Lang
>>>> >>>
>>>> >>> On Fri, 9 Jul 2021, David P. Reed wrote:
>>>> >>>
>>>> >>>> Date: Fri, 9 Jul 2021 14:40:01 -0400 (EDT)
>>>> >>>> From: David P. Reed <dpreed at deepplum.com>
>>>> >>>> To: starlink at lists.bufferbloat.net
>>>> >>>> Subject: [Starlink] Starlink and bufferbloat status?
>>>> >>>>
>>>> >>>>
>>>> >>>> Early measurements of Starlink's performance have shown significant
>>>> >>>> bufferbloat, as Dave Taht has demonstrated.
>>>> >>>>
>>>> >>>> But... Starlink is a moving target. The bufferbloat isn't a hardware
>>>> >>>> issue; it should be completely manageable, starting with simple
>>>> >>>> firmware changes inside the Starlink system itself. For example,
>>>> >>>> implementing fq_codel so that bottleneck links just drop packets
>>>> >>>> according to the Best Practices RFC.
>>>> >>>>
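Not that anyone outside SpaceX can change the dish firmware, but for a Linux router sitting behind the dish, a minimal sketch of what turning on fq_codel looks like, driven from Python via iproute2's tc; the "eth0" interface name is a placeholder and the command needs root:

# A sketch only - not Starlink's firmware. Replaces the root qdisc on a
# Linux egress interface with fq_codel using iproute2's "tc".
import subprocess

def enable_fq_codel(dev: str = "eth0") -> None:
    """Replace the root qdisc on `dev` with fq_codel (default parameters)."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", dev, "root", "fq_codel"],
        check=True,
    )

if __name__ == "__main__":
    enable_fq_codel()
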
>>>> >>>> So I'm hoping this has improved since Dave's measurements. How much
>>>> >>>> has it improved? What's the current maximum packet latency under full
>>>> >>>> load? I've heard anecdotally that a friend of a friend gets 84 msec.
>>>> >>>> *ping times under full load*, but he wasn't using flent or some other
>>>> >>>> measurement tool of good quality that gives a true number.
>>>> >>>>
>>>> >>>> 84 msec is not great - it's marginal for a Zoom-quality experience
>>>> >>>> (you want latencies significantly less than 100 msec. as a rule of
>>>> >>>> thumb for teleconferencing quality). But it is better than Dave's
>>>> >>>> measurements showed.
>>>> >>>>
>>>> >>>> Now Musk bragged that his network was "low latency" unlike other
>>>> >>>> high-speed services, which means low end-to-end latency. That got him
>>>> >>>> permission from the FCC to operate Starlink at all. His number was, I
>>>> >>>> think, < 5 msec. 84 is a lot more than 5. (I didn't believe 5, because
>>>> >>>> he probably meant just the time from the ground station to the
>>>> >>>> terminal through the satellite. But I regularly get 17 msec. between
>>>> >>>> California and Massachusetts over the public Internet.)
>>>> >>>>
>>>> >>>> So 84 might be the current status. That would mean that someone at
>>>> >>>> Starlink might be paying some attention, but it is a long way from
>>>> >>>> what Musk implied.
>>>> >>>>
>>>> >>>>
>>>> >>>> PS: I forget the number of the RFC, but the number of packets queued
>>>> >>>> on an egress link should be chosen by taking the hardware bottleneck
>>>> >>>> throughput of any path, combined with an end-to-end underlying
>>>> >>>> Internet delay of about 10 msec. to account for hops between source
>>>> >>>> and destination. Let's say Starlink allocates 50 Mb/sec to each
>>>> >>>> customer and packets are limited to about 12,000 bits (1500 bytes *
>>>> >>>> 8); then the outbound queues should be limited to about 0.01 *
>>>> >>>> 50,000,000 / 12,000, which comes out to roughly 40 packets of
>>>> >>>> buffering from each terminal, total, in the path from terminal to
>>>> >>>> public Internet, assuming the connection to the public Internet is
>>>> >>>> not a problem.
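A small Python sketch of that bandwidth-delay-product sizing; the 50 Mb/s allocation, 10 ms delay, and 1500-byte packet size are the assumptions above, not Starlink's actual numbers:

# Queue sizing from the bandwidth-delay product, per the rule of thumb above.
link_rate_bps = 50_000_000    # assumed per-customer allocation, 50 Mb/s
underlying_delay_s = 0.010    # assumed end-to-end underlying delay, 10 ms
packet_bits = 1500 * 8        # ~MTU-sized packet, 12,000 bits

bdp_bits = link_rate_bps * underlying_delay_s
queue_packets = bdp_bits / packet_bits

print(f"bandwidth-delay product: {bdp_bits:,.0f} bits")          # 500,000 bits
print(f"queue limit:             ~{queue_packets:.0f} packets")  # ~42 packets
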