Starlink has bufferbloat. Bad.
From: Ulrich Speidel <u.speidel@auckland.ac.nz>
To: J Pan <Pan@uvic.ca>
Cc: starlink@lists.bufferbloat.net
Subject: [Starlink] Re: [LibreQoS] Re: Keynote: QoE/QoS - Bandwidth Is
Date: Mon, 10 Nov 2025 08:51:02 +1300
Message-ID: <cd84c98e-19be-4c7d-8627-235f1cfac1f8@auckland.ac.nz>
In-Reply-To: <CAHn=e4iN0E4=s6c=1tQfu3W1687DrSbadq8SEEwBpNJo4O=xzA@mail.gmail.com>

But what does bicast mean? It means sending a packet to both queues - 
the one for sat A and the one for sat B, right?

Let's assume that the time of handover is when Dishy switches its beam 
over. Then, for a packet from sat A to reach it, that packet needs to 
make it to the front of the TX queue for sat A at least L_a ms before 
handover time, to allow for the transit delay to Dishy.

Then we have the following possible scenarios:

  * The packet reaches the front of the queue for sat A before the
    handover and gets transmitted to sat A, and from there to Dishy.
    The packet in the queue for sat B is then surplus to requirements.
    But does it get removed? If not, it needlessly takes up capacity
    (= introduces delay) on sat B. But that could be solved, I guess,
    as long as handover is to/from the same gateway.
  * The packet is still in both queues at the time of handover. Then we
    don't get packet loss, but the packet in the queue for sat A is now
    surplus to requirements. But does it get removed? If not, it
    needlessly takes up capacity on sat A.
  * The packet reaches the front of the queue for sat B more than L_b
    ms before the handover, but Dishy isn't listening to sat B yet.
    What does Starlink do with that packet? If it drops it, it risks
    that the copy in the queue for sat A won't make it to the front of
    that queue in time.

The list isn't complete but it shows that bicast alone isn't a fix.
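The scenarios above can be put into a toy model. To be clear, this is a sketch under my own assumptions - the timing parameters, the names, and the drop-vs-hold policy for early packets are hypothetical, not anything Starlink has published:

```python
# Toy model of one bicast packet sent at t=0 towards a handover at t=handover_ms.
# One copy sits in the gateway's TX queue for sat A, another in the queue for
# sat B. sojourn_*_ms is the time each copy needs to reach the head of its
# queue; L_*_ms is the transit delay from gateway via that satellite to Dishy.

def classify_bicast(sojourn_a_ms, sojourn_b_ms, L_a_ms, L_b_ms, handover_ms):
    """Classify which copy (if any) reaches Dishy."""
    # Copy via A only arrives if it leaves at least L_a ms before handover.
    a_delivers = sojourn_a_ms <= handover_ms - L_a_ms
    # Copy via B is only useful after Dishy switches beams; if it would hit
    # the head of B's queue so early that immediate transmission arrives
    # before the switch, the gateway must hold or drop it (assumed: drop).
    b_too_early = sojourn_b_ms < handover_ms - L_b_ms
    if a_delivers:
        return "via A (copy in B's queue is surplus)"
    if b_too_early:
        return "B copy dropped as too early; A copy stranded -> loss"
    return "via B (copy in A's queue is stranded)"

print(classify_bicast(5, 50, 10, 10, 100))    # the scenario 1 case
print(classify_bicast(200, 50, 10, 10, 100))  # the scenario 3 loss case
print(classify_bicast(200, 95, 10, 10, 100))  # the scenario 2 case
```

Under a drop-on-early policy, the third scenario still loses the packet even with bicast, which is the point.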

You could, however, look at the expected queue sojourn time in each queue 
as well as the expected propagation delay to Dishy, and then place the 
packet only into the queue that it needs to be in. But maybe that's not 
happening yet?
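That single-enqueue alternative might look something like this (again a sketch under my own assumptions - the sojourn estimates, the hold-at-gateway idea, and all names are hypothetical):

```python
# Pick one queue per packet from estimated sojourn times and transit delays,
# instead of bicasting into both. Returns the chosen queue and how long the
# gateway should hold the packet so it doesn't arrive before Dishy switches.

def pick_queue(now_ms, handover_ms, est_sojourn_a_ms, L_a_ms,
               est_sojourn_b_ms, L_b_ms):
    # Expected arrival at Dishy if enqueued for sat A right now.
    eta_via_a = now_ms + est_sojourn_a_ms + L_a_ms
    if eta_via_a < handover_ms:
        return ("A", 0.0)  # should clear A's queue in time
    # Otherwise enqueue for sat B; if it would reach Dishy before the
    # beam switch, hold it at the gateway for the difference.
    eta_via_b = now_ms + est_sojourn_b_ms + L_b_ms
    return ("B", max(0.0, handover_ms - eta_via_b))
```

The catch, as ever, is that the sojourn estimates are already stale by the time the packet is enqueued.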

On 9/11/2025 6:08 pm, J Pan wrote:
> starlink can use bicast to reduce handover loss, especially for the
> downlink to the user terminal, but there are still handover delay
> spikes and packet loss, especially on the uplink as well
> --
> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA, Web.UVic.CA/~pan
>
> On Sat, Nov 8, 2025 at 5:04 PM Ulrich Speidel via Starlink
> <starlink@lists.bufferbloat.net> wrote:
>> Starlink is a bit unusual in terms of its packet loss mechanism.
>>
>> In "normal" networks, we have damaged packets (low SNR or interference
>> leading to symbol errors in the modulation that then translate into bit
>> errors and failing checksums), queue drops as part of the normal
>> congestion control behaviour, and - not sure how much of a problem this
>> still is these days - receive window overflows.
>>
>> In the case of Starlink and handovers, there appears to be a separate
>> uplink queue to each satellite on the gateway side of things.
>> Suppose your dishy talks to sat A first and then gets handed over to sat
>> B. Up until handover time, packets heading to your dishy get sent to the
>> queue for sat A, thereafter to the queue for sat B. These queues appear
>> to be different and - with different packets and different volume in
>> them - seem to lead to situations where a packet addressed to you that's
>> been enqueued for A hasn't reached the front of the queue by handover
>> time to B. As A no longer talks to you, this packet is now on a road
>> to nowhere.
>>
>> Of course, this loss only ever happens at handover time. No such thing
>> as a uniform distribution here. Moreover, it's NOT a loss that should be
>> interpreted as a congestion control signal to a TCP sender (or whatever
>> other protocol is trying to congestion control itself, e.g., some QUIC
>> implementation or somesuch).
>>
>> On the latency vs. bandwidth debate: To me, this seems a bit like
>> arguing whether width or height is more important for area. Both
>> matter, and both are equally badly understood by marketing folk.
>>
>> If your video needs to stream X Mb/s to the viewer, then the channel
>> between the sender and receiver needs B > X Mb/s of available and
>> accessible bandwidth. That's why bandwidth matters. Basta.
>>
>> The "accessible" part of the bandwidth is where the latency comes in.
>> Remember the goal is to keep a bandwidth-delay product's worth of data
>> in flight on the lowest-bandwidth link along the forwarding chain of
>> your packets, i.e., to keep that bit of pipe filled. It's a lofty goal because
>> whatever you do in terms of increasing or decreasing your sending rate
>> only has an effect some time later, by which time the conditions you're
>> trying to respond to may have changed. Your picture of the current
>> conditions is always outdated (and more outdated the longer the
>> latency), and just to add insult to injury, the conditions (available
>> bandwidth, RTT) can also change without your control input. That change
>> is driven, among other things, by algorithms that may be trying to fill
>> a different pipe, or that at the very least have a different
>> (time-shifted) view of the conditions at the pipe you're trying to fill. This is a
>> fundamental control problem, not a matter of BBR over BIC or Reno or
>> Vegas or whatever you've tried on ns2.
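To put a number on the pipe-filling target above: the bandwidth-delay product is just rate times RTT. A quick sketch, with invented illustrative figures:

```python
# Bandwidth-delay product: bytes that must be in flight to keep a given
# bottleneck link full. The example figures below are made up.

def bdp_bytes(bottleneck_mbit_s: float, rtt_ms: float) -> int:
    return int(bottleneck_mbit_s * 1e6 / 8 * rtt_ms / 1e3)

print(bdp_bytes(100, 40))   # 500000  -> ~500 kB for 100 Mb/s at 40 ms RTT
print(bdp_bytes(100, 600))  # 7500000 -> ~7.5 MB for the same rate at 600 ms
```

The longer the RTT, the more in-flight data the sender must size correctly from an outdated picture - which is the control problem in a nutshell.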
>>
>> There are things that make this control problem a little easier: lower
>> latency obviously, but also exclusive links (or at least fewer competing
>> flows if you can't get exclusivity), fewer bandwidth bottlenecks along
>> your hops and bottlenecks that are more predictable, i.e., where the
>> input queues get processed at a constant processing rate. This is why
>> last mile fibre and Ethernet in your home are so nice (just because you
>> have your own WiFi router doesn't mean you have the channel to
>> yourself). It's also why CDNs are so popular: They combine low latency
>> with less competition for bandwidth and fewer hops.
>>
>> Starlink's challenge is that their bandwidth per cell is limited. Not
>> quite to the point where it's a problem everywhere, but certainly to the
>> point where it's becoming a problem in some places. If you're being
>> asked to pay a congestion surcharge when you sign up, you're in one of
>> those places. Moving to fq_codel and bringing down the latency has made
>> more of that bandwidth accessible, but again there's a limit to what you
>> can do there - it's not like reducing latency increases total bandwidth.
>>
>> On 9/11/2025 8:57 am, J Pan via Starlink wrote:
>>> if there are no obstructions, starlink nowadays can achieve 1% packet
>>> loss, often 0.1%, mostly due to satellite handover, so for the netflix
>>> open connect appliances inside starlink pop, they can do some
>>> starlink-specific optimization to mask away the handover for starlink
>>> users, e.g., https://oac.uvic.ca/starlink/wp-content/uploads/sites/8876/2024/09/leonet24-victor.pdf
>>> --
>>> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA, Web.UVic.CA/~pan
>>>
>>> On Sat, Nov 8, 2025 at 11:12 AM David Fernández <davidfdzp@gmail.com> wrote:
>>>> "making the link layer more reliable is "the way"", indeed. In the wireless data link design projects I have participated in, the requirement has always been that the packet loss rate (PLR) at the IP layer due to physical layer errors be uniformly distributed and no higher than 0.1% for TCP, while for wireless links carrying only VoIP, using RTP and the G.729 codec, up to 1% PLR is acceptable.
>>>>
>>>> Then, I have seen that Starlink routinely exceeds 1% PLR, reaching up to 6% or even more, but TCP works anyway, somehow. It may be that CUBIC and BBR are more robust to higher packet loss rates due to errors than Reno is.
>>>>
>>>> "now physical, link and network layer have more info for
>>>> the transport layer to make the right decision", but do they? I mean, how is the transport layer getting info nowadays about the path characteristics in terms of bandwidth, latency and packet losses from the layers below or info like the one you mentioned about Starlink handovers? I still have not seen this happening, have you?
>>>>
>>>> Regards,
>>>>
>>>> David
>>>>
>>>> On Sat, 8 Nov 2025, 22:30 J Pan, <Pan@uvic.ca> wrote:
>>>>> yes, end-to-end congestion control was an add-on to tcp flow and error
>>>>> control, and at that time, packet loss was the only reliable
>>>>> congestion signal without router collaboration, and the legacy stays.
>>>>> from the experience of tuning tcp on cellular networks, making the
>>>>> link layer more reliable is "the way", at the cost of more buffers and
>>>>> latency. but now the physical, link and network layers have more info
>>>>> for the transport layer to make the right decision, e.g., starlink
>>>>> handovers at the 12th, 27th, 42nd and 57th second of every minute with
>>>>> delay spikes and losses
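With a fixed schedule like the one quoted here, the next handover is trivially predictable on the end host. A hypothetical sketch (only the 12/27/42/57-second slot pattern comes from the quote; everything else is my assumption):

```python
# Time until the next scheduled handover slot, assuming handovers at
# seconds 12, 27, 42 and 57 of every minute (slot pattern as quoted above).

HANDOVER_SLOTS = (12, 27, 42, 57)

def ms_to_next_handover(second_in_minute: float) -> float:
    for s in HANDOVER_SLOTS:
        if second_in_minute < s:
            return (s - second_in_minute) * 1000.0
    # Wrap around to the first slot of the next minute.
    return (60.0 - second_in_minute + HANDOVER_SLOTS[0]) * 1000.0

print(ms_to_next_handover(10))  # 2000.0 ms until the :12 handover
print(ms_to_next_handover(58))  # 14000.0 ms until the next minute's :12
```

A transport that knew this schedule could, for instance, avoid treating a loss in a handover window as a congestion signal.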
>>>>> --
>>>>> J Pan, UVic CSc, ECS566, 250-472-5796 (NO VM), Pan@UVic.CA, Web.UVic.CA/~pan
>>>>>
>>>>> On Fri, Nov 7, 2025 at 2:51 PM David Fernández via Starlink
>>>>> <starlink@lists.bufferbloat.net> wrote:
>>>>>> Thanks for sharing this Frank.
>>>>>>
>>>>>> In slide 3, I think another effect not to be missed is packet loss
>>>>>> due to errors, which could be analogous to pipe leaks. Sometimes these
>>>>>> are not negligible - mainly on wireless links, but it can happen on
>>>>>> DSL too. I remember I had a DSL line on which the router had the option
>>>>>> to disable interleaving, warning that you could get more errors (bad
>>>>>> for watching video, they said) but reduced latency (good for
>>>>>> videogames). When packet losses due to errors are misinterpreted as
>>>>>> congestion by the transport protocols, the result is also a bad
>>>>>> quality of experience.
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> David
>>>>>>
>>>>>> Date: Fri, 7 Nov 2025 11:53:44 +0100
>>>>>>> From: Frantisek Borsik <frantisek.borsik@gmail.com>
>>>>>>> Subject: [Starlink] Re: [LibreQoS] Re: Keynote: QoE/QoS - Bandwidth Is
>>>>>>>           A Lie!  at WISPAPALOOZA 2025 (October 16)
>>>>>>> To: Cake List <cake@lists.bufferbloat.net>, bloat
>>>>>>>           <bloat@lists.bufferbloat.net>, codel@lists.bufferbloat.net,
>>>>>>>           Jeremy Austin via Rpm <rpm@lists.bufferbloat.net>, libreqos
>>>>>>>           <libreqos@lists.bufferbloat.net>, Dave Taht via Starlink
>>>>>>>           <starlink@lists.bufferbloat.net>, l4s-discuss@ietf.org
>>>>>>> Message-ID:
>>>>>>>           <CAJUtOOhzJAymDiKsnRPho80B8GZ4wzd8W7FWccS=
>>>>>>> uhiQPd3KOg@mail.gmail.com>
>>>>>>> Content-Type: text/plain; charset="UTF-8"
>>>>>>>
>>>>>>> Hello to all,
>>>>>>>
>>>>>>> Recording of our QoE/QoS panel discussion is out! It was really great,
>>>>>>> and we believe you will like it:
>>>>>>>
>>>>>>> https://www.youtube.com/watch?v=T1VET0VYQ6c
>>>>>>>
>>>>>>> We touched on bandwidth, L4S, Starlink and more.
>>>>>>>
>>>>>>> Here are the slides with additional reading:
>>>>>>>
>>>>>>> https://docs.google.com/presentation/d/1ML0I3Av3DCtQDiP8Djr_YGH2r4-UDZP25VEk-xyJcZE/edit?slide=id.p#slide=id.p
>>>>>>>
>>>>>>> We hope to continue this conversation in a more practical, demo-like
>>>>>>> environment of sorts, like what we can see at the IETF Hackathon and
>>>>>>> used to see in the early WISPA event days, with Animal Farm.
>>>>>>>
>>>>>>>
>>>>>>> All the best,
>>>>>>>
>>>>>>> Frank
>>>>>>>
>>>>>>> Frantisek (Frank) Borsik
>>>>>>>
>>>>>>>
>>>>>>> *In loving memory of Dave Täht: *1965-2025
>>>>>>>
>>>>>>> https://libreqos.io/2025/04/01/in-loving-memory-of-dave/
>>>>>>>
>>>>>>>
>>>>>>> https://www.linkedin.com/in/frantisekborsik
>>>>>>>
>>>>>>> Signal, Telegram, WhatsApp: +421919416714
>>>>>>>
>>>>>>> iMessage, mobile: +420775230885
>>>>>>>
>>>>>>> Skype: casioa5302ca
>>>>>>>
>>>>>>> frantisek.borsik@gmail.com
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Oct 1, 2025 at 11:32 PM Frantisek Borsik <
>>>>>>> frantisek.borsik@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Let's say that I love it, channeling my inner Dave Taht. But there were a
>>>>>>>> couple of voices asking whether I would consider changing it a bit, to be
>>>>>>>> "less hostile" to our "bandwidth is king!" friends... and I did try, but
>>>>>>>> this was really sticky and I'm happy that it stayed this way.
>>>>>>>>
>>>>>>>>
>>>>>>>> All the best,
>>>>>>>>
>>>>>>>> Frank
>>>>>>>>
>>>>>>>> Frantisek (Frank) Borsik
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Oct 1, 2025 at 9:25 PM dan <dandenson@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> I actually really like the title ;)
>>>>>>>>>
>>>>>>>>> It's that most of the time people are told they need more bandwidth to
>>>>>>>>> solve a problem, when they really need lower latency and jitter. So the
>>>>>>>>> vast majority of the time 'more bandwidth' as a solution really is a
>>>>>>>>> lie.
>>>>>>>>> On Tue, Sep 30, 2025 at 2:47 PM Frantisek Borsik via LibreQoS <
>>>>>>>>> libreqos@lists.bufferbloat.net> wrote:
>>>>>>>>>
>>>>>>>>>> Thanks, Jim. Well, true that - but I wanted to do it either way,
>>>>>>>>>> because of our dear Dave and - as a conversation starter.
>>>>>>>>>> As @Jason Livingood <jason_livingood@comcast.com> said - "Bandwidth is
>>>>>>>>>> dead. Long live latency."
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> https://pulse.internetsociety.org/blog/bandwidth-is-dead-long-live-latency
>>>>>>>>>> I will do my best to get the audio/video right and to share it with you
>>>>>>>>>> all.
>>>>>>>>>>
>>>>>>>>>> PS: Sending you separate email.
>>>>>>>>>>
>>>>>>>>>> All the best,
>>>>>>>>>>
>>>>>>>>>> Frank
>>>>>>>>>>
>>>>>>>>>> Frantisek (Frank) Borsik
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, Sep 30, 2025 at 10:25 PM James Forster
>>>>>>>>>> <jim@connectivitycap.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Wow, that’s fantastic, Frantisek!  Great work making this happen.
>>>>>>>>>>>
>>>>>>>>>>> These sorts of titles aren’t my favorite. I think I understand the
>>>>>>>>>>> sentiment but find the issues more nuanced than that. :-)
>>>>>>>>>>>
>>>>>>>>>>> If you can get clear audio, not much video quality is needed for
>>>>>>>>>>> panels and talking heads. Best would be a feed right into an
>>>>>>>>>>> iPhone/Android.
>>>>>>>>>>>
>>>>>>>>>>> Jim
>>>>>>>>>> _______________________________________________
>>>>>>>>>> LibreQoS mailing list -- libreqos@lists.bufferbloat.net
>>>>>>>>>> To unsubscribe send an email to libreqos-leave@lists.bufferbloat.net
>>>>>>>>>>
>>>>>> _______________________________________________
>>>>>> Starlink mailing list -- starlink@lists.bufferbloat.net
>>>>>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net
>> --
>> ****************************************************************
>> Dr. Ulrich Speidel
>>
>> School of Computer Science
>>
>> Room 303S.594 (City Campus)
>>
>> The University of Auckland
>> u.speidel@auckland.ac.nz
>> http://www.cs.auckland.ac.nz/~ulrich/
>> ****************************************************************
>>
>>
>>

-- 
****************************************************************
Dr. Ulrich Speidel

School of Computer Science

Room 303S.594 (City Campus)

The University of Auckland
u.speidel@auckland.ac.nz 
http://www.cs.auckland.ac.nz/~ulrich/
****************************************************************




Thread overview: 14+ messages
     [not found] <176254173597.1347.15997824594759319437@gauss>
2025-11-07 22:50 ` [Starlink] Re: Starlink Digest, Vol 55, Issue 3 David Fernández
2025-11-08 17:30   ` J Pan
2025-11-08 19:12     ` [Starlink] Re: [LibreQoS] Re: Keynote: QoE/QoS - Bandwidth Is David Fernández
2025-11-08 19:57       ` J Pan
2025-11-09  1:03         ` Ulrich Speidel
2025-11-09  5:08           ` J Pan
2025-11-09 19:51             ` Ulrich Speidel [this message]
2025-11-09 21:15               ` J Pan
2025-11-09 18:42           ` Sebastian Moeller
2025-11-09 20:08             ` Ulrich Speidel
2025-11-10  6:43               ` Sebastian Moeller
2025-11-09 15:44         ` David Fernández
2025-11-09 17:53           ` J Pan
2025-11-08 19:29     ` [Starlink] Re: Starlink Digest, Vol 55, Issue 3 Sebastian Moeller
