From: Sebastian Moeller <moeller0@gmx.de>
To: "David P. Reed" <dpreed@deepplum.com>
Cc: starlink@lists.bufferbloat.net
Subject: [Starlink] Re: Starlink Digest, Vol 53, Issue 14
Date: Sun, 28 Sep 2025 12:47:06 +0200
Message-ID: <2F8F8566-5C5E-4D96-AA48-90313FD33996@gmx.de>
In-Reply-To: <1759003996.780613387@apps.rackspace.com>
Hi David,
> On 27. Sep 2025, at 22:13, David P. Reed via Starlink <starlink@lists.bufferbloat.net> wrote:
>
>
>
> On Saturday, September 27, 2025 02:01, starlink-request@lists.bufferbloat.net said:
>
>
>
>> Date: Fri, 26 Sep 2025 15:11:23 +0200
>> From: Sebastian Moeller <moeller0@gmx.de>
>> Subject: [Starlink] Re: Starlink looking less niche as its retail
>> presence expands
>> To: Michael Richardson <mcr@sandelman.ca>
>> Cc: starlink@lists.bufferbloat.net
>> Message-ID: <F636F14C-3D51-4D81-A82B-3947554DE2AB@gmx.de>
>> Content-Type: text/plain; charset=us-ascii
>>
>>
>>
>>> On 25. Sep 2025, at 19:45, Michael Richardson via Starlink
>>> <starlink@lists.bufferbloat.net> wrote:
>>>
>>> {resending without signature, since new list can't cope with attachments yet}
>>>
>>> Luis A. Cornejo via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>> Since Starlink controls all the wireless parts of their system, does
>>>> anybody here know what they could do to mitigate the limits of
>>>> classical wireless comms, like the Shannon-Hartley capacity theorem or
>>>> interference?
>>>
>>> I don't know much about this part.
>>> I am kinda hijacking this thread, but I think there is a connection.
>>> Dr. Pan gave a talk about Starlink measurements last week in Ottawa.
>>> (The time slot was way too short. Very nice talk)
>>>
>>> I was thinking about the many places where bandwidth can go up and down,
>>> both for Starlink's various mis-attachment situations, but also for OneWeb's
>>> Polar orbit mechanism. (I didn't know it was doing that).
>>> And just getting redirected to a different downlink/base-station, and then
>>> have to cross over Starlink's internal network to the same exit point.
>>> (Too bad MobileIP never took off)
>>>
>>> I think the only thing worse than bufferbloat is varying bandwidth rates.
>>> That's because the only way to use that bandwidth is to introduce bufferbloat :-)
>>> It was the cablemodem burst mechanism that clued Jim into bufferbloat.
>>>
>>> So my related question is, if they could mitigate, they likely can't do it
>>> continuously, so things will up/down. The IETF now has a SCONE WG, with the
>>> aim of inserting signals into QUIC traffic about bandwidth available.
>>> Yes, meddling by middle boxes. Ick.
>>
>> Regarding SCONE:
>> "The Standard Communication with Network Elements (SCONE) protocol is
>> negotiated by QUIC endpoints. This protocol provides a means for
>> network elements to signal the maximum available sustained
>> throughput, or rate limits, for flows of UDP datagrams that transit
>> that network element to a QUIC endpoint."
>
> Absolutely agree. Trying to have the network elements define a "steady state" for all entities communicating over a path in the "near future" puts the control in the wrong place. (Well, many at IETF are "net-heads" who think they should tell applications what they "need", and view the network elements as a consortium run by all-powerful communications operators like, say, the PTTs and the Comcasts).
>
> The best (most accurate) signal is still a "dropped packet" that results from queue overflow. And it tells you only that the source sent too much.
I do have some sympathy for the network giving the end-nodes some help in detecting the "sent too much" state faster than a drop does. But if push comes to shove, that drop is essentially the only "reliable" signal...
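To make that concrete, here is a toy AIMD-style rate controller, a sketch in Python (the names and constants are mine, not from any real stack): an ECN-style mark lets the sender back off before queues overflow, but the drop remains the unambiguous signal.

# Toy AIMD rate controller; a drop or an ECN-style mark are the only inputs.
# A mark lets the sender back off *before* a queue overflows; a drop remains
# the unambiguous "sent too much" signal.
def update_rate(rate_bps, drop, ecn_mark,
                min_rate=100_000, probe_step=50_000, backoff=0.5):
    if drop or ecn_mark:
        return max(min_rate, rate_bps * backoff)  # multiplicative decrease
    return rate_bps + probe_step                  # additive increase

# e.g. probe, get marked, probe again, finally lose a packet:
rate = 1_000_000
for drop, mark in [(False, False), (False, True), (False, False), (True, False)]:
    rate = update_rate(rate, drop, mark)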
>
> There are no "guarantees" unless you run a central scheduler that takes all demand known to happen in the next period P (after the current operation period P-1), and distribute a "fair share" (according to a committee of network operators) among all users, then not admitting packets during the period P, except for those allocated.
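Just to make explicit what such a central scheduler would have to compute each period, here is the classic max-min "fair share" water-filling as a hypothetical Python sketch (the committee of network operators I cannot code ;)):

# Max-min fair share: given per-user demands and a capacity for period P,
# repeatedly split the remaining capacity evenly among still-unsatisfied
# users, capping each user at their demand.
def max_min_fair(demands, capacity):
    alloc = {u: 0.0 for u in demands}
    active = {u: d for u, d in demands.items() if d > 0}
    while active and capacity > 1e-9:
        share = capacity / len(active)
        for u in list(active):
            grant = min(share, active[u])
            alloc[u] += grant
            capacity -= grant
            active[u] -= grant
            if active[u] <= 1e-9:
                del active[u]
    return alloc

print(max_min_fair({"a": 10, "b": 40, "c": 100}, 90))
# -> {'a': 10.0, 'b': 40.0, 'c': 40.0}: small demands are met in full,
#    the rest split the remainder evenly.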
I have been in this long enough to at least have an opinion ;): IMHO there are three options for "fairness":
a) instructed fairness, where there needs to be some veridical (preferably out-of-band) way to signal the desired capacity share of different aggregates (and how to detect these aggregates)
b) emergent fairness, where the network uses some unambiguous aggregate identifier (e.g. the 2-tuple, 4-tuple, or 5-tuple from the IP/transport headers) and forgoes the need for a reliable, timely side-channel to signal the relative importance of packets/flows (see the sketch after this list)
c) do not bother at all
Status quo is IMHO c). b) has been shown to be both achievable (at least at leaf networks) and to offer a noticeable improvement over c) for many use-cases, yet everybody and their dog obsesses about a), in spite of it only ever working in well-controlled niches in practice...
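To make b) concrete, as promised above, a minimal sketch of emergent fairness assuming nothing but the 5-tuple. This is the core idea behind fq_codel-style flow queueing, not its actual implementation (real code uses a faster hash, sparse-flow boosting, AQM per queue, etc.; the toy packet format is mine):

# Emergent fairness: hash the 5-tuple into a queue, serve queues round-robin.
# No side-channel about flow importance is needed.
import hashlib
from collections import deque

NUM_QUEUES = 1024

def flow_hash(src_ip, dst_ip, proto, src_port, dst_port):
    """Map a 5-tuple to a queue index."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_QUEUES

class FairScheduler:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_QUEUES)]
        self.rr = 0  # round-robin pointer

    def enqueue(self, pkt):
        # pkt is a dict with the 5-tuple fields plus a payload
        idx = flow_hash(pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
                        pkt["src_port"], pkt["dst_port"])
        self.queues[idx].append(pkt)

    def dequeue(self):
        # Scan queues starting after the last one served: each backlogged
        # flow gets roughly one packet per round, i.e. an even capacity share.
        for i in range(NUM_QUEUES):
            idx = (self.rr + i) % NUM_QUEUES
            if self.queues[idx]:
                self.rr = idx + 1
                return self.queues[idx].popleft()
        return None  # nothing backlogged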
>
> In my view this kind of network-control is the opposite of good Internet design. It's almost as bad as having ChatGPT write design documents - pure artificial bullshittery.
;)
>
> There is an end-to-end style approach that suggests that there is no steady state end-to-end requirement at all - just the most fair sharing of the achievable "responsiveness" or "low latency" of the end-to-end applications.
+1
>
> To achieve that, "throughput" matters very little. Queueing delay is what matters.
+1
>
> The current Internet tries to make ALL queues as close to zero in length as possible. It doesn't succeed, of course - it can't predict what "end users" will try to start and when.
With an oracle scheduler all problems would go away :)
> There's no "statistical predictor" of short-term demand that works (yeah, you all took classes that said "assume an average load of X", and never asked "why assume that?" I did and the answer is, that's not reality or close to it).
>
> So the best answer SCONE can give for "the maximum you can expect" has a denominator of the number of possible flows that can go through that network element, and a numerator of some fixed number.
>
> And by Little's Lemma, the latency you will get, if you fully allocate it, is "infinity seconds".
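Indeed. To spell that out with the standard M/M/1 textbook formula (a toy model, my numbers): the mean sojourn time is W = 1/(mu - lambda), which diverges as the offered load approaches the advertised maximum:

# M/M/1 mean sojourn time W = 1/(mu - lambda): delay grows without bound
# as the offered load approaches the service rate.
mu = 1000.0  # service rate in packets/s
for load in (0.5, 0.9, 0.99, 0.999):
    lam = load * mu
    print(f"load {load:.3f}: mean delay {1000.0 / (mu - lam):.1f} ms")
# load 0.500 -> 2 ms, 0.990 -> 100 ms, 0.999 -> 1000 ms; at 1.0 it diverges.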
BTW, I do see some use for SCONE's "maximum the path will ever sustain" information, but that is so extremely niche that the IETF should not bother...
In cake-autorate (and similar approaches like purple-aimd, https://conference.apnic.net/60/assets/presentation-files/f6110bfa-918f-430a-95d6-60a4524d62cf.pdf), where we try to adapt a traffic shaper to path capacity*, having that information available would offer a small optimisation: it would avoid trying to increase the shaper above that value. However, we would need that signal per path rather than per flow, and we would need to be able to trust it. And there it essentially stops: unless an ISP uses that signal for a load-bearing part of its own infrastructure, there is little reason it will ever be trustworthy over the open internet... it might work for internal use-cases within corporate networks...
*) Running full steam ahead into a variant of the "no statistical predictor of short-term demand" problem you describe, aiming not for a theoretically perfect solution, but just for something better than doing nothing at all.
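For the curious, the core loop of such shaper adaptation, reduced to a hypothetical Python sketch (this is not the actual cake-autorate code; names and constants are mine): raise the shaper while latency stays near baseline, cut it when load-induced delay appears. The one place a trusted SCONE-like ceiling would help is the final clamp.

# Hypothetical adaptive-shaper loop: probe up on good latency, back off on
# bufferbloat. path_max_bps is where a trusted SCONE-like ceiling would slot in.
def adapt_shaper(shaper_bps, rtt_ms, baseline_rtt_ms,
                 path_max_bps=None, delay_thresh_ms=15.0,
                 up_factor=1.02, down_factor=0.85, floor_bps=1_000_000):
    if rtt_ms - baseline_rtt_ms > delay_thresh_ms:
        shaper_bps = max(floor_bps, shaper_bps * down_factor)  # delay rising: back off
    else:
        shaper_bps = shaper_bps * up_factor                    # probe for more capacity
    if path_max_bps is not None:
        # avoid pointless probing above a (trusted!) path maximum
        shaper_bps = min(shaper_bps, path_max_bps)
    return shaper_bps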
Regards
Sebastian
>
>
>
>
>>
>> That is going nowhere productive...
>> a) restricting this to QUIC is fine only if you believe that QUIC will take over
>> all traffic soon (keep in mind what we expected for IPv6)
>> b) "signal the maximum available sustained throughput" on a shared network like
>> the internet has a simple true answer "0B"...
>>
>> IMHO what we will end up doing, after exhaustively attempting and failing with
>> more ambitious schemes like SCONE, is to collect something like the maximum of
>> current capacity use in percent of all nodes along a path and have the endpoints
>> use this to track changes in left-over capacity and try to control their rates
>> accordingly. That is, better rate-control that is still driven by the endpoints.
>>
>>
>>> Could Starlink even do this given the lack of L3 processing along the
>>> entire link? At least according to Dr. Pan's diagrams.
>>> (An L2 hop could well mess with packets too).
>>> Ideally, one or more of the satellites involved in the ISL would
>>> know what the current bandwidth to a given terminal is, and could inform the
>>> end system.
>>>
>>> The two questions:
>>> 1. are the limits/conditions stable enough for long enough that the available
>>> bandwidth could be communicated back to the uplink?
>>
>> "long enough" is a relative term... but sure if the latency/"frequency of
>> signalling" is substantially slower than the expected capacity fluctuations then
>> this will not gain much, but if enough of those fluctuations are "slow" compared
>> the RTT of a flow this scheme can have overall beneficial effects.
>>
>>>
>>> 2. assuming, yes, what would the best place to do the SCONE marking?
>>
>> In our dreams... in reality, instead use the kind of marking that has already
>> been shown to work in real life. If I sound disillusioned about SCONE, it is
>> because I am: we knew even before L4S was ratified that it is "too little, too
>> late", and again, instead of doing the proven thing, the IETF contemplates
>> another academic proposal. I wonder who has the actual problem of a missing
>> advisory signal that SCONE offers to solve.
>>
>>>
>>>>> Let's recap: Spectrum's boxed in, and power is boxed in. That imposes
>>>>> a hard limit on total capacity (look up the Shannon-Hartley Capacity
>>>>> Theorem if you don't believe me). This capacity is all that Starlink
>>>>> has to share among its users in a cell. No matter how many satellites
>>>>> they launch or how big the rocket. Add more users in a cell, and the
>>>>> capacity per user there has to go down. Law of nature.
>>>
>>> And users will need to know what they have on a minute-by-minute basis so
>>> that they avoid screwing themselves, let alone their neighbours.
>>
>> That is what feed-back-based rate-control and some modicum of healthy (transient
>> shock-absorber-style) buffering is for, no?
>>
>>> Packets going up the link, then being dropped, is just wasted.
>>
>> Sure, if we veridically knew which packets will get dropped, we could avoid
>> sending them in the first place ;)
>>
>>
>>>
>>> ps:
>>> I have been watching:
>>> https://www.youtube.com/playlist?list=PL-_93BVApb58SXL-BCv4rVHL-8GuC2WGb
>>> where they have powered up 50+ year old Apollo Transponders.
>>>
>>> --
>>> ] Never tell me the odds! | ipv6 mesh networks [
>>> ] Michael Richardson, Sandelman Software Works | IoT architect [
>>> ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [