* [Starlink] Re: Starlink Digest, Vol 53, Issue 14 [not found] <175900400148.1561.6981645218542924150@gauss> @ 2025-09-28 17:32 ` David Fernández 2025-09-28 18:51 ` Sebastian Moeller 0 siblings, 1 reply; 5+ messages in thread From: David Fernández @ 2025-09-28 17:32 UTC (permalink / raw) To: starlink Hi, I understand that SCONE info about maximum possible bandwidth (bottleneck maximum possible bandwidth, not zero, of course) available on the path between a QUIC client and server is not to define a steady state (although it could be), but may be better used to set the size of the initial burst of packets after a connection is established, during the slow start phase, to a value that can be more optimal than just a fixed value like 10 (RFC 6928), which was updated as networks became faster (with more capacity). In any case, if that info can be useful to make any user experience better, of course, is to be demonstrated. QUIC may never replace TCP completely, which is very optimized for bulk data transfers, while QUIC was developed to improve the web browsing experience, making it more interactive. It is a case different from IPv6. QUIC is the transport for HTTP/3, which is now having a quota of 35.8% ( https://w3techs.com/technologies/details/ce-http3) and steadily increasing in the last years (slowly). Most popular websites already use HTTP/3 and browsers support it and it is the preferred option, if available. The advantage of QUIC vs. TCP for web browsing should be an increased interactivity, less RTTs to do connections, more responsiveness of sites, less lag and latency, although it seems people is not really noticing the difference after sites start using HTTP/3 vs. HTTP/2. For example, today, the following site was reported as recently starting using HTTP/3: Pokemon.com (https://w3techs.com/sites/info/pokemon.com) If anybody made a survey to usual Pokemon site visitors, I wonder if they noticed that change. Regards, David > Date: Sat, 27 Sep 2025 16:13:16 -0400 (EDT) > From: "David P. Reed" <dpreed@deepplum.com> > Subject: [Starlink] Re: Starlink Digest, Vol 53, Issue 14 > To: starlink@lists.bufferbloat.net > Cc: starlink@lists.bufferbloat.net > Message-ID: <1759003996.780613387@apps.rackspace.com> > Content-Type: text/plain; charset="UTF-8" > > > > On Saturday, September 27, 2025 02:01, > starlink-request@lists.bufferbloat.net said: > > > > > Date: Fri, 26 Sep 2025 15:11:23 +0200 > > From: Sebastian Moeller <moeller0@gmx.de> > > Subject: [Starlink] Re: Starlink looking less niche as its retail > > presence expands > > To: Michael Richardson <mcr@sandelman.ca> > > Cc: starlink@lists.bufferbloat.net > > Message-ID: <F636F14C-3D51-4D81-A82B-3947554DE2AB@gmx.de> > > Content-Type: text/plain; charset=us-ascii > > > > > > > > > On 25. Sep 2025, at 19:45, Michael Richardson via Starlink > > <starlink@lists.bufferbloat.net> wrote: > > > > > > {resending without signature, since new list can't cope with > attachments > > yet} > > > > > > Luis A. Cornejo via Starlink <starlink@lists.bufferbloat.net> wrote: > > >> Since Starlink controls all the wireless parts of their system. Does > > >> anybody here know what they could do to mitigate the limits of > > >> classical wireless comms, like Shannon-Hartley Capacity Theorem or the > > >> interference? > > > > > > I don't know much about this part. > > > I am kinda hijacking this thread, but I think there is a connection. > > > Dr.Pan gave a talk about Starlink measurements last week in Ottawa. > > > (The time slot was way too short. Very nice talk) > > > > > > I was thinking about the many places where bandwidth can go up and > down, > > both > > > for Starlink's various mis-attachment situations, but also for OneWeb's > > Polar > > > orbit mechanism. (I didn't know it was doing that). > > > And just getting redirected to a different downlink/base-station, and > then > > > have to cross over Starlink's internal network to the same exit point. > > > (Too bad MobileIP never took off) > > > > > > I think the only thing worse than bufferbloat is varying bandwidth > rates. > > > That's because the only way to use that bandwidth is to introduce > bufferbloat > > :-) > > > It was the cablemodem burst mechanism that clued Jim into bufferbloat. > > > > > > So my related question is, if they could mitigate, they likely can't > do it > > > continuously, so things will up/down. The IETF now has a SCONE WG, > with the > > > aim of inserting signals into QUIC traffic about bandwidth available. > > > Yes, meddling by middle boxes. Ick. > > > > Regarding SCONE: > > "The Standard Communication with Network Elements (SCONE) protocol is > > negotiated by QUIC endpoints. This protocol provides a means for > > network elements to signal the maximum available sustained > > throughput, or rate limits, for flows of UDP datagrams that transit > > that network element to a QUIC endpoint." > > Absolutely agree. Trying to have the network elements define a "steady > state" for all entities communicating over a path in the "near future" puts > the control in the wrong place. (Well, many at IETF are "net-heads" who > think they should tell applications what they "need", and view the network > elements as a consortium run by all-powerful communications operators like, > say, the PTTs and the Comcasts). > > The best (most accurate) signal is still a "dropped packet" that results > from queue overflow. And it tells you only that the source sent too much. > > There are no "guarantees" unless you run a central scheduler that takes > all demand known to happen in the next period P (after the current > operation period P-1), and distribute a "fair share" (according to a > committee of network operators) among all users, then not admitting packets > during the period P, except for those allocated. > > In my view this kind of network-control is the opposite of good Internet > design. It's almost as bad as having ChatGPT write design documents - pure > artificial bullshittery. > > There is an end-to-end style approach that suggests that there is no > steady state end-to-end requirement at all - just the most fair sharing of > the achievable "responsiveness" or "low latency" of the end-to-end > applications. > > To achieve that, "throughput" matters very little. Queueing delay is what > matters. > > The current Internet tries to make ALL queues as close to zero in length > as possible. It doesn't succeed, of course - it can't predict what "end > users" will try to start and when. There's no "statistical predictor" of > short-term demand that works (yeah, you all took classes that said "assume > an average load of X", and never asked "why assume that?" I did and the > answer is, that's not reality or close to it). > > So the best answer of SCONE is "the maximum you can expect" has a > denominator of the number of possible flows that can go through that > network element, and a numerator of some fixed number. > > And by Little's Lemma, the latency you will get, if you fully allocate it, > is "infinity seconds". > > > > > > > > That is going nowhere productive... > > a) restricting this to QUIC is fine only if you believe that QUIC will > take over > > all traffic soon (keep in mind what we expected for IPv6) > > b) "signal the maximum available sustained throughput" on a shared > network like > > the internet has a simple true answer "0B"... > > > > IMHO what we will end up doing, after exhaustively attempting and > failing with > > more ambitious schemes like SCONE is tp collect something like the > maximum of > > current capacity use in percent of all nodes along a path and have the > endpoints > > use this to track changes in left over capacity and try to control their > rates > > accordingly. That is better rate-control that is still driven by the > endpoints. > > > > > > > Could Starlink even do this given the lack of L3 processing along the > > > entire link? At least according to Dr. Pan's diagrams. > > > (An L2 hop could well mess with packets too). > > > Ideally, one or more of the satellites involved in the ISL would > > > know what the current bandwidth to a given terminal is, and could > inform the > > > end system. > > > > > > The two questions: > > > 1. are the limits/conditions stable enough for long enough that the > > available > > > bandwidth could be communicated back to the uplink? > > > > "long enough" is a relative term... but sure if the latency/"frequency of > > signalling" is substantially slower than the expected capacity > fluctuations then > > this will not gain much, but if enough of those fluctuations are "slow" > compared > > the RTT of a flow this scheme can have overall beneficial effects. > > > > > > > > 2. assuming, yes, what would the best place to do the SCONE marking? > > > > In our dreams.... in reality instead use the kind of marking that has > already been > > shown to work in the real life... if I sound disillusioned about SCONE > it is > > because I am, we knew even before L4S was ratified, that it is "too > little, too > > late" and again instead of doing the proven thing IETF contemplates > another > > academic proposal. I wonder who has the actual problem of a missing > advisory > > signal that SCONE offers to solve. > > > > > > > >>> Let's recap: Spectrum's boxed in, and power is boxed in. That > > imposes > > >>> a hard limit on total capacity (look up the Shannon-Hartley Capacity > > >>> Theorem if you don't believe me). This capacity is all that Starlink > > >>> has to share among its users in a cell. No matter how many > > satellites > > >>> they launch or how big the rocket. Add more users in a cell, and the > > >>> capacity per user there has to go down. Law of nature. > > > > > > And users will need to know what they have on a minute-by-minute basis > so > > > that they avoid screwing themselves, let alone their neighbours. > > > > That is what feed-back-based rate-control and some modicum of healthy > (transient > > shock-absorber-style) buffering is for, no? > > > > > Packets going up the link, then being dropped, is just wasted. > > > > Sure, if we veridically knew which packts will get dropped, we could > avoid sending > > them in the fist place ;) > > > > > > > > > > ps: > > > I have been watching: > > https://www.youtube.com/playlist?list=PL-_93BVApb58SXL-BCv4rVHL-8GuC2WGb > > > where they have powered up 50+ year old Apollo Transponders. > > > > > > -- > > > ] Never tell me the odds! | ipv6 mesh networks > > [ > > > ] Michael Richardson, Sandelman Software Works | IoT architect > > [ > > > ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails > > [ > > > _______________________________________________ > > > Starlink mailing list -- starlink@lists.bufferbloat.net > > > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net > > > > > > ------------------------------ > > > > Subject: Digest Footer > > > > _______________________________________________ > > Starlink mailing list -- starlink@lists.bufferbloat.net > > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net > > > > > > ------------------------------ > > > > End of Starlink Digest, Vol 53, Issue 14 > > **************************************** > > ^ permalink raw reply [flat|nested] 5+ messages in thread
* [Starlink] Re: Starlink Digest, Vol 53, Issue 14 2025-09-28 17:32 ` [Starlink] Re: Starlink Digest, Vol 53, Issue 14 David Fernández @ 2025-09-28 18:51 ` Sebastian Moeller 0 siblings, 0 replies; 5+ messages in thread From: Sebastian Moeller @ 2025-09-28 18:51 UTC (permalink / raw) To: David Fernández; +Cc: starlink Hi David, let me expand on this tangent a bit... > On 28. Sep 2025, at 19:32, David Fernández via Starlink <starlink@lists.bufferbloat.net> wrote: > > Hi, > > I understand that SCONE info about maximum possible bandwidth (bottleneck > maximum possible bandwidth, not zero, of course) available on the path > between a QUIC client and server is not to define a steady state (although > it could be), but may be better used to set the size of the initial burst > of packets after a connection is established, during the slow start phase, Yeah, that is quite optimistic, that a network node is willing to track individual flows and taylor a specific maxuimum depending on the life cycle of a flow. In theory that might work (and in IETF prose), but in practice I do not see this happen. > to a value that can be more optimal than just a fixed value like 10 (RFC > 6928), which was updated as networks became faster (with more capacity). Sorry, I disagree, if we want less hostile slow-start, we need to give the flow more veridical information about the currently used capacity, not some hypothetical maximum. > > In any case, if that info can be useful to make any user experience better, > of course, is to be demonstrated. I fully agree and am quite puzzled to find zero references in the SCONE draft for any experiments, let alone successful ones that demonstrate the value SCONE might deliver. In my world one starts with such experiments and only writes an internet draft if these experiments demonstrated that an idea has legs to stand on. > > QUIC may never replace TCP completely, which is very optimized for bulk > data transfers, while QUIC was developed to improve the web browsing I disagree... QUIC seems mostly developed to hide as much information from the transport layer/providers as possible. The argument seems to be that it is hard to abuse information one has not available. That sounds not incorrect at first, but given that the ones pushing QUIC are companies like google that make a living out of (ab-)using their users information, and QUIC does pretty much zero to stop the endpoint server from snooping on you, so all it does is keep that information exclusive to google... (depending on your stance on privacy and how intrusive the local government is, I see how this can be reasonably considered a good or a bad thing). > experience, making it more interactive. It is a case different from IPv6. > QUIC is the transport for HTTP/3, which is now having a quota of 35.8% ( > https://w3techs.com/technologies/details/ce-http3) and steadily increasing > in the last years (slowly). Most popular websites already use HTTP/3 and > browsers support it and it is the preferred option, if available. Yeah, but that really means that the main browser maker(s) and the main content providers think QUIC is helping them, see above for one dimension where this can be helpful. What the relative large fraction of QUIC traffic is IMHO NOT is a vote of confidence by the users, as I bet almost nobody actively opted-in to using QUIC. > > The advantage of QUIC vs. TCP for web browsing should be an increased > interactivity, less RTTs to do connections, more responsiveness of sites, Except for shaving off 1 single digit number of RTTs at startup of encrypted sessions, how that? (Not a big fan of QUIC's yet another layer of multiplexing). > less lag and latency, Yeah, not sure I buy this. > although it seems people is not really noticing the > difference after sites start using HTTP/3 vs. HTTP/2. Yepp, that pust the claims about QUIC's superiority for end users somewhat into perspective ;) > > For example, today, the following site was reported as recently starting > using HTTP/3: > Pokemon.com (https://w3techs.com/sites/info/pokemon.com) > > If anybody made a survey to usual Pokemon site visitors, I wonder if they > noticed that change. Good question. > > Regards, > > David > > >> Date: Sat, 27 Sep 2025 16:13:16 -0400 (EDT) >> From: "David P. Reed" <dpreed@deepplum.com> >> Subject: [Starlink] Re: Starlink Digest, Vol 53, Issue 14 >> To: starlink@lists.bufferbloat.net >> Cc: starlink@lists.bufferbloat.net >> Message-ID: <1759003996.780613387@apps.rackspace.com> >> Content-Type: text/plain; charset="UTF-8" >> >> >> >> On Saturday, September 27, 2025 02:01, >> starlink-request@lists.bufferbloat.net said: >> >> >> >>> Date: Fri, 26 Sep 2025 15:11:23 +0200 >>> From: Sebastian Moeller <moeller0@gmx.de> >>> Subject: [Starlink] Re: Starlink looking less niche as its retail >>> presence expands >>> To: Michael Richardson <mcr@sandelman.ca> >>> Cc: starlink@lists.bufferbloat.net >>> Message-ID: <F636F14C-3D51-4D81-A82B-3947554DE2AB@gmx.de> >>> Content-Type: text/plain; charset=us-ascii >>> >>> >>> >>>> On 25. Sep 2025, at 19:45, Michael Richardson via Starlink >>> <starlink@lists.bufferbloat.net> wrote: >>>> >>>> {resending without signature, since new list can't cope with >> attachments >>> yet} >>>> >>>> Luis A. Cornejo via Starlink <starlink@lists.bufferbloat.net> wrote: >>>>> Since Starlink controls all the wireless parts of their system. Does >>>>> anybody here know what they could do to mitigate the limits of >>>>> classical wireless comms, like Shannon-Hartley Capacity Theorem or the >>>>> interference? >>>> >>>> I don't know much about this part. >>>> I am kinda hijacking this thread, but I think there is a connection. >>>> Dr.Pan gave a talk about Starlink measurements last week in Ottawa. >>>> (The time slot was way too short. Very nice talk) >>>> >>>> I was thinking about the many places where bandwidth can go up and >> down, >>> both >>>> for Starlink's various mis-attachment situations, but also for OneWeb's >>> Polar >>>> orbit mechanism. (I didn't know it was doing that). >>>> And just getting redirected to a different downlink/base-station, and >> then >>>> have to cross over Starlink's internal network to the same exit point. >>>> (Too bad MobileIP never took off) >>>> >>>> I think the only thing worse than bufferbloat is varying bandwidth >> rates. >>>> That's because the only way to use that bandwidth is to introduce >> bufferbloat >>> :-) >>>> It was the cablemodem burst mechanism that clued Jim into bufferbloat. >>>> >>>> So my related question is, if they could mitigate, they likely can't >> do it >>>> continuously, so things will up/down. The IETF now has a SCONE WG, >> with the >>>> aim of inserting signals into QUIC traffic about bandwidth available. >>>> Yes, meddling by middle boxes. Ick. >>> >>> Regarding SCONE: >>> "The Standard Communication with Network Elements (SCONE) protocol is >>> negotiated by QUIC endpoints. This protocol provides a means for >>> network elements to signal the maximum available sustained >>> throughput, or rate limits, for flows of UDP datagrams that transit >>> that network element to a QUIC endpoint." >> >> Absolutely agree. Trying to have the network elements define a "steady >> state" for all entities communicating over a path in the "near future" puts >> the control in the wrong place. (Well, many at IETF are "net-heads" who >> think they should tell applications what they "need", and view the network >> elements as a consortium run by all-powerful communications operators like, >> say, the PTTs and the Comcasts). >> >> The best (most accurate) signal is still a "dropped packet" that results >> from queue overflow. And it tells you only that the source sent too much. >> >> There are no "guarantees" unless you run a central scheduler that takes >> all demand known to happen in the next period P (after the current >> operation period P-1), and distribute a "fair share" (according to a >> committee of network operators) among all users, then not admitting packets >> during the period P, except for those allocated. >> >> In my view this kind of network-control is the opposite of good Internet >> design. It's almost as bad as having ChatGPT write design documents - pure >> artificial bullshittery. >> >> There is an end-to-end style approach that suggests that there is no >> steady state end-to-end requirement at all - just the most fair sharing of >> the achievable "responsiveness" or "low latency" of the end-to-end >> applications. >> >> To achieve that, "throughput" matters very little. Queueing delay is what >> matters. >> >> The current Internet tries to make ALL queues as close to zero in length >> as possible. It doesn't succeed, of course - it can't predict what "end >> users" will try to start and when. There's no "statistical predictor" of >> short-term demand that works (yeah, you all took classes that said "assume >> an average load of X", and never asked "why assume that?" I did and the >> answer is, that's not reality or close to it). >> >> So the best answer of SCONE is "the maximum you can expect" has a >> denominator of the number of possible flows that can go through that >> network element, and a numerator of some fixed number. >> >> And by Little's Lemma, the latency you will get, if you fully allocate it, >> is "infinity seconds". >> >> >> >> >>> >>> That is going nowhere productive... >>> a) restricting this to QUIC is fine only if you believe that QUIC will >> take over >>> all traffic soon (keep in mind what we expected for IPv6) >>> b) "signal the maximum available sustained throughput" on a shared >> network like >>> the internet has a simple true answer "0B"... >>> >>> IMHO what we will end up doing, after exhaustively attempting and >> failing with >>> more ambitious schemes like SCONE is tp collect something like the >> maximum of >>> current capacity use in percent of all nodes along a path and have the >> endpoints >>> use this to track changes in left over capacity and try to control their >> rates >>> accordingly. That is better rate-control that is still driven by the >> endpoints. >>> >>> >>>> Could Starlink even do this given the lack of L3 processing along the >>>> entire link? At least according to Dr. Pan's diagrams. >>>> (An L2 hop could well mess with packets too). >>>> Ideally, one or more of the satellites involved in the ISL would >>>> know what the current bandwidth to a given terminal is, and could >> inform the >>>> end system. >>>> >>>> The two questions: >>>> 1. are the limits/conditions stable enough for long enough that the >>> available >>>> bandwidth could be communicated back to the uplink? >>> >>> "long enough" is a relative term... but sure if the latency/"frequency of >>> signalling" is substantially slower than the expected capacity >> fluctuations then >>> this will not gain much, but if enough of those fluctuations are "slow" >> compared >>> the RTT of a flow this scheme can have overall beneficial effects. >>> >>>> >>>> 2. assuming, yes, what would the best place to do the SCONE marking? >>> >>> In our dreams.... in reality instead use the kind of marking that has >> already been >>> shown to work in the real life... if I sound disillusioned about SCONE >> it is >>> because I am, we knew even before L4S was ratified, that it is "too >> little, too >>> late" and again instead of doing the proven thing IETF contemplates >> another >>> academic proposal. I wonder who has the actual problem of a missing >> advisory >>> signal that SCONE offers to solve. >>> >>>> >>>>>> Let's recap: Spectrum's boxed in, and power is boxed in. That >>> imposes >>>>>> a hard limit on total capacity (look up the Shannon-Hartley Capacity >>>>>> Theorem if you don't believe me). This capacity is all that Starlink >>>>>> has to share among its users in a cell. No matter how many >>> satellites >>>>>> they launch or how big the rocket. Add more users in a cell, and the >>>>>> capacity per user there has to go down. Law of nature. >>>> >>>> And users will need to know what they have on a minute-by-minute basis >> so >>>> that they avoid screwing themselves, let alone their neighbours. >>> >>> That is what feed-back-based rate-control and some modicum of healthy >> (transient >>> shock-absorber-style) buffering is for, no? >>> >>>> Packets going up the link, then being dropped, is just wasted. >>> >>> Sure, if we veridically knew which packts will get dropped, we could >> avoid sending >>> them in the fist place ;) >>> >>> >>>> >>>> ps: >>>> I have been watching: >>> https://www.youtube.com/playlist?list=PL-_93BVApb58SXL-BCv4rVHL-8GuC2WGb >>>> where they have powered up 50+ year old Apollo Transponders. >>>> >>>> -- >>>> ] Never tell me the odds! | ipv6 mesh networks >>> [ >>>> ] Michael Richardson, Sandelman Software Works | IoT architect >>> [ >>>> ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails >>> [ >>>> _______________________________________________ >>>> Starlink mailing list -- starlink@lists.bufferbloat.net >>>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net >>> >>> >>> ------------------------------ >>> >>> Subject: Digest Footer >>> >>> _______________________________________________ >>> Starlink mailing list -- starlink@lists.bufferbloat.net >>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net >>> >>> >>> ------------------------------ >>> >>> End of Starlink Digest, Vol 53, Issue 14 >>> **************************************** >>> > _______________________________________________ > Starlink mailing list -- starlink@lists.bufferbloat.net > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net ^ permalink raw reply [flat|nested] 5+ messages in thread
[parent not found: <175895289005.1561.17970219906621123011@gauss>]
* [Starlink] Re: Starlink Digest, Vol 53, Issue 14 [not found] <175895289005.1561.17970219906621123011@gauss> @ 2025-09-27 20:13 ` David P. Reed 2025-09-28 10:47 ` Sebastian Moeller 0 siblings, 1 reply; 5+ messages in thread From: David P. Reed @ 2025-09-27 20:13 UTC (permalink / raw) To: starlink; +Cc: starlink On Saturday, September 27, 2025 02:01, starlink-request@lists.bufferbloat.net said: > Date: Fri, 26 Sep 2025 15:11:23 +0200 > From: Sebastian Moeller <moeller0@gmx.de> > Subject: [Starlink] Re: Starlink looking less niche as its retail > presence expands > To: Michael Richardson <mcr@sandelman.ca> > Cc: starlink@lists.bufferbloat.net > Message-ID: <F636F14C-3D51-4D81-A82B-3947554DE2AB@gmx.de> > Content-Type: text/plain; charset=us-ascii > > > > > On 25. Sep 2025, at 19:45, Michael Richardson via Starlink > <starlink@lists.bufferbloat.net> wrote: > > > > {resending without signature, since new list can't cope with attachments > yet} > > > > Luis A. Cornejo via Starlink <starlink@lists.bufferbloat.net> wrote: > >> Since Starlink controls all the wireless parts of their system. Does > >> anybody here know what they could do to mitigate the limits of > >> classical wireless comms, like Shannon-Hartley Capacity Theorem or the > >> interference? > > > > I don't know much about this part. > > I am kinda hijacking this thread, but I think there is a connection. > > Dr.Pan gave a talk about Starlink measurements last week in Ottawa. > > (The time slot was way too short. Very nice talk) > > > > I was thinking about the many places where bandwidth can go up and down, > both > > for Starlink's various mis-attachment situations, but also for OneWeb's > Polar > > orbit mechanism. (I didn't know it was doing that). > > And just getting redirected to a different downlink/base-station, and then > > have to cross over Starlink's internal network to the same exit point. > > (Too bad MobileIP never took off) > > > > I think the only thing worse than bufferbloat is varying bandwidth rates. > > That's because the only way to use that bandwidth is to introduce bufferbloat > :-) > > It was the cablemodem burst mechanism that clued Jim into bufferbloat. > > > > So my related question is, if they could mitigate, they likely can't do it > > continuously, so things will up/down. The IETF now has a SCONE WG, with the > > aim of inserting signals into QUIC traffic about bandwidth available. > > Yes, meddling by middle boxes. Ick. > > Regarding SCONE: > "The Standard Communication with Network Elements (SCONE) protocol is > negotiated by QUIC endpoints. This protocol provides a means for > network elements to signal the maximum available sustained > throughput, or rate limits, for flows of UDP datagrams that transit > that network element to a QUIC endpoint." Absolutely agree. Trying to have the network elements define a "steady state" for all entities communicating over a path in the "near future" puts the control in the wrong place. (Well, many at IETF are "net-heads" who think they should tell applications what they "need", and view the network elements as a consortium run by all-powerful communications operators like, say, the PTTs and the Comcasts). The best (most accurate) signal is still a "dropped packet" that results from queue overflow. And it tells you only that the source sent too much. There are no "guarantees" unless you run a central scheduler that takes all demand known to happen in the next period P (after the current operation period P-1), and distribute a "fair share" (according to a committee of network operators) among all users, then not admitting packets during the period P, except for those allocated. In my view this kind of network-control is the opposite of good Internet design. It's almost as bad as having ChatGPT write design documents - pure artificial bullshittery. There is an end-to-end style approach that suggests that there is no steady state end-to-end requirement at all - just the most fair sharing of the achievable "responsiveness" or "low latency" of the end-to-end applications. To achieve that, "throughput" matters very little. Queueing delay is what matters. The current Internet tries to make ALL queues as close to zero in length as possible. It doesn't succeed, of course - it can't predict what "end users" will try to start and when. There's no "statistical predictor" of short-term demand that works (yeah, you all took classes that said "assume an average load of X", and never asked "why assume that?" I did and the answer is, that's not reality or close to it). So the best answer of SCONE is "the maximum you can expect" has a denominator of the number of possible flows that can go through that network element, and a numerator of some fixed number. And by Little's Lemma, the latency you will get, if you fully allocate it, is "infinity seconds". > > That is going nowhere productive... > a) restricting this to QUIC is fine only if you believe that QUIC will take over > all traffic soon (keep in mind what we expected for IPv6) > b) "signal the maximum available sustained throughput" on a shared network like > the internet has a simple true answer "0B"... > > IMHO what we will end up doing, after exhaustively attempting and failing with > more ambitious schemes like SCONE is tp collect something like the maximum of > current capacity use in percent of all nodes along a path and have the endpoints > use this to track changes in left over capacity and try to control their rates > accordingly. That is better rate-control that is still driven by the endpoints. > > > > Could Starlink even do this given the lack of L3 processing along the > > entire link? At least according to Dr. Pan's diagrams. > > (An L2 hop could well mess with packets too). > > Ideally, one or more of the satellites involved in the ISL would > > know what the current bandwidth to a given terminal is, and could inform the > > end system. > > > > The two questions: > > 1. are the limits/conditions stable enough for long enough that the > available > > bandwidth could be communicated back to the uplink? > > "long enough" is a relative term... but sure if the latency/"frequency of > signalling" is substantially slower than the expected capacity fluctuations then > this will not gain much, but if enough of those fluctuations are "slow" compared > the RTT of a flow this scheme can have overall beneficial effects. > > > > > 2. assuming, yes, what would the best place to do the SCONE marking? > > In our dreams.... in reality instead use the kind of marking that has already been > shown to work in the real life... if I sound disillusioned about SCONE it is > because I am, we knew even before L4S was ratified, that it is "too little, too > late" and again instead of doing the proven thing IETF contemplates another > academic proposal. I wonder who has the actual problem of a missing advisory > signal that SCONE offers to solve. > > > > >>> Let's recap: Spectrum's boxed in, and power is boxed in. That > imposes > >>> a hard limit on total capacity (look up the Shannon-Hartley Capacity > >>> Theorem if you don't believe me). This capacity is all that Starlink > >>> has to share among its users in a cell. No matter how many > satellites > >>> they launch or how big the rocket. Add more users in a cell, and the > >>> capacity per user there has to go down. Law of nature. > > > > And users will need to know what they have on a minute-by-minute basis so > > that they avoid screwing themselves, let alone their neighbours. > > That is what feed-back-based rate-control and some modicum of healthy (transient > shock-absorber-style) buffering is for, no? > > > Packets going up the link, then being dropped, is just wasted. > > Sure, if we veridically knew which packts will get dropped, we could avoid sending > them in the fist place ;) > > > > > > ps: > > I have been watching: > https://www.youtube.com/playlist?list=PL-_93BVApb58SXL-BCv4rVHL-8GuC2WGb > > where they have powered up 50+ year old Apollo Transponders. > > > > -- > > ] Never tell me the odds! | ipv6 mesh networks > [ > > ] Michael Richardson, Sandelman Software Works | IoT architect > [ > > ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails > [ > > _______________________________________________ > > Starlink mailing list -- starlink@lists.bufferbloat.net > > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > Starlink mailing list -- starlink@lists.bufferbloat.net > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net > > > ------------------------------ > > End of Starlink Digest, Vol 53, Issue 14 > **************************************** > ^ permalink raw reply [flat|nested] 5+ messages in thread
* [Starlink] Re: Starlink Digest, Vol 53, Issue 14 2025-09-27 20:13 ` David P. Reed @ 2025-09-28 10:47 ` Sebastian Moeller 2025-09-28 10:59 ` David Lang 0 siblings, 1 reply; 5+ messages in thread From: Sebastian Moeller @ 2025-09-28 10:47 UTC (permalink / raw) To: David P. Reed; +Cc: starlink Hi David, > On 27. Sep 2025, at 22:13, David P. Reed via Starlink <starlink@lists.bufferbloat.net> wrote: > > > > On Saturday, September 27, 2025 02:01, starlink-request@lists.bufferbloat.net said: > > > >> Date: Fri, 26 Sep 2025 15:11:23 +0200 >> From: Sebastian Moeller <moeller0@gmx.de> >> Subject: [Starlink] Re: Starlink looking less niche as its retail >> presence expands >> To: Michael Richardson <mcr@sandelman.ca> >> Cc: starlink@lists.bufferbloat.net >> Message-ID: <F636F14C-3D51-4D81-A82B-3947554DE2AB@gmx.de> >> Content-Type: text/plain; charset=us-ascii >> >> >> >>> On 25. Sep 2025, at 19:45, Michael Richardson via Starlink >> <starlink@lists.bufferbloat.net> wrote: >>> >>> {resending without signature, since new list can't cope with attachments >> yet} >>> >>> Luis A. Cornejo via Starlink <starlink@lists.bufferbloat.net> wrote: >>>> Since Starlink controls all the wireless parts of their system. Does >>>> anybody here know what they could do to mitigate the limits of >>>> classical wireless comms, like Shannon-Hartley Capacity Theorem or the >>>> interference? >>> >>> I don't know much about this part. >>> I am kinda hijacking this thread, but I think there is a connection. >>> Dr.Pan gave a talk about Starlink measurements last week in Ottawa. >>> (The time slot was way too short. Very nice talk) >>> >>> I was thinking about the many places where bandwidth can go up and down, >> both >>> for Starlink's various mis-attachment situations, but also for OneWeb's >> Polar >>> orbit mechanism. (I didn't know it was doing that). >>> And just getting redirected to a different downlink/base-station, and then >>> have to cross over Starlink's internal network to the same exit point. >>> (Too bad MobileIP never took off) >>> >>> I think the only thing worse than bufferbloat is varying bandwidth rates. >>> That's because the only way to use that bandwidth is to introduce bufferbloat >> :-) >>> It was the cablemodem burst mechanism that clued Jim into bufferbloat. >>> >>> So my related question is, if they could mitigate, they likely can't do it >>> continuously, so things will up/down. The IETF now has a SCONE WG, with the >>> aim of inserting signals into QUIC traffic about bandwidth available. >>> Yes, meddling by middle boxes. Ick. >> >> Regarding SCONE: >> "The Standard Communication with Network Elements (SCONE) protocol is >> negotiated by QUIC endpoints. This protocol provides a means for >> network elements to signal the maximum available sustained >> throughput, or rate limits, for flows of UDP datagrams that transit >> that network element to a QUIC endpoint." > > Absolutely agree. Trying to have the network elements define a "steady state" for all entities communicating over a path in the "near future" puts the control in the wrong place. (Well, many at IETF are "net-heads" who think they should tell applications what they "need", and view the network elements as a consortium run by all-powerful communications operators like, say, the PTTs and the Comcasts). > > The best (most accurate) signal is still a "dropped packet" that results from queue overflow. And it tells you only that the source sent too much. I do have some sympathy of the network giving some help to the end-nodes detecting the "sent too much" state faster than drop. But if push comes to shove that drop is essentially the only "reliable" signal... > > There are no "guarantees" unless you run a central scheduler that takes all demand known to happen in the next period P (after the current operation period P-1), and distribute a "fair share" (according to a committee of network operators) among all users, then not admitting packets during the period P, except for those allocated. In this long enough to at least have an opinion ;) : IMHO there are three options for "fairness": a) instructed fairness, where there needs to be some veridical (preferably out of band) way to signal desired capacity share of different aggregates (and how to detect these aggregates) b) emergent fairness, where the network uses some unambiguous aggregate identifier (e.g. 2-tupel, 4-tupel. 5-tubel from the IP header) and foregoes the need for a reliable timely side-channel to signal relative importance of packets/flows c) do not bother at all Status quo is IMHO c), b) has been shown to be both achievable (at least at leaf networks) and to offer a noticeable improvement over c) for many use-cases, but everybody and their dog obsesses about a) in spite of this in practice only ever working in well controlled niches... > > In my view this kind of network-control is the opposite of good Internet design. It's almost as bad as having ChatGPT write design documents - pure artificial bullshittery. ;) > > There is an end-to-end style approach that suggests that there is no steady state end-to-end requirement at all - just the most fair sharing of the achievable "responsiveness" or "low latency" of the end-to-end applications. +1 > > To achieve that, "throughput" matters very little. Queueing delay is what matters. +1 > > The current Internet tries to make ALL queues as close to zero in length as possible. It doesn't succeed, of course - it can't predict what "end users" will try to start and when. With an oracle scheduler all problems would go away :) > There's no "statistical predictor" of short-term demand that works (yeah, you all took classes that said "assume an average load of X", and never asked "why assume that?" I did and the answer is, that's not reality or close to it). > > So the best answer of SCONE is "the maximum you can expect" has a denominator of the number of possible flows that can go through that network element, and a numerator of some fixed number. > > And by Little's Lemma, the latency you will get, if you fully allocate it, is "infinity seconds". BTW, I do see some use of SCONE's maximum the path will ever sustain information, but that is so extremely niche, that the IETF should not bother... In cake-autorate (and similar approaches like purple-aimd https://conference.apnic.net/60/assets/presentation-files/f6110bfa-918f-430a-95d6-60a4524d62cf.pdf) were we try to adapt a traffic shaper to path capacity* having that information available would offer a small optimisation avoiding trying to increase the shaper above that value... however we would need that signal not per flow and we would need to be able to trust it. And there it essentially stops, unless an ISP is using that signal for a load-bearing part of its own infrastructure there is little reason this will ever be trustworthy over the open internet... it might work for internal use-cases within corporate networks.... *) Running full steam ahead into a variant " no "statistical predictor" of short-term ..." problem you describes, aiming not for a theoretically perfect solution, but just for something better than doing nothing at all. Regards Sebastian > > > > >> >> That is going nowhere productive... >> a) restricting this to QUIC is fine only if you believe that QUIC will take over >> all traffic soon (keep in mind what we expected for IPv6) >> b) "signal the maximum available sustained throughput" on a shared network like >> the internet has a simple true answer "0B"... >> >> IMHO what we will end up doing, after exhaustively attempting and failing with >> more ambitious schemes like SCONE is tp collect something like the maximum of >> current capacity use in percent of all nodes along a path and have the endpoints >> use this to track changes in left over capacity and try to control their rates >> accordingly. That is better rate-control that is still driven by the endpoints. >> >> >>> Could Starlink even do this given the lack of L3 processing along the >>> entire link? At least according to Dr. Pan's diagrams. >>> (An L2 hop could well mess with packets too). >>> Ideally, one or more of the satellites involved in the ISL would >>> know what the current bandwidth to a given terminal is, and could inform the >>> end system. >>> >>> The two questions: >>> 1. are the limits/conditions stable enough for long enough that the >> available >>> bandwidth could be communicated back to the uplink? >> >> "long enough" is a relative term... but sure if the latency/"frequency of >> signalling" is substantially slower than the expected capacity fluctuations then >> this will not gain much, but if enough of those fluctuations are "slow" compared >> the RTT of a flow this scheme can have overall beneficial effects. >> >>> >>> 2. assuming, yes, what would the best place to do the SCONE marking? >> >> In our dreams.... in reality instead use the kind of marking that has already been >> shown to work in the real life... if I sound disillusioned about SCONE it is >> because I am, we knew even before L4S was ratified, that it is "too little, too >> late" and again instead of doing the proven thing IETF contemplates another >> academic proposal. I wonder who has the actual problem of a missing advisory >> signal that SCONE offers to solve. >> >>> >>>>> Let's recap: Spectrum's boxed in, and power is boxed in. That >> imposes >>>>> a hard limit on total capacity (look up the Shannon-Hartley Capacity >>>>> Theorem if you don't believe me). This capacity is all that Starlink >>>>> has to share among its users in a cell. No matter how many >> satellites >>>>> they launch or how big the rocket. Add more users in a cell, and the >>>>> capacity per user there has to go down. Law of nature. >>> >>> And users will need to know what they have on a minute-by-minute basis so >>> that they avoid screwing themselves, let alone their neighbours. >> >> That is what feed-back-based rate-control and some modicum of healthy (transient >> shock-absorber-style) buffering is for, no? >> >>> Packets going up the link, then being dropped, is just wasted. >> >> Sure, if we veridically knew which packts will get dropped, we could avoid sending >> them in the fist place ;) >> >> >>> >>> ps: >>> I have been watching: >> https://www.youtube.com/playlist?list=PL-_93BVApb58SXL-BCv4rVHL-8GuC2WGb >>> where they have powered up 50+ year old Apollo Transponders. >>> >>> -- >>> ] Never tell me the odds! | ipv6 mesh networks >> [ >>> ] Michael Richardson, Sandelman Software Works | IoT architect >> [ >>> ] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails >> [ >>> _______________________________________________ >>> Starlink mailing list -- starlink@lists.bufferbloat.net >>> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net >> >> >> ------------------------------ >> >> Subject: Digest Footer >> >> _______________________________________________ >> Starlink mailing list -- starlink@lists.bufferbloat.net >> To unsubscribe send an email to starlink-leave@lists.bufferbloat.net >> >> >> ------------------------------ >> >> End of Starlink Digest, Vol 53, Issue 14 >> **************************************** >> > _______________________________________________ > Starlink mailing list -- starlink@lists.bufferbloat.net > To unsubscribe send an email to starlink-leave@lists.bufferbloat.net ^ permalink raw reply [flat|nested] 5+ messages in thread
* [Starlink] Re: Starlink Digest, Vol 53, Issue 14 2025-09-28 10:47 ` Sebastian Moeller @ 2025-09-28 10:59 ` David Lang 0 siblings, 0 replies; 5+ messages in thread From: David Lang @ 2025-09-28 10:59 UTC (permalink / raw) To: Sebastian Moeller; +Cc: David P. Reed, starlink Sebastian Moeller wrote: >> There are no "guarantees" unless you run a central scheduler that takes all demand known to happen in the next period P (after the current operation period P-1), and distribute a "fair share" (according to a committee of network operators) among all users, then not admitting packets during the period P, except for those allocated. > > In this long enough to at least have an opinion ;) : IMHO there are three options for "fairness": > a) instructed fairness, where there needs to be some veridical (preferably out of band) way to signal desired capacity share of different aggregates (and how to detect these aggregates) > b) emergent fairness, where the network uses some unambiguous aggregate identifier (e.g. 2-tupel, 4-tupel. 5-tubel from the IP header) and foregoes the need for a reliable timely side-channel to signal relative importance of packets/flows > c) do not bother at all > > Status quo is IMHO c), b) has been shown to be both achievable (at least at leaf networks) and to offer a noticeable improvement over c) for many use-cases, but everybody and their dog obsesses about a) in spite of this in practice only ever working in well controlled niches... The other problem to deal with is how your new system is going to do in an environement where not everyone uses it. (akd, the current Internet) and how it deals with a small number of nodes trying to get more than their 'fair share' David Lang ^ permalink raw reply [flat|nested] 5+ messages in thread
end of thread, other threads:[~2025-09-28 18:52 UTC | newest] Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- [not found] <175900400148.1561.6981645218542924150@gauss> 2025-09-28 17:32 ` [Starlink] Re: Starlink Digest, Vol 53, Issue 14 David Fernández 2025-09-28 18:51 ` Sebastian Moeller [not found] <175895289005.1561.17970219906621123011@gauss> 2025-09-27 20:13 ` David P. Reed 2025-09-28 10:47 ` Sebastian Moeller 2025-09-28 10:59 ` David Lang
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox