* [Starlink] Re: Starlink Digest, Vol 53, Issue 19
[not found] <175913283822.1561.14797222770063671010@gauss>
@ 2025-09-29 16:48 ` David P. Reed
0 siblings, 0 replies; only message in thread
From: David P. Reed @ 2025-09-29 16:48 UTC (permalink / raw)
To: starlink; +Cc: starlink
Big insights from this discussion below.
On Monday, September 29, 2025 04:00, starlink-request@lists.bufferbloat.net said:
> From: Sebastian Moeller <moeller0@gmx.de>
> Subject: [Starlink] Re: SCONE, bandwidth indeterminacy and network to
> host signals (was Re: Re: Starlink Digest, Vol 53, Issue 14)
...
> What these proposals/experiments/solutions have in common
The IETF no longer cares about experiments or working code. It's clearly a set of people with solutions looking for problems, who love to talk, especially past each other. Sebastian, your point here suggests that SCONE is so detached from real engineering exploration that it seems a waste of time. I agree.
> Here is the kicker, if you actually track the changes in bottleneck capacity and
> compare these against your own rate or intended rate increases you will be able to
> actually extrapolate when you reach that capacity and hence can do a better job of
> not over-committing too much data into the network.
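To make that concrete, the bookkeeping Sebastian describes could look something like the toy sketch below. The sample format, the names, and the linear extrapolation are my own invention for illustration - nothing here is specified by SCONE or anyone else.

// Toy sketch: track the bottleneck capacity and our own sending rate, and
// extrapolate when our rate would cross that capacity. Everything here
// (names, sample format, linear extrapolation) is hypothetical.
interface Sample {
  t: number;        // seconds since start
  capacity: number; // estimated/signalled bottleneck capacity, bits/s
  ownRate: number;  // our current sending rate, bits/s
}

// Linear extrapolation from the two most recent samples: returns seconds
// until ownRate meets capacity, 0 if already there, or null if the gap
// is not closing.
function secondsUntilCapacityReached(prev: Sample, cur: Sample): number | null {
  const dt = cur.t - prev.t;
  if (dt <= 0) return null;
  const gap = cur.capacity - cur.ownRate;               // remaining headroom, bits/s
  if (gap <= 0) return 0;                               // already at or over capacity
  const rateSlope = (cur.ownRate - prev.ownRate) / dt;  // how fast we are ramping up
  const capSlope = (cur.capacity - prev.capacity) / dt; // how fast capacity is changing
  const closingSpeed = rateSlope - capSlope;
  return closingSpeed > 0 ? gap / closingSpeed : null;  // null: no crossing ahead
}

// Example: capacity steady at 50 Mb/s, our rate ramping from 10 to 20 Mb/s
// over one second -> about 3 seconds until we would hit the bottleneck.
console.log(secondsUntilCapacityReached(
  { t: 0, capacity: 50e6, ownRate: 10e6 },
  { t: 1, capacity: 50e6, ownRate: 20e6 },
));

A sender that watches that number approach zero can stop ramping before it over-commits, instead of discovering the limit by filling a queue.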
The SCONE project seems to be driven by some vague anxiety about "slow start", but has expanded beyond that because there are no experiments or working code to discuss. (And "slow start" was a hack to start with, based on a belief that ALL meaningful use of the Internet will forever be sustained, capacity-limited mass transfers, and that no other behavior matters.)
IMHO, speaking as someone who designs communication applications and operating systems, and who uses them, this is completely wrong-headed. It's a "net-head" view - the idea that the "networks" should decide what users do and want.
What do browsers do? They typically respond to a click or user event by using the network resources they have to assemble all the constituent assets needed to create a new "frame" for the user to interact with. This can involve hundreds of remote requests and responses, driven by JavaScript programs downloaded into the browser to execute. And a fair number of "tabs" and windows may be concurrently responding to their own local events and changes. "Push" information from active connections is delivered over "mostly idle" (zero bitrate) connections, but requires very fast response from the browser.
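To put some flesh on that, here is roughly what one click turns into. The asset list and the render function are invented placeholders; nothing here is any particular browser's internals, just the shape of the traffic:

// One user click -> a burst of short, highly correlated transfers over
// connections that were idle a moment ago. The URLs and renderFrame()
// are hypothetical placeholders for illustration.
async function onUserClick(assetUrls: string[]): Promise<void> {
  const parts = await Promise.all(
    assetUrls.map(async (url) => {
      const res = await fetch(url);  // dozens to hundreds of these, all at once
      if (!res.ok) throw new Error(`failed to fetch ${url}: ${res.status}`);
      return res.arrayBuffer();
    }),
  );
  // The new "frame" appears only when the whole burst has completed, so the
  // user-perceived latency is set by the slowest of the correlated transfers.
  renderFrame(parts);
}

function renderFrame(parts: ArrayBuffer[]): void {
  // Stand-in for the real layout/paint work.
  console.log(`frame assembled from ${parts.length} assets`);
}

None of these transfers is a long, capacity-limited flow; what matters to the user is how quickly the whole correlated burst completes.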
I search in VAIN for any relationship between the SCONE project (which focuses on sharing capacity as throughput) and making the browser UI experience better.
Now, one of the phenomena that browsers create is highly correlated real-time interactions on different paths that often share a common router and outbound link. SCONE seems to believe this is not relevant at all. That's partly because OSes like Windows/Mac/Linux cannot reflect this interaction to the browser, since the network stack treats every source/dest pair as distinct.
Finally, routing doesn't take advantage of the possibility of "spreading" the burst of communication during a browser event (click, push response, ...) across a diversity of paths. Instead, SCONE seems frozen into a model that assumes no such diversity exists, because the focus is on measuring the "bottleneck".
This is reality today, in browser-land. And then there is WebRTC... which uses datagrams peer-to-peer, again highly correlated.
Yet the IETF doesn't look at reality here. It just meets and writes drafts that assume all communications are like TCP - "sessions" believed to have unlimited throughput needs, about which the participants in committees like SCONE talk past each other.
Now, QUIC isn't great - its designers seem to imagine that the problem with TCP is handshakes, so they threw them out. But there is no "control theory" in QUIC, and thus no "congestion management" possible at the endpoints that can cope with the fact that the Internet is multiplexed among highly bursty, unpredictable users. QUIC seems to be arriving, though, without any serious experimentation on its scalability.
QUIC doesn't do "slow start" at all, and each datagram's role in end-to-end browser latency (click to page-view) varies because it is highly dependent on the data content. (Predicting a collection of JavaScript code's QUIC datagram needs would require a high-grade oracle running that JavaScript code somewhere in the cloud.)
This is why I'm very sympathetic with Sebastian's view.
The network's job is to "get out of the way", by avoiding queueing delay at all costs. That's the relevant metric. And actually, packet drops are far from bad; they are the fundamental predictor of growth of queueing delay.
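A toy illustration of that last point (every number below is invented): with a shallow buffer at the bottleneck, the first drop shows up almost as soon as the offered load exceeds the drain rate, while the queueing delay is still small, and the drops keep arriving as long as the overload persists - which is exactly the signal an endpoint needs.

// Toy single-bottleneck queue, purely illustrative. The link drains 1 packet
// per millisecond; offered load jumps from 90% to 150% of capacity at t=100ms.
const drainPerMs = 1.0;
const bufferPkts = 10;    // shallow buffer at the bottleneck
let queue = 0;            // packets currently queued
let dropped = 0;

for (let ms = 1; ms <= 300; ms++) {
  queue += ms <= 100 ? 0.9 : 1.5;        // arrivals this millisecond
  if (queue > bufferPkts) {              // tail drop when the buffer is full
    dropped += queue - bufferPkts;
    queue = bufferPkts;
  }
  queue = Math.max(0, queue - drainPerMs);
  if (ms % 50 === 0) {
    const delayMs = queue / drainPerMs;  // sojourn time a new arrival would see
    console.log(`t=${ms}ms  delay=${delayMs.toFixed(1)}ms  drops=${dropped.toFixed(0)}`);
  }
}

The delay an overloaded sender would otherwise inflict gets converted into drops, and the drop rate tracks how far over capacity the offered load is. That is why a drop is information, not a failure.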