<div dir="ltr"><div dir="ltr">
<div dir="ltr"><div>This discussion is fascinating and made me think of a couple of points I really wish more people would grok:</div><div><br></div><div>1. What matters for the amount of queuing is the ratio of load over capacity, or demand/supply, if you like. This ratio, at any point in time, determines how quickly a queue fills or empties. Strictly speaking, the excess of load over capacity is the derivative of the queue depth. Drops in capacity are equivalent to spikes in load from this point of view.<br></div><div><br></div><div>This means the rate adaptation of WiFi and LTE, and link changes in the Starlink network, have far greater potential to cause latency spikes than TCP does, even when many users connect at the same time. WiFi rates can go from 1000 to 1 from one packet to the next, and whenever that happens there simply isn't time for TCP or any other end-to-end congestion controller to react. In the presence of capacity-seeking traffic there will, inevitably, be a latency spike (or packet loss) when link capacity drops.<br></div><div><br></div><div>I'm presenting a paper on this at ICC next week, and the preprint is here: <a href="https://arxiv.org/abs/2111.00488">https://arxiv.org/abs/2111.00488</a></div><div><br></div><div>2. IF you can describe how the ratio of demand to supply (or load/capacity) changes over time (i.e., how much and how quickly it can change), then we can use queuing theory (and/or simulations) to work out the utilization vs. queuing delay trade-off, including transient behaviour. Handling transients is what FQ excels at.<br></div><div><br></div><div>Because of the need for frequent link changes in the Starlink network, there will be a need for more buffering than in your typical (relatively) static network. Not only because the load changes quickly, but because the capacity does as well. This causes rapid changes in the load-to-capacity ratio, which will cause queues and/or packet loss unless it's planned <i>really</i> well. 
I'm not going to say that's impossible, but it's certainly hard. <br></div><div><br></div><div>Some queuing and some deliberate under-utilization are needed to achieve reliable QoE in a system like that.<br></div></div><div><br></div><div>Just my two cents!</div><div><br></div><div>Cheers,</div><div>Bjørn Ivar Teigen</div>
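P.S. Point 1 above fits in a few lines of toy fluid model (all numbers made up for illustration, nothing measured): queue depth integrates load minus capacity, so a capacity drop under steady capacity-seeking load produces a latency spike no end-to-end controller can react to in time.

```python
# Toy fluid model of point 1: the derivative of queue depth is load minus
# capacity. All numbers below are illustrative, not measured values.

def queuing_delays(load, capacity, dt=0.001):
    """Per-step queuing delay (seconds) for load/capacity traces in Mbit/s."""
    q, delays = 0.0, []
    for lam, mu in zip(load, capacity):
        q = max(0.0, q + (lam - mu) * dt)  # backlog in Mbit
        delays.append(q / mu)              # delay = backlog / drain rate
    return delays

steps = 2000                       # 1 ms steps, 2 s total
load = [80.0] * steps              # steady 80 Mbit/s of offered load
capacity = [100.0] * steps         # 100 Mbit/s link...
for t in range(500, 600):
    capacity[t] = 20.0             # ...dropping to 20 Mbit/s for 100 ms

delays = queuing_delays(load, capacity)
print(f"peak queuing delay: {max(delays) * 1000:.0f} ms")  # ~300 ms spike
```

The load never changed, yet a 100 ms capacity dip turns a 20% headroom into a ~300 ms spike. Swap the dip for an equal-sized load surge and the arithmetic is identical, which is the sense in which the two are equivalent.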
</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 13 May 2023 at 12:10, Ulrich Speidel via Starlink <<a href="mailto:starlink@lists.bufferbloat.net">starlink@lists.bufferbloat.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Here's a bit of a question to you all. See what you make of it.<br>
<br>
I've been thinking a bit about the latencies we see in the Starlink <br>
network. This is why this list exists (right, Dave?). So what do we know?<br>
<br>
1) We know that RTTs can be in the hundreds of ms even in what appear to be <br>
bent-pipe scenarios where the physical one-way path should be well under <br>
3000 km, with physical RTT under 20 ms.<br>
2) We know from plenty of traceroutes that these RTTs accrue in the <br>
Starlink network, not between the Starlink handover point (POP) and the <br>
Internet.<br>
3) We know that they aren't an artifact of the Starlink WiFi router (our <br>
traceroutes were done through their Ethernet adaptor, which bypasses the <br>
router), so they must be delays on the satellites or the teleports.<br>
4) We know that processing delay isn't a huge factor because we also see <br>
RTTs well under 30 ms.<br>
5) That leaves queuing delays.<br>
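(A quick back-of-the-envelope check of the 20 ms figure in point 1, using only the vacuum speed of light and the path length:)

```python
# Sanity check: speed-of-light RTT for a ~3000 km one-way bent-pipe path.
C_KM_PER_MS = 299_792.458 / 1000   # ~300 km per millisecond in vacuum

def physical_rtt_ms(one_way_km):
    """Round-trip propagation time, ignoring all processing and queuing."""
    return 2 * one_way_km / C_KM_PER_MS

print(f"{physical_rtt_ms(3000):.1f} ms")   # ~20 ms
```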
<br>
This issue has been known for a while now. Starlink have been innovating <br>
their hearts out around pretty much everything here - and yet, this <br>
bufferbloat issue hasn't changed, despite Dave proposing what appears to <br>
be an easy fix compared to a lot of other things they have done. So what <br>
are we possibly missing here?<br>
<br>
Going back to first principles: The purpose of a buffer on a network <br>
device is to act as a shock absorber against sudden traffic bursts. If I <br>
want to size that buffer correctly, I need to know at the very least <br>
(paraphrasing queueing theory here) something about my packet arrival <br>
process.<br>
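For intuition, the textbook M/M/1 queue shows what "something about my arrival process" buys you: it assumes exactly the kind of well-behaved Poisson arrivals discussed below, and even then mean delay explodes as utilization approaches 1. (Toy numbers only.)

```python
# Textbook M/M/1 queue: Poisson arrivals at rate lam, exponential service
# at rate mu. Mean time in system (waiting + service) is W = 1 / (mu - lam),
# valid only while utilization rho = lam / mu < 1.

def mm1_time_in_system(lam, mu):
    if lam >= mu:
        raise ValueError("unstable queue: utilization >= 1")
    return 1.0 / (mu - lam)

mu = 10_000.0  # packets per second the link can serve (illustrative)
for rho in (0.5, 0.9, 0.99):
    w_ms = mm1_time_in_system(rho * mu, mu) * 1000
    print(f"utilization {rho:.0%}: mean time in system {w_ms:.1f} ms")
```

Delay grows fifty-fold between 50% and 99% utilization, and that's with the friendliest arrival process in the book - any sizing rule inherits whatever arrival assumptions went into it.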
<br>
If I look at conventional routers, then that arrival process involves <br>
traffic generated by a user population that changes relatively slowly: <br>
WiFi users come and go. One at a time. Computers in a company get turned <br>
on and off and rebooted, but there are no instantaneous jumps in load - <br>
you don't suddenly have a hundred users in the middle of watching <br>
Netflix turning up that weren't there a second ago. Most of what we know <br>
about Internet traffic behaviour is based on this sort of network, and <br>
this is what we've designed our queuing systems around, right?<br>
<br>
Observation: Starlink potentially breaks that paradigm. Why? Imagine a <br>
satellite X handling N users that are located closely together in a <br>
fibre-less rural town watching a range of movies. Assume that N is <br>
relatively large. Say these users are currently handled through ground <br>
station teleport A some distance away to the west (bent pipe with <br>
switching or basic routing on the satellite). X is in view of both A and <br>
the N users, but with X being a LEO satellite, that bliss doesn't last. <br>
Say X is moving to the (south- or north-)east and out of A's range. <br>
Before the connection is lost, the N users migrate simultaneously to a new <br>
satellite Y that has moved into view of both A and themselves. Y is <br>
doing so from the west and is also catering to whatever users it can see <br>
there, and, let's suppose, has been using A for a while already.<br>
<br>
The point is that the user load on X and Y from users other than our N <br>
friends could be quite different. E.g., one of them could be over the <br>
ocean with few users, the other over countryside with a lot of <br>
customers. The TCP stacks of our N friends are (hopefully) somewhat <br>
adapted to the congestion situation on X with their cwnds open to <br>
reasonable sizes, but they are now thrown onto a completely different <br>
congestion scenario on Y. Similarly, say that Y had fewer than N users <br>
before the handover. For existing users on Y, there is now a huge surge <br>
of competing traffic that wasn't there a second ago - surging far faster <br>
than we would expect this to happen in a conventional network because <br>
there is no slow start involved.<br>
<br>
This seems to explain the huge jumps you see on Starlink in TCP goodput <br>
over time.<br>
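To put rough numbers on that surge (toy values, nothing measured): suppose Y drains 100 Mbit/s, already carries 60 Mbit/s, and the handed-over flows add 80 Mbit/s for the ~50 ms it takes their congestion controllers to notice and back off. Compare an instant jump (open cwnds, no slow start) against the same load arriving via a gradual ramp, a crude stand-in for slow start:

```python
# Toy comparison: buffer needed on satellite Y for an instant load jump
# vs. the same load ramping up. All numbers are illustrative.

CAP, DT = 100.0, 0.001   # link capacity (Mbit/s), 1 ms time steps

def peak_backlog(extra_load):
    """Peak queue backlog (Mbit) given per-step extra load on top of 60 Mbit/s."""
    q, peak = 0.0, 0.0
    for x in extra_load:
        q = max(0.0, q + (60.0 + x - CAP) * DT)
        peak = max(peak, q)
    return peak

surge = [80.0] * 50                         # instant jump: cwnds already open
ramp = [80.0 * t / 50 for t in range(50)]   # same load arriving gradually

print(f"instant jump needs {peak_backlog(surge):.2f} Mbit of buffer")
print(f"gradual ramp needs {peak_backlog(ramp):.2f} Mbit of buffer")
```

In this toy setup the instant jump needs roughly four times the buffer of the ramp - the kind of gap that conventional sizing rules, tuned to slowly-changing user populations, would miss.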
<br>
But could this be throwing a few spanners into the works in terms of <br>
queuing? Does it invalidate what we know about queues and queue <br>
management? Would surges like these justify larger buffers?<br>
<br>
-- <br>
****************************************************************<br>
Dr. Ulrich Speidel<br>
<br>
School of Computer Science<br>
<br>
Room 303S.594 (City Campus)<br>
<br>
The University of Auckland<br>
<a href="mailto:u.speidel@auckland.ac.nz" target="_blank">u.speidel@auckland.ac.nz</a><br>
<a href="http://www.cs.auckland.ac.nz/~ulrich/" rel="noreferrer" target="_blank">http://www.cs.auckland.ac.nz/~ulrich/</a><br>
****************************************************************<br>
<br>
<br>
<br>
_______________________________________________<br>
Starlink mailing list<br>
<a href="mailto:Starlink@lists.bufferbloat.net" target="_blank">Starlink@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/starlink" rel="noreferrer" target="_blank">https://lists.bufferbloat.net/listinfo/starlink</a><br>
</blockquote></div><br clear="all"><br><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><span style="background-color:rgb(255,255,255)"><span style="color:rgb(0,0,0)"><font size="2">Bjørn Ivar Teigen, Ph.D.</font></span></span></div><div><span style="background-color:rgb(255,255,255)"><span style="color:rgb(0,0,0)"><font size="2">Head of Research<br></font></span></span></div><span style="background-color:rgb(255,255,255)"><span style="color:rgb(0,0,0)"><font size="2"><span style="font-family:arial,sans-serif"><span style="font-style:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;display:inline;float:none">+47 47335952 |<span> </span></span><a href="mailto:bjorn@domos.ai" rel="noopener noreferrer" style="text-decoration:none;font-style:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px" target="_blank">bjorn@domos.ai</a><span style="font-style:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;display:inline;float:none"><span> </span>|<span> </span></span><a href="http://www.domos.ai" rel="noopener noreferrer" style="text-decoration:none;font-style:normal;font-weight:400;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px" target="_blank">www.domos.ai</a></span></font></span></span></div></div></div></div></div></div>