<font face="arial" size="2"><p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">Upon thinking about this, here's a radical idea:</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">the expected time until a bottleneck link clears, that is, 0 packets are in the queue to be sent on it, must be < t, where t is an Internet-wide constant corresponding to the time it takes light to circle the earth.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">This is a local constraint, one that is required of a router. It can be achieved in any of a variety of ways (for example choosing to route different flows on different paths that don't include the bottleneck link).</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">It need not be true at all times - but when I say "expected time", I mean that the queue's behavior is monitored so that this situation is quite rare over any interval of ten minutes or more.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">If a bottleneck link is continuously full for more than the time it takes for packets on a fiber (< light speed) to circle the earth, it is in REALLY bad shape. That must never happen.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">Why is this important?</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">It's a matter of control theory - if the control loop delay gets longer than its minimum, instability tends to take over no matter what control discipline is used to manage the system.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">Now, it is important as hell to avoid bullshit research programs that try to "optimize" ustilization of link capacity at 100%. Those research programs focus on the absolute wrong measure - a proxy for "network capital cost" that is in fact the wrong measure of any real network operator's cost structure. The cost of media (wires, airtime, ...) is a tiny fraction of most network operations' cost in any real business or institution. We don't optimize highways by maximizing the number of cars on every stretch of highway, for obvious reasons, but also for non-obvious reasons.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">Latency and lack of flexibiilty or reconfigurability impose real costs on a system that are far more significant to end-user value than the cost of the media.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">A sustained congestion of a bottleneck link is not a feature, but a very serious operational engineering error. People should be fired if they don't prevent that from ever happening, or allow it to persist.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">This is why telcos, for example, design networks to handle the expected maximum traffic with some excess apactity. This is why networks are constantly being upgraded as load increases, *before* overloads occur.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">It's an incredibly dangerous and arrogant assumption that operation in a congested mode is acceptable.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">That's the rationale for the "radical proposal".</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">Sadly, academic thinkers (even ones who have worked in industry research labs on minor aspects) get drawn into solving the wrong problem - optimizing the case that should never happen.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">Sure that's helpful - but only in the same sense that when designing systems where accidents need to have fallbacks one needs to design the fallback system to work.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">Operating at fully congested state - or designing TCP to essencially come close to DDoS behavior on a bottleneck to get a publishable paper - is missing the point.</p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;"> </p>
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">On Monday, September 27, 2021 10:50am, "Bob Briscoe" <research@bobbriscoe.net> said:<br /><br /></p>
<div id="SafeStyles1632865574">
<p style="margin:0;padding:0;font-family: arial; font-size: 10pt; overflow-wrap: break-word;">> Dave,<br />> <br />> On 26/09/2021 21:08, Dave Taht wrote:<br />> > ... an exploration of smaller mss sizes in response to persistent congestion<br />> ><br />> > This is in response to two declarative statements in here that I've<br />> > long disagreed with,<br />> > involving NOT shrinking the mss, and not trying to do pacing...<br />> <br />> I would still avoid shrinking the MSS, 'cos you don't know if the<br />> congestion constraint is the CPU, in which case you'll make congestion<br />> worse. But we'll have to differ on that if you disagree.<br />> <br />> I don't think that paper said don't do pacing. In fact, it says "...pace<br />> the segments at less than one per round trip..."<br />> <br />> Whatever, that paper was the problem statement, with just some ideas on<br />> how we were going to solve it.<br />> after that, Asad (added to the distro) did his whole Masters thesis on<br />> this - I suggest you look at his thesis and code (pointers below).<br />> <br />> Also soon after he'd finished, changes to BBRv2 were introduced to<br />> reduce queuing delay with large numbers of flows. You might want to take<br />> a look at that too:<br />> https://datatracker.ietf.org/meeting/106/materials/slides-106-iccrg-update-on-bbrv2#page=10<br />> <br />> ><br />> > https://www.bobbriscoe.net/projects/latency/sub-mss-w.pdf<br />> ><br />> > OTherwise, for a change, I largely agree with bob.<br />> ><br />> > "No amount of AQM twiddling can fix this. The solution has to fix TCP."<br />> ><br />> > "nearly all TCP implementations cannot operate at less than two packets per<br />> RTT"<br />> <br />> Back to Asad's Master's thesis, we found that just pacing out the<br />> packets wasn't enough. There's a very brief summary of the 4 things we<br />> found we had to do in 4 bullets in this section of our write-up for netdev:<br />> https://bobbriscoe.net/projects/latency/tcp-prague-netdev0x13.pdf#subsubsection.3.1.6<br />> And I've highlighted a couple of unexpected things that cropped up below.<br />> <br />> Asad's full thesis:<br />> <br />> Ahmed, A., "Extending TCP for Low Round Trip Delay",<br />> <br />> Masters Thesis, Uni Oslo , August 2019,<br />> <br />> <https://www.duo.uio.no/handle/10852/70966>.<br />> Asad's thesis presentation:<br />> https://bobbriscoe.net/presents/1909submss/present_asadsa.pdf<br />> <br />> Code:<br />> https://bitbucket.org/asadsa/kernel420/src/submss/<br />> Despite significant changes to basic TCP design principles, the diffs<br />> were not that great.<br />> <br />> A number of tricky problems came up.<br />> <br />> * For instance, simple pacing when <1 ACK per RTT wasn't that simple.<br />> Whenever there were bursts from cross-traffic, the consequent burst in<br />> your own flow kept repeating in subsequent rounds. We realized this was<br />> because you never have a real ACK clock (you always set the next send<br />> time based on previous send times). So we set up the the next send time<br />> but then re-adjusted it if/when the next ACK did actually arrive.<br />> <br />> * The additive increase of one segment was the other main problem. When<br />> you have such a small window, multiplicative decrease scales fine, but<br />> an additive increase of 1 segment is a huge jump in comparison, when<br />> cwnd is a fraction of a segment. 
"Logarithmically scaled additive<br />> increase" was our solution to that (basically, every time you set<br />> ssthresh, alter the additive increase constant using a formula that<br />> scales logarithmically with ssthresh, so it's still roughly 1 for the<br />> current Internet scale).<br />> <br />> What became of Asad's work?<br />> Altho the code finally worked pretty well {1}, we decided not to pursue<br />> it further 'cos a minimum cwnd actually gives a trickle of throughput<br />> protection against unresponsive flows (with the downside that it<br />> increases queuing delay). That's not to say this isn't worth working on<br />> further, but there was more to do to make it bullet proof, and we were<br />> in two minds how important it was, so it worked its way down our<br />> priority list.<br />> <br />> {Note 1: From memory, there was an outstanding problem with one flow<br />> remaining dominant if you had step-ECN marking, which we worked out was<br />> due to the logarithmically scaled additive increase, but we didn't work<br />> on it further to fix it.}<br />> <br />> <br />> <br />> Bob<br />> <br />> <br />> --<br />> ________________________________________________________________<br />> Bob Briscoe http://bobbriscoe.net/<br />> <br />> _______________________________________________<br />> Ecn-sane mailing list<br />> Ecn-sane@lists.bufferbloat.net<br />> https://lists.bufferbloat.net/listinfo/ecn-sane<br />> </p>