[Cerowrt-devel] Ubiquiti QOS
dpreed at reed.com
Sun May 25 21:49:00 PDT 2014
Len Kleinrock and his student proved that the "optimal" state for throughput in the Internet is the 1-2 buffer case. It's easy to think this through...
A simple intuition is that each node that qualifies as a "bottleneck" (meaning that the input rate exceeds the service rate of the outbound queue) will work optimally if it is in a double-buffering state - that is, a complete packet arrives for the outbound link during the time the current packet goes out.
That's topology independent. It's equivalent to saying that, in an optimal state, the number of packets in flight along a path between two endpoints equals the path's share of the bottleneck link's capacity times the physical minimum RTT for an MTU packet - the amount of "pipelining" that can be achieved along that path.
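To put rough numbers on that (the figures below are made-up illustrations for the arithmetic, not anything from Kleinrock's paper):

    # Sketch: optimal in-flight packet count for one flow's share of a
    # bottleneck.  Every number here is an assumed illustration.
    MTU_BITS = 1500 * 8          # one MTU packet
    bottleneck_bps = 100e6       # assumed 100 Mbit/s bottleneck link
    flow_share = 0.25            # assumed: this path gets 1/4 of the link
    min_rtt_s = 0.020            # assumed 20 ms physical minimum RTT

    in_flight = flow_share * bottleneck_bps * min_rtt_s / MTU_BITS
    print(round(in_flight))      # ~42 packets of "pipelining" on this path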
Having more buffering can only make the throughput lower or at best the same. In other words, you might get the same throughput with more buffering, but more likely the extra buffering will make things worse. (The rare special case is the "hot rod" scenario of maximum end-to-end throughput with no competing flows.)
The real congestion-management issue, which I described, is the unnecessary "bunching" of packets in a flow. Bunching can be ameliorated at the source endpoint, or controlled by the receiving endpoint transmitting an ack only when it receives a packet in the optimal state, but responding immediately so the control loop stays responsive. The analogy is "impedance matching" in complex networks of transmission lines: bunching corresponds to the standing waves that reduce power transfer when impedance is not approximately matched. Maximum power transfer does not happen if some intermediate point has a bad impedance match, storing energy that interferes with future energy transfer.
Bunching has many causes, but it's better to remove it at the entry to the network than to let it inflate the latency of competing flows.
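A minimal sketch of what removing bunching at the entry looks like - a sender that paces packets at its estimated bottleneck share instead of emitting them back to back. The rate estimate and the send function are placeholders, not a real stack:

    import time

    def paced_send(packets, est_share_bps, mtu_bytes=1500, send=print):
        # Space packets evenly at the estimated bottleneck share, so no
        # bunch ever forms at the network entry.  In practice
        # est_share_bps would come from ack timing; here it is just an
        # assumed input.
        interval = (mtu_bytes * 8) / est_share_bps   # seconds per packet
        for pkt in packets:
            send(pkt)
            time.sleep(interval)   # stand-in for a precise pacing timer

The point of the sketch is only the even spacing; everything else about a real transport is elided.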
I'm deliberately not using queueing theory descriptions, because the queueing theory and control theory associated with networks that can drop packets and have finite buffering with end-to-end feedback congestion control is quite complex, especially for non-Poisson traffic - far beyond what is taught in elementary queueing theory.
But if you want, I can dig that up for you.
The problem with understanding the network congestion phenomenon as a whole is that one cannot carry over intuitions from a single, multi-hop, linear network of nodes to the global network congestion-control problem.
[The reason a CSMA/CD (wired) or CSMA/CA (wireless) Ethernet has "collision-driven" exponential-random backoff is the same rationale - it's important to de-bunch the various flows that are competing for the Ethernet segment. The right level of randomness creates local de-bunching (or pacing) almost as effectively as a perfect, zero-latency admission control that knows the rates of all incoming queues. That is, when a packet ends, all senders with a packet ready transmit at once. They all collide and back off for different times - de-bunching the packet arrivals the next time around. This may not achieve maximal throughput, however, because newly arrived packets on the "backed-off" NICs are unnecessarily blocked - but fixing that is a different story, especially when the Ethernet is an internal path in the Internet as a whole. There you need some kind of buffer limiting on each NIC, and ideally to treat each "flow" as a distinct backoff entity.]
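As a toy model of that de-bunching (idealized slotted time and binary exponential backoff, much simplified from real CSMA/CD):

    import random

    def backoff_slots(attempts_by_sender, max_exp=10):
        # After a collision, each sender independently picks a slot in a
        # window that doubles with each failed attempt.  The random picks
        # are what spread ("de-bunch") the retransmissions over time.
        return {sender: random.randrange(2 ** min(tries, max_exp))
                for sender, tries in attempts_by_sender.items()}

    # Three senders that just collided: A and B for the first time,
    # C for the second time - C draws from a wider window.
    print(backoff_slots({"A": 1, "B": 1, "C": 2}))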
The same basic idea - using collision-based backoff to force competing flows to de-bunch themselves, while keeping the end-to-end feedback loops very short by avoiding any more than the optimal buffering - leads to a network that can operate at near-optimal throughput *and* near-optimal latency.
This is what I've been calling, in my own terminology, a "ballistic state" of the network - analogous to, but not the same as, a gaseous rather than a liquid or solid phase of matter. It is the least congested state with the most fluidity, and therefore the highest throughput of individual molecules (unlike a liquid, which transmits pressure very well but does not transmit individual tagged molecules very fast at all).
That's what Kleinrock and his student showed, counterintuitive though it may seem. (It doesn't seem counterintuitive to me at all, but many, many network engineers are taught, and continue to think, that you need lots of buffering to maximize throughput.)
I conjecture that this is an achievable operating mode of the Internet based solely on end-to-end congestion-control algorithms, probably not very different from TCP, running over a network where each switch signals congestion to all flows passing through it. It's probably the most desirable operating mode, because it maximizes throughput while minimizing latency simultaneously. There is no inherent tradeoff between the two, except when the control algorithms fail to deal with the key issues: bunching, and unnecessarily slow feedback about the "collision rate" among paths.
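One way to picture such an algorithm - and this is a sketch under assumed semantics, not a worked-out protocol - is a TCP-like sender that adjusts its pacing rate on every ack, where a switch has marked the packet if its outbound queue exceeded the double-buffering point:

    def on_ack(state, marked):
        # Additive increase / multiplicative decrease on the pacing rate,
        # driven by per-switch congestion marks.  The constants are
        # illustrative, not tuned for any real network.
        if marked:
            state["rate_bps"] *= 0.9       # congestion seen: back off fast
        else:
            state["rate_bps"] += 10000     # no congestion: probe upward
        return state["rate_bps"]

    state = {"rate_bps": 1e6}              # assumed 1 Mbit/s starting rate
    for mark in [False, False, True, False]:
        print(on_ack(state, mark))

The essential property is that the feedback arrives from every switch on the path, and the sender's reaction is fast relative to the queue-building timescale.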
It would be instructive to try to prove, in a rigorous way, that this phase of network operation cannot be achieved with an end-to-end control algorithm. Neither the proof of achievability nor the contrary proof of non-achievability has been produced. But there is no doubt that this is an achievable operational state if you have an oracle that knows all future traffic and can schedule it globally: performance is optimal, with no buffering needed.
Attempting to prove or disprove my conjecture would probably produce some really important insights.
The insight we already have is that adding buffering cannot increase throughput once the bottleneck links are in the "double-buffering" state. That state is "Pareto optimal" in a serious sense.
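A toy slotted-queue simulation makes the point visible (Bernoulli arrivals and service, all parameters assumed; nothing here models a real router):

    import random

    def delivered_rate(buffer_pkts, p_arrive=0.9, p_serve=0.5, ticks=200000):
        # Packets arrive with prob p_arrive (dropped if the buffer is
        # full) and depart with prob p_serve.  With p_arrive > p_serve
        # this is a bottleneck: throughput saturates near p_serve by
        # buffer = 2, and deeper buffers add only queueing delay.
        q = delivered = 0
        for _ in range(ticks):
            if random.random() < p_arrive and q < buffer_pkts:
                q += 1
            if q and random.random() < p_serve:
                q -= 1
                delivered += 1
        return delivered / ticks

    for b in (1, 2, 4, 64):
        print(b, round(delivered_rate(b), 3))

Past two buffers, the delivered rate stops improving; only the standing queue grows.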
On Sunday, May 25, 2014 8:18pm, "Mikael Abrahamsson" <swmike at swm.pp.se> said:
> On Sun, 25 May 2014, dpreed at reed.com wrote:
>
> > The optimum buffer state for throughput is 1-2 packets worth - in other
> > words, if we have an MTU of 1500, 1500 - 3000 bytes. Only the bottleneck
>
> No, the optimal state for throughput is to have huge buffers and have them
> filled. The optimal state for interactivity is to have very small buffers.
> FQ_CODEL tries to strike a balance between the two at 10ms of buffer. PIE
> does the same around 20ms. In order for PIE to work properly I'd say you
> need 50ms of buffering as a minimum, otherwise you're going to get 100%
> tail drop and multiple sequential drops occasionally (which might be
> desirable to keep interactivity good).
>
> My comment about 50ms is that you seldom need a lot more.
>
> --
> Mikael Abrahamsson email: swmike at swm.pp.se