[Rpm] [Make-wifi-fast] The most wonderful video ever about bufferbloat
Sebastian Moeller
moeller0 at gmx.de
Thu Oct 20 05:36:41 EDT 2022
Hi Stuart,
> On Oct 19, 2022, at 22:44, Stuart Cheshire via Rpm <rpm at lists.bufferbloat.net> wrote:
>
> On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire <cheshire at apple.com> wrote:
>
>> Accuracy be damned. The analogy to common experience resonates more.
>
> I feel it is not an especially profound insight to observe that, “people don’t like waiting in line.” The conclusion, “therefore privileged people should get to go to the front,” describes an airport first class checkin counter, Disney Fastpass, and countless other analogies from everyday life, all of which are the wrong solution for packets in a network.
>
>> I think the person with the cheetos pulling out a gun and shooting everyone in front of him (AQM) would not go down well.
>
> Which is why starting with a bad analogy (people waiting in a grocery store) inevitably leads to bad conclusions.
>
> If we want to struggle to make the grocery store analogy work, perhaps we show people checking some grocery store app on their smartphone before they leave home, and if they see that a long line is beginning to form they wait until later, when the line is shorter. The challenge is not how to deal with a long queue when it’s there, it is how to avoid a long queue in the first place.
[SM] That seems somewhat optimistic. We have been there before: short of mandating actually-working oracle schedulers on all end-points, intermediate hops will see queues, some more and some less transient. So yes, we can strive to minimize queue build-up, but we cannot avoid queues, including long ones, completely, and hence we need methods to deal with them gracefully.
Also, not many applications are actually helped all that much by letting information go stale in their own buffers instead of in an on-path queue. Think of an online, reaction-time-gated game: the need is to distribute the current world state to all participating clients ASAP. That often means a bunch of packets that cannot reasonably be held back and paced out by the server, since IIUC the world state needs to be transmitted completely before clients can actually do the right thing. Such an application will continue to dump its world-state burst per client into the network, because that is its required mode of operation. I think there are other applications with similar requirements, which will make sure that traffic stays bursty, and that IMHO will cause transient queues to build up (probably short-lived ones, but still).
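To make that burstiness concrete, here is a minimal Python sketch of such a server; all names, addresses, and sizes (world_state, CLIENTS, TICK_S) are made up for illustration, not taken from any real game engine:

# Hypothetical sketch: a game server that must push the complete world
# state to every client each tick. All constants are illustrative.
import socket
import time

MTU_PAYLOAD = 1200          # conservative per-datagram payload size
TICK_S = 1 / 60             # 60 Hz simulation tick
CLIENTS = [("192.0.2.10", 9000), ("192.0.2.11", 9000)]  # example addresses

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_world_state(world_state: bytes) -> None:
    # The state only makes sense as a whole, so each client gets the
    # full snapshot as a back-to-back burst of datagrams: pacing the
    # fragments out over the tick would just let the snapshot go stale
    # in our own buffer instead of in an on-path queue.
    chunks = [world_state[i:i + MTU_PAYLOAD]
              for i in range(0, len(world_state), MTU_PAYLOAD)]
    for addr in CLIENTS:
        for chunk in chunks:
            sock.sendto(chunk, addr)   # burst: no inter-packet delay

while True:
    state = b"\x00" * 20_000           # stand-in for a serialized snapshot
    send_world_state(state)            # one burst per client per tick
    time.sleep(TICK_S)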
>
>> Actually that analogy is fairly close to fair queuing. The multiple checker analogy is one of the most common analogies in queue theory itself.
>
> I disagree. You are describing the “FQ” part of FQ_CoDel. It’s the “CoDel” part of FQ_CoDel that solves bufferbloat. FQ has been around for a long time, and at best it partially masked the effects of bufferbloat. Having more queues does not solve bufferbloat. Managing the queue(s) better solves bufferbloat.
[SM] Yes and no. IMHO it is the FQ part that gets greedy traffic off the back of those flows that stay below their capacity share, since FQ (unless overloaded) isolates the consequences of exceeding one's capacity share to the flow(s) doing so. The AQM part then helps greedy traffic avoid congesting itself unduly.
So for quite a lot of application classes (e.g. my world-state distribution example above), FQ (or any other type of competent scheduling) will already solve most of the problem. Heck, if ubiquitous it would even allow greedy traffic to switch to delay-based CC methods that can help keep queues small even without competent AQM at the bottlenecks (not that I recommend/endorse that; I am all for competent AQM/scheduling at the bottlenecks*).
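To illustrate just the isolation property, here is a toy Python sketch of the FQ idea alone, no AQM and no byte accounting; this is deliberately not how fq_codel is actually implemented (which uses DRR with byte quantums plus a per-queue CoDel instance), just the minimal round-robin intuition:

# Toy sketch of the "FQ" idea only: packets are hashed into per-flow
# queues and served round-robin, so a flow that sends far more than its
# share only grows its *own* queue. All names are illustrative.
from collections import deque, OrderedDict

class ToyFlowQueue:
    def __init__(self):
        self.flows = OrderedDict()          # flow_id -> deque of packets

    def enqueue(self, flow_id, packet):
        self.flows.setdefault(flow_id, deque()).append(packet)

    def dequeue(self):
        # Round-robin over active flows: take one packet from the
        # next-scheduled flow, then move that flow to the back.
        while self.flows:
            flow_id, q = next(iter(self.flows.items()))
            self.flows.move_to_end(flow_id)
            if q:
                return flow_id, q.popleft()
            del self.flows[flow_id]         # garbage-collect empty flows
        return None

# A greedy flow (A) and a sparse flow (B) share the link:
fq = ToyFlowQueue()
for i in range(5):
    fq.enqueue("A", f"A{i}")
fq.enqueue("B", "B0")
print([fq.dequeue() for _ in range(6)])
# -> B0 is served after at most one A packet; A's backlog waits in A's
#    own queue instead of delaying B.

Real fq_codel additionally runs CoDel on each per-flow queue, which is the part that keeps the greedy flow from congesting itself in its own queue.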
>
>> I like the idea of a guru floating above a grocery cart with a better string of explanations, explaining
>>
>> - "no, grasshopper, the solution to bufferbloat is no line... at all".
>
> That is the kind of thing I had in mind. Or a similar quote from The Matrix. While everyone is debating ways to live with long queues, the guru asks, “What if there were no queues?” That is the “mind blown” realization.
[SM] However, the "no queues" state is generally neither achievable nor desirable; queues have utility as "shock absorbers" and help keep a link busy***. I admit though that "no oversized queues" is far less snappy.
Regards
Sebastian
*) Which is why I am vehemently opposed to L4S: it offers neither competent scheduling nor competent AQM. In both regimes it is admittedly better than the current status quo of having neither, but it falls so far short of the state of the art in both that deploying L4S today seems indefensible on technical grounds. And lo and behold, one of L4S's biggest proponents supports it mainly on ideological grounds (just read "Flow Rate Fairness: Dismantling a Religion" https://dl.acm.org/doi/10.1145/1232919.1232926 and then ask yourself whether you should trust such an author to make objective design/engineering choices after already tying himself to the mast that strongly**), but I digress...
**) I even have some sympathy for his goal of equalizing "cost" and not just simple flow rate, but I fail to see any way at all of supplying intermediate hops with sufficient and reliable enough information to do anything better than "aim to starve no flow". As I keep repeating, flow queueing is (almost) never optimal, but at the same time it is almost never pessimal, as it avoids picking winners and losers as much as possible (which in turn makes it considerably harder to abuse than unequal rate-distribution methods that rely on some characteristic of the packet data).
***) I understand that one way to avoid queues is to keep ample capacity reserves so a link "never" gets congested, but that has some issues:
a) to keep a link at, say, at most 80% capacity there needs to be some admission control (or the aggregate ingress capacity needs to be smaller than the link capacity), which really just moves around the position where a queue will form.
b) even then, most link technologies are either 100% busy or 0% busy, so if two packets from two different ingress interfaces arrive simultaneously, a micro-queue builds up as one packet needs to wait for the other to pass the link (see the sketch after this list).
c) many internet access links for end users are still small enough that congestion can and will reliably happen under normal use cases and traffic conditions; so as a user of such a link I need to deal with the queueing and cannot just wish it away.
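To put a number on point b), here is a back-of-the-envelope Python sketch of the serialization delay the second packet sees, assuming a full-size 1500-byte frame; the link rates are just examples:

# Even on an uncongested link, two packets arriving at the same instant
# from different ingress ports form a micro-queue, because the egress
# link serializes one packet at a time.
PACKET_BITS = 1500 * 8          # full-size Ethernet frame, in bits

for rate_bps, label in [(1e9, "1 Gbit/s"), (100e6, "100 Mbit/s"),
                        (10e6, "10 Mbit/s")]:
    wait_s = PACKET_BITS / rate_bps   # time the second packet must wait
    print(f"{label}: second packet queues for {wait_s * 1e6:.0f} microseconds")

# 1 Gbit/s: 12 us, 100 Mbit/s: 120 us, 10 Mbit/s: 1200 us -- the slower
# the link, the less "micro" the micro-queue becomes.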
>
> Stuart Cheshire
>
> _______________________________________________
> Rpm mailing list
> Rpm at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm