From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sebastian Moeller
Date: Thu, 20 Oct 2022 11:36:41 +0200
To: Stuart Cheshire
Cc: Dave Täht, Rpm, Make-Wifi-fast, Cake List, bloat
Message-Id: <9989D2F5-3A6A-454E-ABB8-71A29F3AAC0D@gmx.de>
References: <938D9D45-DADA-4291-BD8A-84E4257CEE49@apple.com>
Subject: Re: [Cake] [Rpm] [Make-wifi-fast] The most wonderful video ever about bufferbloat
List-Id: Cake - FQ_codel the next generation

Hi Stuart,

> On Oct 19, 2022, at 22:44, Stuart Cheshire via Rpm wrote:
> 
> On Mon, Oct 17, 2022 at 5:02 PM Stuart Cheshire wrote:
> 
>> Accuracy be damned. The analogy to common experience resonates more.
> 
> I feel it is not an especially profound insight to observe that, “people don’t like waiting in line.” The conclusion, “therefore privileged people should get to go to the front,” describes an airport first class checkin counter, Disney Fastpass, and countless other analogies from everyday life, all of which are the wrong solution for packets in a network.
> 
>> I think the person with the cheetos pulling out a gun and shooting everyone in front of him (AQM) would not go down well.
> 
> Which is why starting with a bad analogy (people waiting in a grocery store) inevitably leads to bad conclusions.
> 
> If we want to struggle to make the grocery store analogy work, perhaps we show people checking some grocery store app on their smartphone before they leave home, and if they see that a long line is beginning to form they wait until later, when the line is shorter. The challenge is not how to deal with a long queue when it’s there, it is how to avoid a long queue in the first place.

[SM] That seems somewhat optimistic. We have been here before: short of mandating actually-working oracle schedulers on all end-points, intermediate hops will keep seeing queues, some more and some less transient. So we can strive to minimize queue build-up, sure, but we cannot avoid queues, and long queues, completely, so we need methods to deal with them gracefully.

Also, not many applications are actually helped all that much by letting information go stale in their own buffers rather than in an on-path queue. Think of an online reaction-time-gated game: the need is to distribute the current world state to all participating clients ASAP. That often means a bunch of packets that cannot reasonably be held back and paced out by the server, since, IIUC, the world state needs to be transmitted completely before clients can actually do the right thing.
Such an application will continue to dump its world-state burst per client into the network, as that is its required mode of operation. I think there are other applications with similar requirements which will make sure that traffic stays bursty, and that IMHO will cause transient queues to build up. (Probably short-duration ones, but still.)

> 
>> Actually that analogy is fairly close to fair queuing. The multiple checker analogy is one of the most common analogies in queue theory itself.
> 
> I disagree. You are describing the “FQ” part of FQ_CoDel. It’s the “CoDel” part of FQ_CoDel that solves bufferbloat. FQ has been around for a long time, and at best it partially masked the effects of bufferbloat. Having more queues does not solve bufferbloat. Managing the queue(s) better solves bufferbloat.

[SM] Yes and no. IMHO it is the FQ part that gets greedy traffic off the back of those flows that stay below their capacity share, as it (unless overloaded) will confine the consequences of exceeding one's capacity share to the flow(s) doing so. The AQM part then helps greedy traffic not to congest itself unduly.

So for quite a lot of application classes (e.g. my world-state distribution example above), FQ (or any other type of competent scheduling) will already solve most of the problem; heck, if ubiquitous it would even allow greedy traffic to switch to delay-based CC methods that can help keep queues small even without competent AQM at the bottlenecks (not that I recommend/endorse that, I am all for competent AQM/scheduling at the bottlenecks*).

> 
>> I like the idea of a guru floating above a grocery cart with a better string of explanations, explaining
>> 
>> - "no, grasshopper, the solution to bufferbloat is no line... at all".
> 
> That is the kind of thing I had in mind. Or a similar quote from The Matrix.
> While everyone is debating ways to live with long queues, the guru asks, “What if there were no queues?” That is the “mind blown” realization.

[SM] However, the "no queues" state is generally neither achievable nor desirable; queues have utility as "shock absorbers" and help keep a link busy***. I admit, though, that "no oversized queues" is far less snappy.

Regards
	Sebastian

*) Which is why I am vehemently opposed to L4S: it offers neither competent scheduling nor competent AQM. In both regimes it is admittedly better than the current status quo of having neither, but it falls so far short of the state of the art in both that deploying L4S today seems indefensible on technical grounds. And lo and behold, one of L4S's biggest proponents argues for it mainly on ideological grounds (just read "Flow Rate Fairness: Dismantling a Religion", https://dl.acm.org/doi/10.1145/1232919.1232926, and then ask yourself whether you should trust such an author to make objective design/engineering choices after tying himself to the mast that strongly**), but I digress...

**) I even have some sympathy for his goal of equalizing "cost" and not just simple flow rate, but I fail to see any way at all of supplying intermediate hops with sufficient and reliable enough information to do anything better than "aim to starve no flow". As I keep repeating, flow-queueing is (almost) never optimal, but at the same time it is almost never pessimal, as it avoids picking winners and losers as much as possible (which in turn makes it considerably harder to abuse than other unequal rate-distribution methods that rely on some characteristics of packet data).
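To make the flow-isolation point above concrete, here is a toy discrete-time sketch (my own illustrative code, not any real qdisc: the one-packet-per-tick link, the flow names and the traffic pattern are all invented) contrasting a shared FIFO with per-flow round-robin scheduling. The sparse flow's queueing delay collapses under round-robin even while the greedy flow keeps a standing queue:

```python
from collections import deque

def simulate(scheduler, arrivals):
    """arrivals: sorted list of (arrival_tick, flow_id); the link serves
    exactly one packet per tick. Returns mean queueing delay per flow."""
    queues = {}           # flow_id -> deque of arrival ticks (per-flow FIFO)
    fifo = deque()        # global arrival order, used by the "fifo" scheduler
    rr = deque()          # rotation of backlogged flows, used by "drr"
    delays = {}           # flow_id -> list of per-packet delays
    t = i = 0
    while i < len(arrivals) or any(queues.values()):
        # enqueue everything that has arrived by tick t
        while i < len(arrivals) and arrivals[i][0] <= t:
            tick, flow = arrivals[i]
            queues.setdefault(flow, deque()).append(tick)
            fifo.append(flow)
            if flow not in rr:
                rr.append(flow)
            i += 1
        # dequeue at most one packet per tick
        if scheduler == "fifo" and fifo:
            flow = fifo.popleft()
        elif scheduler == "drr" and rr:
            flow = rr.popleft()
            if len(queues[flow]) > 1:     # still backlogged after this send:
                rr.append(flow)           # go to the back of the rotation
        else:
            flow = None                   # idle tick, nothing queued
        if flow is not None:
            born = queues[flow].popleft()
            delays.setdefault(flow, []).append(t - born)
        t += 1
    return {f: sum(d) / len(d) for f, d in delays.items()}

# One greedy flow dumps 30 packets at tick 0; a sparse flow sends a
# single packet every 10 ticks.
arrivals = sorted([(0, "greedy")] * 30 + [(t, "sparse") for t in range(0, 30, 10)])
for sched in ("fifo", "drr"):
    print(sched, simulate(sched, arrivals))
```

Real FQ_CoDel of course adds byte-based DRR quanta, hashing of packets into queues, the sparse-flow optimization, and the CoDel AQM on top; the point here is only that the scheduler alone already shields the sparse flow from the greedy one's backlog.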
***) I understand that one way to avoid queues is to keep ample capacity reserves so a link "never" gets congested, but that has some issues:

a) to keep a link at, say, at most 80% capacity there needs to be some admission control (or the aggregate ingress capacity needs to be smaller than the link capacity), which really just moves around the position where a queue will form.

b) even then, most link technologies are either 100% busy or 0% busy, so if two packets from two different ingress interfaces arrive simultaneously, a micro-queue builds up as one packet needs to wait for the other to pass the link.

c) many internet access links for end users are still small enough that congestion can and will reliably happen under normal use-cases and traffic conditions; so as a user of such a link I need to deal with the queueing and cannot just wish it away.

> 
> Stuart Cheshire
> 
> _______________________________________________
> Rpm mailing list
> Rpm@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm
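P.S.: To put rough numbers on footnote ***, points a) and b): even idealized textbook queueing math says headroom shrinks queues but does not eliminate them. A sketch under the (unrealistically clean) M/M/1 assumptions, with an invented 1000 packet/s link:

```python
# Textbook M/M/1 queueing numbers (idealized Poisson arrivals and
# exponential service; the 1000 packets/s service rate is an invented
# example): mean time waiting in queue is Wq = rho / (mu - lambda).
service_rate = 1000.0                     # mu: packets per second
for utilization in (0.5, 0.8, 0.95):
    arrival_rate = utilization * service_rate          # lambda
    wq = utilization / (service_rate - arrival_rate)   # seconds in queue
    print(f"rho={utilization:.2f}: mean queueing delay {wq * 1000:.1f} ms")
    # rho=0.80 already gives 4.0 ms of queueing on a 1 ms service time
```

So even at 80% utilization the average packet queues for several service times; headroom helps, but a queue still forms somewhere, which is point a).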