From: Sebastian Moeller
Date: Tue, 23 Nov 2021 12:31:56 +0100
To: Toke Høiland-Jørgensen
Cc: Dave Täht, Cake List
Subject: Re: [Cake] tossing acks into the background queue
Message-Id: <3F51069A-D50B-4C09-AF16-FB9AA9E8D59C@gmx.de>
In-Reply-To: <87czmrcg0f.fsf@toke.dk>
References: <67BC6CC2-F088-4C0D-8433-A09F4AC452FE@gmx.de> <87czmrcg0f.fsf@toke.dk>
List-Id: Cake - FQ_codel the next generation

Hi Toke,

> On Nov 23, 2021, at 11:39, Toke Høiland-Jørgensen wrote:
>
> Sebastian Moeller writes:
>
>> Hi Dave,
>>
>> On 23 November 2021 08:32:06 CET, Dave Taht wrote:
>>> The context of my question is basically this:
>>>
>>> Is cake baked? Is it done?
>>
>> How about per-MAC-address fairness (useful for ISPs and to treat
>> IPv4/6 equally)?
>>
>> How about a configurable number of queues (again helpful for ISPs)?
>
> FWIW I don't think CAKE is the right thing for ISPs, except in a
> deployment where there's a single CAKE instance per customer.

	Fair point. My other reason for wanting to expose this is to allow easier experimentation, but I can be expected to build from modified sources myself, so that argument is rather weak.

> For
> anything else (i.e., a single shaper that handles multiple customers),
> you really need hierarchical policy enforcement like in a traditional
> HTB configuration. And retrofitting this on top of CAKE is going to
> conflict with the existing functionality, so it probably has to be a
> separate qdisc anyway.

	I had sort of ignored the fact that ISPs generally do not offer fair sharing of a link's capacity between all connected users ;) (A rough tc sketch of both deployment styles is appended below as a P.S.)

>
>> IMHO cake works pretty well, with the biggest issue being its CPU
>> demands. As far as I understand, however, that is caused by the shaper
>> component, where low latency and throughput are in direct
>> competition: if we want to relax the demands on CPU scheduling
>> latency, we need to allow for bigger buffers that keep the link busy
>> even if cake itself is not scheduled as precisely as we would desire
>> or as e.g. BQL requires.
>
> Yes, as link speed increases, batching needs to increase to keep up.

	Yes, all the way through the stack.

> This does not *have* to impact latency, as the faster link should keep
> the granularity constant in the time domain.

	Nit-pick: any batching impacts latency compared to perfect just-in-time processing; it is just that some of that impact can easily be accepted/tolerated ;)

> So experimenting with doing
> this dynamically in CAKE might be worthwhile, but probably not trivial.

	We tried to do the same for HTB/fq_codel, and testing was a bit inconclusive (then again, the affected users were not too dedicated in testing).

> And either way, CAKE is still going to be limited by being single core
> only, and fixing that requires some serious surgery that I seem to
> recall looking into and giving up at some point :(

	That is sad, and it pretty much rules out that I could make progress in that direction. The next level is shaping at ~1 Gbps, even though faster access links are becoming available, like 8.5/10 Gbps (XGS-PON is nominally 10 Gbps, but after FEC only ~8.5 Gbps are actually usable) or, for a lucky few, even 25 Gbps ...

Regards
	Sebastian

>
> -Toke
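
P.S.: For readers following along, a rough, untested sketch of the two deployment styles discussed above (one CAKE instance per customer vs. a single HTB hierarchy with CAKE leaves). The interface names (cust0, eth0), rates, class ids, and addresses are invented purely for illustration; only the cake keywords (bandwidth, unlimited, besteffort, dual-dsthost) and the htb/u32 syntax are standard tc options. This is a sketch of the idea, not a recommended ISP configuration:

	# (a) One CAKE instance per customer, e.g. on a per-subscriber
	#     interface: CAKE's built-in shaper enforces the contracted rate,
	#     dual-dsthost approximates per-host fairness towards the customer.
	tc qdisc replace dev cust0 root cake bandwidth 100mbit besteffort dual-dsthost

	# (b) One shaper for several customers: an HTB class per customer with
	#     CAKE (internal shaper disabled via "unlimited") as the leaf qdisc,
	#     plus u32 filters steering each customer's traffic into its class.
	tc qdisc replace dev eth0 root handle 1: htb default 99
	tc class add dev eth0 parent 1:  classid 1:1  htb rate 1gbit
	tc class add dev eth0 parent 1:1 classid 1:10 htb rate 100mbit ceil 100mbit
	tc class add dev eth0 parent 1:1 classid 1:20 htb rate 250mbit ceil 250mbit
	tc class add dev eth0 parent 1:1 classid 1:99 htb rate 1mbit ceil 1gbit
	tc qdisc add dev eth0 parent 1:10 cake unlimited besteffort dual-dsthost
	tc qdisc add dev eth0 parent 1:20 cake unlimited besteffort dual-dsthost
	tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dst 198.51.100.10/32 flowid 1:10
	tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dst 198.51.100.20/32 flowid 1:20

Note that even (b) only gets per-customer rate enforcement from HTB; it does not provide the per-MAC-address fairness across customers asked about above, which is the gap a separate hierarchical qdisc would have to fill.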