From: "Toke Høiland-Jørgensen" <toke@toke.dk>
To: Dave Taht <dave.taht@gmail.com>
Cc: Sebastian Moeller <moeller0@gmx.de>,
Cake List <cake@lists.bufferbloat.net>
Subject: Re: [Cake] tossing acks into the background queue
Date: Tue, 23 Nov 2021 16:49:31 +0100
Message-ID: <874k82dg8k.fsf@toke.dk>
In-Reply-To: <CAA93jw7vcsH5XscyC_z1YCQ2-HD0X2dtmNSF6jQwj5Ygzqe46g@mail.gmail.com>

Dave Taht <dave.taht@gmail.com> writes:

> On Tue, Nov 23, 2021 at 2:39 AM Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>
>> Sebastian Moeller <moeller0@gmx.de> writes:
>>
>> > Hi Dave,
>> >
>> > On 23 November 2021 08:32:06 CET, Dave Taht <dave.taht@gmail.com> wrote:
>> >>The context of my question is basically this:
>> >>
>> >>Is cake baked? Is it done?
>> >
>> > How about per MAC address fairness (useful for ISPs and to treat
>> > IPv4/6 equally)?
>> >
>> > How about configurable number of queues (again helpful for ISPs)?
>>
>> FWIW I don't think CAKE is the right thing for ISPs, except in a
>> deployment where there's a single CAKE instance per customer. For
>> anything else (i.e., a single shaper that handles multiple customers),
>> you really need hierarchical policy enforcement like in a traditional
>> HTB configuration. And retrofitting this on top of CAKE is going to
>> conflict with the existing functionality, so it probably has to be a
>> separate qdisc anyway.
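
[For concreteness, the kind of hierarchical HTB setup meant here looks
roughly like the sketch below. Illustrative only; the device name,
rates, and class IDs are made up:]

  # One HTB tree shaping a shared link, one class per customer:
  tc qdisc add dev eth0 root handle 1: htb default 30
  tc class add dev eth0 parent 1: classid 1:1 htb rate 1gbit ceil 1gbit
  # Customer A: 100 Mbit guaranteed, may borrow up to the link rate
  tc class add dev eth0 parent 1:1 classid 1:10 htb rate 100mbit ceil 1gbit
  # Customer B: 50 Mbit guaranteed
  tc class add dev eth0 parent 1:1 classid 1:20 htb rate 50mbit ceil 1gbit
  # Leaf qdiscs handle per-flow fairness within each customer
  tc qdisc add dev eth0 parent 1:10 fq_codel
  tc qdisc add dev eth0 parent 1:20 fq_codel
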
>
> What progress has been made on breaking the HTB locks in the last few
> years?

None. I don't see that happening any time soon; even the simple
pfifo_fast qdisc is uncovering all kinds of bugs when running in
lockless mode.

Jesper basically solved the contention issue by partitioning the
traffic and running multiple instances:

https://github.com/xdp-project/xdp-cpumap-tc

That doesn't work for bandwidth sharing across instances, though, so it
solves the ISP "separate rates per customer" case, but not the CAKE
"shape a single link" case.
> Given the enormous number of hw tx/rx queues we see today (64+ on
> 10gbit), trying to charge off bandwidth per queue in a cake-derived
> shaper and protecting the merge with RCU seemed plausible...

Yeah, that's what I was going to try, but it turned out to be decidedly
non-trivial to make sch_cake itself mq-aware, so I gave up. My hope is
that this will become possible once we get sch_bpf: then we could run
separate per-queue instances that share a single atomic variable to
keep the total bandwidth in sync...
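
[A minimal user-space sketch of that idea, with hypothetical names;
sch_bpf doesn't exist yet, and a real qdisc would do the refill from a
kernel timer rather than a tick callback:]

  /* Several per-CPU shaper instances drain one shared token bucket,
   * so the aggregate stays at the configured rate without sharing a
   * qdisc lock. */
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>

  static _Atomic int64_t tokens;  /* shared byte budget */

  /* Refill from a periodic timer at the configured link rate. */
  void refill(int64_t bytes_per_tick, int64_t burst_limit)
  {
      if (atomic_fetch_add(&tokens, bytes_per_tick) > burst_limit)
          atomic_store(&tokens, burst_limit);  /* clamp the burst */
  }

  /* Each instance calls this before transmitting a packet. */
  bool try_send(int64_t pkt_len)
  {
      if (atomic_fetch_sub(&tokens, pkt_len) >= pkt_len)
          return true;                     /* budget available */
      atomic_fetch_add(&tokens, pkt_len);  /* undo; hold the packet */
      return false;
  }
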
-Toke