<div dir="ltr"><div>Hello,<br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Feb 15, 2019 at 10:45 PM Dave Taht <<a href="mailto:dave.taht@gmail.com">dave.taht@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I still regard inbound shaping as our biggest deployment problem,<br>
especially on cheap hardware.<br>
<br>
Some days I want to go back to revisiting the ideas in the "bobbie"<br>
shaper, other days...<br>
<br>
In terms of speeding up cake:<br>
<br>
* At higher speeds (e.g. > 200mbit) cake tends to bottleneck on a<br>
single cpu, in softirq. An LWN article just went by about a proposed<br>
set of improvements for that:<br>
<a href="https://lwn.net/SubscriberLink/779738/771e8f7050c26ade/" rel="noreferrer" target="_blank">https://lwn.net/SubscriberLink/779738/771e8f7050c26ade/</a></blockquote><div>Will this help devices with a single-core CPU?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
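</blockquote><div><br></div><div>As a side note, a quick way to confirm that shaping work really is pinned to one CPU is to watch the NET_RX row of /proc/softirqs; this is just a sketch, assuming a Linux box with that procfs file present:</div>

```shell
#!/bin/sh
# Print the per-CPU NET_RX softirq counters. If one column grows much
# faster than the others under load, receive processing (and any
# ingress shaping hung off it) is bottlenecked on that single CPU.
grep -E 'CPU|NET_RX' /proc/softirqs
```

<div>Sampling it twice a few seconds apart makes any skew obvious.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">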
<br>
* Hardware multiqueue is more and more common (APU2 has 4). FQ_codel<br>
is inherently parallel and could take advantage of hardware<br>
multiqueue, if there was a better way to express it. What happens<br>
nowadays is you get the "mq" scheduler with 4 fq_codel instances, when<br>
running at line rate, but I tend to think with 64 hardware queues,<br>
increasingly common in the >10GigE, having 64k fq_codel queues is<br>
excessive. I'd love it if there was a way to have there be a divisor<br>
in the mq -> subqdisc code so that we would have, oh, 32 queues per hw<br>
queue in this case.<br>
<br>
Worse, there's no way to attach a global shaped instance to that<br>
hardware, e.g. in cake, which forces all those hardware queues (even<br>
across cpus) into one. The ingress mirred code, here, is also a<br>
problem. a "cake-mq" seemed feasible (basically you just turn the<br>
shaper tracking into an atomic operation in three places), but the<br>
overlying qdisc architecture for sch_mq -> subqdiscs has to be<br>
extended or bypassed, somehow. (there's no way for sch_mq to<br>
automagically pass sub-qdisc options to the next qdisc, and there's no<br>
reason to have sch_mq<br></blockquote><div><br></div><div>The problem I deal with is performance on even lower-end hardware with a single queue. My experience with mq has been limited.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
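</blockquote><div><br></div><div>For anyone following along, the mq arrangement described above looks roughly like this in tc terms; the commands are only echoed here so the sequence can be read without root, and eth0 plus the 4-queue count (as on an APU2) are assumptions:</div>

```shell
#!/bin/sh
# Sketch of "mq with one fq_codel instance per hardware tx queue",
# which is what you get at line rate today. Commands are printed,
# not executed; eth0 and the queue count of 4 are illustrative.
run() { echo "tc $*"; }

run qdisc replace dev eth0 root handle 100: mq
# mq exposes one class per hardware tx queue; hang fq_codel off each:
for q in 1 2 3 4; do
    run qdisc replace dev eth0 parent 100:$q fq_codel
done
```

<div>A hypothetical cake-mq as sketched in the quoted text would keep this per-queue layout but make the instances share one atomically updated shaper, instead of each queue shaping independently.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">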
<br>
* I really liked the ingress "skb list" rework, but I'm not sure how<br>
to get that from A to B.<br></blockquote><div><br></div><div>What was this skb list rework? Is there a patch somewhere?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
* and I have a long standing dream of being able to kill off mirred<br>
entirely and just be able to write<br>
<br>
tc qdisc add dev eth0 ingress cake bandwidth X<br></blockquote><div><br></div><div>Ingress shaping on its own already seems to carry a performance hit. Do you think this approach would reduce it?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
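</blockquote><div><br></div><div>For contrast, this is roughly the ifb/mirred dance that one-liner would replace; commands are echoed rather than executed, and ifb0 plus the 100mbit figure are placeholders:</div>

```shell
#!/bin/sh
# The current ingress-shaping recipe: redirect ingress traffic to an
# ifb device and shape it there with cake. Echoed only; ifb0 and
# 100mbit are placeholders for a real setup.
run() { echo "$*"; }

run ip link add ifb0 type ifb
run ip link set ifb0 up
run tc qdisc add dev eth0 handle ffff: ingress
run tc qdisc add dev ifb0 root cake bandwidth 100mbit besteffort
run tc filter add dev eth0 parent ffff: protocol all matchall \
    action mirred egress redirect dev ifb0
```

<div>A native ingress cake would drop the extra ifb device hop and the mirred action entirely, which is presumably where part of today's ingress overhead goes.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">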
<br>
* native codel is 32 bit, cake is 64 bit. I<br></blockquote><div><br></div><div>Was there something else you forgot to write here?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
* hashing three times as cake does is expensive. Getting a partial<br>
hash and combining it into a final would be faster.<br></blockquote><div><br></div><div>Could you elaborate on how this would look, please? I read the code a while ago, so I may not have found all the places where hashing is done.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
* 8 way set associative is slower than 4 way and almost<br>
indistinguishable from 8. Even direct mapping<br></blockquote><div><br></div><div>This should be easy to address by changing the 8 ways to 4. Was there something else you wanted to write here?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
* The cake blue code is rarely triggered and inline<br>
<br>
I really did want cake to be faster than htb+fq_codel, I started a<br>
project to basically resurrect "early cake" - which WAS 40% faster<br>
than htb+fq_codel and add in the idea *only* of an atomic builtin<br>
hw-mq shaper a while back, but haven't got back to it.<br>
<br>
<a href="https://github.com/dtaht/fq_codel_fast" rel="noreferrer" target="_blank">https://github.com/dtaht/fq_codel_fast</a><br>
<br>
with everything I ripped out in that it was about 5% less cpu to start with.<br></blockquote><div><br></div><div>Perhaps further improvements to the codel_vars struct would also help fq_codel_fast. Do you think there is more to gain there?</div><div><br></div><div>A cake_fast might be worth a shot.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
I can't tell you how many times I've looked over<br>
<br>
<a href="https://elixir.bootlin.com/linux/latest/source/net/sched/sch_mqprio.c" rel="noreferrer" target="_blank">https://elixir.bootlin.com/linux/latest/source/net/sched/sch_mqprio.c</a><br>
<br>
hoping that enlightment would strike and there was a clean way to get<br>
rid of that layer of abstraction.<br>
<br>
But coming up with how to run more stuff in parallel was beyond my rcu-foo.<br>
</blockquote></div></div>