Cake - FQ_codel the next generation
From: Dave Taht <dave.taht@gmail.com>
To: Jonathan Morton <chromatix99@gmail.com>
Cc: Eric Luehrsen <ericluehrsen@hotmail.com>,
	 "cake@lists.bufferbloat.net" <cake@lists.bufferbloat.net>
Subject: Re: [Cake] [LEDE-DEV] Cake SQM killing my DIR-860L - was: [17.01] Kernel: bump to 4.4.51
Date: Thu, 2 Mar 2017 22:21:56 -0800	[thread overview]
Message-ID: <CAA93jw7WLR-ghZPYGofiZ-QAY4uCTDJFvvuB4GSQ75P4pYHC-Q@mail.gmail.com> (raw)
In-Reply-To: <D59F3712-415F-4DB7-A18E-1E55C1461723@gmail.com>

As this is devolving into a cake-specific discussion, I'm dropping the
lede mailing list from the cc.

On Thu, Mar 2, 2017 at 9:49 PM, Jonathan Morton <chromatix99@gmail.com> wrote:
>
>> On 3 Mar, 2017, at 07:00, Eric Luehrsen <ericluehrsen@hotmail.com> wrote:
>>
>> That's not what I was going for. Agree, it would not be good to depend
>> on an inferior hash. You mentioned divide as a "cost." So I was
>> proposing a thought around a "benefit" estimate. If hash collisions are
>> not as important (or are they), then what is "benefit / cost?"
>
> The computational cost of one divide is not the only consideration I have in mind.
>
> Cake’s set-associative hash is fundamentally predicated on the number of hash buckets *not* being prime, as it requires further decomposing the hash into a major and minor part when a collision is detected.  The minor part is then iterated to try to locate a matching or free bucket.
>
> This is considerably easier to do and reason about when everything is a power of two.  Then, modulus is a masking operation, and divide is a shift, either of which can be done in one cycle flat.
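(To illustrate the decomposition - a rough sketch only, not the actual
cake code; NQUEUES, WAYS, find_bucket() and tags[] are made-up names
standing in for the real per-bucket flow tags:)

    #include <linux/types.h>

    #define NQUEUES 1024                    /* must be a power of two */
    #define WAYS       8                    /* must be a power of two */

    /* With power-of-two counts the modulus is a mask and the divide is
     * a shift, so splitting the hash into a set ("major") index and a
     * way ("minor") index costs almost nothing.
     */
    static u32 find_bucket(u32 hash, const u32 *tags)
    {
            u32 reduced = hash & (NQUEUES - 1);     /* modulus -> mask  */
            u32 set     = reduced / WAYS;           /* divide  -> shift */
            u32 minor   = reduced & (WAYS - 1);
            u32 i, idx;

            for (i = 0; i < WAYS; i++) {            /* probe the set */
                    idx = set * WAYS + ((minor + i) & (WAYS - 1));
                    if (tags[idx] == hash || tags[idx] == 0)
                            return idx;             /* match or free bucket */
            }
            return reduced;                         /* set full: collide */
    }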
>
> AFAIK, however, the main CPU cost of the hash function in Cake is not the hash itself, but the packet dissection required to obtain the data it operates on.  This is something a profile would shed more light on.

I tried. MIPS wasn't a good target for profiling.

The jhash3 setup cost is bad, but I agree flow dissection can be
deeply expensive, as can the other 42+ functions a packet needs to
traverse to get from ingress to egress.

But staying on hashing:

One thing that landed in 4.10? 4.11? was fq_codel relying on skb->hash
if one already existed (injected already by TCP, by hardware, or by
the tunneling tool). We only need to compute a partial hash on the
smaller subset of keys in that case (if we can rely on skb->hash,
which we cannot do in the NAT case).
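In rough C (a hand-wavy sketch, not the actual upstream change;
pick_flow_hash() and nat_mode are made-up names here, the rest are
real kernel helpers):

    #include <linux/skbuff.h>
    #include <net/flow_dissector.h>

    static u32 pick_flow_hash(struct sk_buff *skb, bool nat_mode)
    {
            struct flow_keys keys;

            /* Reuse the hash TCP, the NIC, or a tunnel already stamped
             * on the skb.  In NAT-aware mode that hash keys on pre-NAT
             * addresses, so it is useless to us and we must dissect.
             */
            if (!nat_mode && skb->l4_hash)
                    return skb->hash;

            skb_flow_dissect_flow_keys(skb, &keys, 0);
            return flow_hash_from_keys(&keys);      /* jhash over the keys */
    }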

Another thing I did, long ago, was read the (60s-era!) literature
about set-associative CPU cache architectures... and...

In all of these cases I really, really wanted to just punt all this
extra work to hardware in ingress - computing 3 hashes can be easily
done in parallel there and appended to the packet as it completes.

I have been working quite a bit more with the arm architecture of
late, and the "perf" profiler over there is vastly better than the
mips one we've had.

(and aarch64 is *nice*. So is NEON)

- but I hadn't got around to dinking with cake there until yesterday.

One thing I'm noticing is that even the gigE-capable ARMs have weak or
non-existent L2 caches, and generally struggle to get past 700Mbits
bidirectionally on the network.

Some quick tests of pfifo vs cake on the "lime-2" (armv7 dual core) are here:

http://www.taht.net/~d/lime-2/

The rrul tests were not particularly pleasing. [1]

...

A second thing on my mind is to be able to take advantage of A) more
cores, and B) hardware that increasingly has 4 or more lanes in it.

1) Presently fq_codel's (and cake's) behavior there when set as the
default qdisc is sub-optimal - if you have 64 hardware queues you end
up with 64 instances, each with 1024 flow queues. While this might be
awesome from an FQ perspective, I really don't think the AQM will be
as good. Or maybe it might be - what happens with 64000 queues at
100Mbit?
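
(Back of the envelope, purely to frame that question: 64 instances x
1024 flow queues is 65536 queues. With, say, 1000 bulk flows active at
100Mbit, one DRR round is roughly 1000 * 1500 bytes / 100Mbit ~= 120ms
of queueing from the scheduler alone, already past codel's default
100ms interval, so it's not obvious how much useful work the AQM has
left to do.)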

2) It's currently impossible to shape network traffic across cores.
I'd like to imagine that with a single atomic exchange, or some
sloppily shared values, shaping would be feasible (sketch below).

(also softirq is a single thread, I believe)
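
Purely hypothetical sketch of what I mean (shared_tokens and
can_send() are made-up names; a periodic refill of the counter is
assumed and not shown):

    #include <linux/types.h>
    #include <linux/atomic.h>

    /* One token counter, in bytes, shared by every tx core.  "Sloppy"
     * on purpose: a race costs a packet or two of shaping accuracy,
     * never correctness.
     */
    static atomic_long_t shared_tokens;

    static bool can_send(unsigned int len)
    {
            if (atomic_long_sub_return(len, &shared_tokens) >= 0)
                    return true;                    /* within the rate  */

            atomic_long_add(len, &shared_tokens);   /* give them back   */
            return false;                           /* defer the packet */
    }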

3) mq and mqprio are commonly deployed on the high end for this.

So I've thought about doing up another version - call it, I dunno,
"smq" for "smart multi-queue" - and seeing how far we could get.

>  - Jonathan Morton
>
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake



[1] If you are on this list and are not using flent, tough. I'm not
going to the trouble of generating graphs myself anymore.

-- 
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org
