[Cake] the meta problem of memory, locks, and multiple cpus
Jesper Dangaard Brouer
brouer at redhat.com
Thu Oct 29 07:00:38 EDT 2015
On Thu, 29 Oct 2015 10:08:34 +0100
Dave Taht <dave.taht at gmail.com> wrote:
> The real problem (A) that I wish we could solve at the qdisc layer
> is that we could size everything more appropriately if we actually
> *knew* the physical link rate in many cases. It is *right there* in
> sysfs for ethernet devices, if only there were some way to get at it,
> and get notifications when it changed... or merely to know the
> underlying operating range of the device at qdisc init time. It is
> dumb to allow 50MB of memory for a 10 Mbit device just to make the
> 10GigE folk happy.
Well, do you really need to know the link rate? One of the lessons
learned from CoDel (by Kathleen/Van) is that we should not try to
measure the link rate, but instead simply look at the time spent in
the queue.
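
To make that concrete, here is a minimal user-space sketch of the idea
(hypothetical names, not the kernel's sch_codel.c): stamp each packet
at enqueue, and at dequeue compare the sojourn time against a target.

  /* Minimal sketch of CoDel's core observation: never measure the
   * link rate, only the time each packet spent sitting in the queue.
   * Hypothetical names for illustration, not kernel code. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <time.h>

  #define TARGET_NS (5 * 1000 * 1000ULL)  /* 5 ms sojourn target */

  struct pkt {
      uint64_t enqueue_ns;  /* stamped when the packet enters the queue */
      /* ... payload ... */
  };

  static uint64_t now_ns(void)
  {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
  }

  static void on_enqueue(struct pkt *p)
  {
      p->enqueue_ns = now_ns();
  }

  /* At dequeue the sojourn time already reflects *any* source of
   * delay: a slow link, a busy sibling HW queue, etc. */
  static bool over_target(const struct pkt *p)
  {
      return now_ns() - p->enqueue_ns > TARGET_NS;
  }

Note there is no rate term anywhere; congestion shows up purely as
time spent in the queue.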
> The other meta problem is getting to where we can outperform mq + cake
> somehow on a CPU basis - I intensely dislike the 8 hardware and BQL
> queues in the mvneta driver and elsewhere; they totally clobber
> latency.
Another important point Van Jacobson makes is that time spent in queue
is multi-queue agnostic: if another HW queue is blocking you, you will
still see your own time spent in queue grow.
I've tried to sum this up in my talk:
"Beyond the existences of Bufferbloat, have we found the cure?"
https://youtu.be/BLCd__1mPz8?t=25m
> If there were some sort of saner locking structure that would instead
> let us repurpose those 8 hardware queues to be QoS-only, and let cake
> run on multiple CPUs in those cases...
Locking in the qdisc layer is a b*tch... but I do think the current
scheme should already allow you to use several HW queues for your
different QoS classes; a rough sketch follows below.
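
For instance (a sketch only, with a hypothetical device name; mvneta
exposes 8 TX queues), mqprio can map priorities onto groups of HW
queues, and an AQM qdisc can then be attached per queue, each running
under its own qdisc lock:

  # Sketch only: hypothetical device eth0 with 8 HW TX queues.
  # Map 4 traffic classes onto 2 HW queues each (software mapping).
  tc qdisc add dev eth0 root handle 1: mqprio num_tc 4 \
      map 0 1 2 3 0 1 2 3 0 1 2 3 0 1 2 3 \
      queues 2@0 2@2 2@4 2@6 hw 0

  # Attach an AQM per HW queue, e.g. fq_codel (or cake, out of tree).
  tc qdisc add dev eth0 parent 1:1 fq_codel
  tc qdisc add dev eth0 parent 1:2 fq_codel
  # ... and so on for parents 1:3 through 1:8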
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer