We can reliably say that most of the cost comes from the DRR scheduler.

FQ based on virtual times, such as start-time fair queuing, costs O(log N), where N is the number of packets in the queuing system.
DRR as designed by Shreedhar and Varghese provides a good approximation of fair queuing at O(1) cost, so you're not wrong, Dave,
at least for DRR. But I don't see any other cost to add to the checklist.
Of course, every algorithm comes with a different constant factor, but that is really implementation dependent.
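
To put something concrete behind the O(1) claim, here is a minimal sketch of one DRR round in the spirit of the Shreedhar/Varghese paper (structures and names are illustrative, not lifted from any real implementation). Each backlogged flow gets exactly one quantum per visit, and with a quantum of at least one MTU every visited backlogged flow sends at least one packet, so the amortized work per packet is constant:

struct pkt  { struct pkt *next; int len; };
struct flow { struct pkt *head; int deficit; };

/* One DRR round over a fixed array of flows: refill each backlogged
 * flow's deficit by one quantum, then emit as many head packets as the
 * deficit covers.  With quantum >= MTU, every visited backlogged flow
 * sends at least one packet, so the work per packet is O(1) amortized. */
static int drr_round(struct flow *flows, int nflows, int quantum,
                     struct pkt **out, int max_out)
{
    int sent = 0;

    for (int i = 0; i < nflows && sent < max_out; i++) {
        struct flow *f = &flows[i];

        if (!f->head) {            /* idle flow keeps no credit */
            f->deficit = 0;
            continue;
        }
        f->deficit += quantum;     /* exactly one refill per visit */

        while (f->head && f->head->len <= f->deficit && sent < max_out) {
            struct pkt *p = f->head;
            f->head = p->next;
            f->deficit -= p->len;
            out[sent++] = p;
        }
    }
    return sent;
}

(A real scheduler keeps only the backlogged flows on an active list instead of scanning idle slots, which is what both the paper and fq_codel do; that doesn't change the per-packet bound.)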

SFQ in Linux uses longest-queue drop, which brings back a non-constant cost because
it has to find the longest queue; that is O(log F) in the worst case, where F is the number of active flows (those with at least one packet queued).
Still smaller than start-time fair queuing.
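
For what it's worth, the O(log F) figure is what you get if the active flows are kept in a max-heap ordered by their backlog; I'm not claiming the Linux code is organized exactly that way, it's just the textbook structure that gives that bound. Roughly:

struct sfq_flow { int backlog; /* bytes queued; packet list omitted */ };

/* Restore the max-heap property from index i downward: O(log F). */
static void sift_down(struct sfq_flow *heap, int nflows, int i)
{
    for (;;) {
        int l = 2 * i + 1, r = 2 * i + 2, largest = i;

        if (l < nflows && heap[l].backlog > heap[largest].backlog)
            largest = l;
        if (r < nflows && heap[r].backlog > heap[largest].backlog)
            largest = r;
        if (largest == i)
            return;

        struct sfq_flow tmp = heap[i];
        heap[i] = heap[largest];
        heap[largest] = tmp;
        i = largest;
    }
}

/* Longest-queue drop: the longest queue is always heap[0]; after
 * removing `bytes` from it, one sift-down repairs the heap. */
static void drop_from_longest(struct sfq_flow *heap, int nflows, int bytes)
{
    if (nflows == 0)
        return;
    heap[0].backlog -= bytes;   /* the actual packet unlinking is omitted */
    sift_down(heap, nflows, 0);
}

Enqueue needs the symmetric sift-up when a flow's backlog grows, so the log F cost shows up on that path too.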

But DRR, as implemented in fq_codel, does not do that, since there is a single AQM instance per queue.
That costs more memory, but not more time.
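
Concretely, each DRR queue just carries its own CoDel state, so reaching the right AQM instance is an index off the flow hash, never a search over flows. A rough sketch (field names are illustrative, not the exact kernel structures):

struct pkt;

/* Per-flow CoDel bookkeeping, one copy per DRR queue. */
struct codel_state {
    unsigned int       count;        /* drops since entering dropping state */
    unsigned long long drop_next_ns; /* time of the next scheduled drop */
    int                dropping;
};

struct fq_flow {
    struct pkt        *head, *tail;
    int                deficit;
    struct codel_state cvars;        /* the "single AQM per queue" */
};

/* O(1) in time, O(F) in memory: the AQM state is found by indexing. */
static struct codel_state *aqm_for(struct fq_flow *flows,
                                   unsigned int hash, unsigned int nflows)
{
    return &flows[hash % nflows].cvars;
}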

I'm not sure about the DRR implementation in CAKE, but there should be no difference in terms of complexity.
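
And since you asked about the queue-selection loop: this is roughly the shape of the fq_codel dequeue path, simplified and with illustrative names (as far as I can tell, CAKE's loop has the same structure at this level of detail). Every iteration is O(1); the only unbounded part is how many empty or out-of-credit flows get rotated before one yields a packet, which is exactly the worst case you're pointing at:

struct pkt     { struct pkt *next; int len; };
struct fq_flow { struct fq_flow *next; struct pkt *head; int deficit; };
struct flist   { struct fq_flow *head, *tail; };

static void flist_push(struct flist *l, struct fq_flow *f)
{
    f->next = NULL;
    if (l->tail)
        l->tail->next = f;
    else
        l->head = f;
    l->tail = f;
}

static struct fq_flow *flist_pop(struct flist *l)
{
    struct fq_flow *f = l->head;
    if (f && !(l->head = f->next))
        l->tail = NULL;
    return f;
}

/* Simplified fq_codel-style dequeue: serve new flows before old ones,
 * and when a flow runs out of credit give it one quantum and rotate it
 * to the tail of the old list.  Each pass of the loop is O(1); the
 * number of passes before a packet comes out is not strictly bounded. */
static struct pkt *fq_dequeue(struct flist *new_flows,
                              struct flist *old_flows, int quantum)
{
    for (;;) {
        struct flist *list = new_flows->head ? new_flows : old_flows;
        struct fq_flow *f = list->head;

        if (!f)
            return NULL;              /* nothing queued anywhere */

        if (f->deficit <= 0) {        /* out of credit: refill and rotate */
            f->deficit += quantum;
            flist_pop(list);
            flist_push(old_flows, f);
            continue;
        }
        if (!f->head) {               /* flow drained: forget about it */
            flist_pop(list);
            continue;
        }

        struct pkt *p = f->head;      /* CoDel would vet p right here */
        f->head = p->next;
        f->deficit -= p->len;
        return p;                     /* flow keeps its slot while it has credit */
    }
}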


M. Shreedhar and G. Varghese, "Efficient fair queuing using deficit round-robin," IEEE/ACM Transactions on Networking, vol. 4, no. 3, pp. 375-385, June 1996. doi: 10.1109/90.502236
https://www2.cs.duke.edu/courses/spring09/cps214/papers/drr.pdf


On Mon, Aug 26, 2019 at 6:28 AM Dave Taht <dave.taht@gmail.com> wrote:
In my rant on nqb I misspoke on something, and I feel guilty (for the
accidental sophistry) and want to express it better next time.

https://mailarchive.ietf.org/arch/msg/tsvwg/hZGjm899t87YZl9JJUOWQq4KBsk

I said:

"Whether you have 1 queue or thousands in a fq'd system, the code is
the same as is the "complexity" for all intended
purposes."

But I'm wrong about the "complexity" part of that statement,
particularly if you are thinking about pure hardware. pie/codel are
O(1) for purely marked traffic. For dropping, well, it's easier to
reason about random drop probabilities and extrapolate out to some
number of loops to bound at some value (?) where you just give up and
deliver the packet, based on however much budget you have between
packets in the hw. (we have so much budget in the bql and wifi world
I've never cared) It's harder to reason about codel, but you can still
have a bounded loop if you want one.

fq_codel is selecting a queue to dequeue - so it's not O(1) for
finding that queue.  Selecting the right queue can take multiple loops
through the whole queue space, based on the quantum, and then on top
of that you have the drop decision-making,
so you have best case O(1), worst case (?) and average/median, whatever....

So if you wanted to put a bound on it (say, you were writing in ebpf
or the hw) "for the time spent finding a packet to deliver",
how would you calculate a good time to give up in any of these cases
(pie, codel, fq_codel, pick another fq algo...), and just deliver a
packet?

(my gut says 6-11 loops btw and it's not telling me why)

But if you bounded the loop seeking the right queue, what would happen?

And if you bounded the loop by giving up on the drop decision, what
would happen?

This is giving me flashbacks to "the benefit of drop tail" back in 2012-2014.

--

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740
_______________________________________________
Ecn-sane mailing list
Ecn-sane@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/ecn-sane