[Cake] packet mass, ecn, and a fractional count

Dave Taht dave.taht at gmail.com
Sat May 9 12:37:12 EDT 2015


On Thu, May 7, 2015 at 10:12 PM, Jonathan Morton <chromatix99 at gmail.com> wrote:
>
>> On 8 May, 2015, at 05:32, Dave Taht <dave.taht at gmail.com> wrote:
>>
>> from observing behaviors with large numbers of flows, fq_codel derived
>> algorithms start to struggle to achieve the desired delay, cake
>> seemingly less so, and perhaps this is related to collisions also.
>>
>> A thought would be to use a fractional (fixed point) increment to
>> count rather than "1" when larger numbers of flows are present.
>
> Given that cake handles this extreme case better than average already, I’m not particularly concerned about trying to improve it further.  Adding set-associative hashing (or red-black trees for perfect isolation, if you prefer) to other FQ qdiscs might be a better idea than fudging codel.

:)
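For the record, the fractional-count idea quoted above might look something like this. This is purely a speculative sketch, not code from cake or fq_codel, and the 1/16-per-flow scale factor is a made-up placeholder:

```python
# Speculative sketch of the "fractional count" idea, not actual
# fq_codel/cake code: keep CoDel's drop count in 16.16 fixed point and
# bump it by 1 plus a fraction that grows with the number of active
# flows, so the control law ramps up a bit faster under heavy load.
from math import isqrt

INTERVAL_US = 100_000  # CoDel interval: 100 ms, in microseconds
SHIFT = 16             # fixed-point fraction bits (16.16 format)

def next_signal_delay(count_fp: int) -> int:
    """CoDel's control law, interval / sqrt(count), on a fixed-point count."""
    count = max(count_fp >> SHIFT, 1)
    return INTERVAL_US // isqrt(count)

def bump_count(count_fp: int, active_flows: int) -> int:
    """Add 1 + active_flows/16 in fixed point (the /16 is arbitrary)."""
    increment = (1 << SHIFT) + ((active_flows << SHIFT) >> 4)
    return count_fp + increment
```

With one flow this degenerates to the stock behaviour (count goes up by 1 per signal); with 16 colliding flows the count climbs twice as fast, halving the signalling interval sooner.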

A larger point behind my sending what, 20 or so? speculative ideas to
the cake mailing list is the hope of inspiring better research than,
for example, the recent "hard limit codel" paper. I would hope someone
gets around to exploring half of them one day.

The largest point, though, is that achieving 5-20ms of queue delay is
still quite a dang lot when the path delay is, say, a few hundred
microseconds on gigE.

The "right" amount of buffering is *1* packet, all the time (the goal
is nearly 0 latency with 100% utilization). We are quite far from
achieving that on anything with multiple buffering barriers in the way,
in hardware and in software, but it is helpful to keep that goal in
mind... and to try to find people who might want to sink the time into
exploring those problems too...

> There’s a reasonably straightforward answer for why flow collisions might cause worse AQM behaviour.  Assume a situation with a very large number of flows, so adding one more flow doesn’t change the throughput of existing flows much.  Now assume that most queues have just one flow assigned, but there are a few with two each.  (Ignore the possibility of more, for simplicity.)
>
> The flows assigned to the doubled queues will have half the throughput each, compared to those in single queues.  This also means that only half the packets are available for sending congestion signals on, and since codel marks packets on a fixed schedule once it is triggered, each flow will therefore receive only half the congestion signals.  *BUT* each flow still gets the same IW, so a doubled queue is twice as full as singles to begin with.
>
> So with perfect flow isolation (and a separate codel per queue), codel’s signalling rate naturally scales with the number of flows.  With collisions, that doesn’t happen so reliably; it is at best a sublinear scaling.  Without FQ at all, representing the pessimal collisions case, codel has to wait for its signalling rate to grow over time in order to match the number of flows, so it won’t react as quickly to an abruptly imposed load.

Agreed, and I think this could be reworked into a better explanation
of why fq_codel and its derivatives work so much better than
single-queue AQMs.

And in part (cf. Tracy-Widom) due to this better flow isolation, we
can scale up the drop/mark rate mildly faster and achieve lower
latency sooner when more flows are in play.
>
>  - Jonathan Morton
>



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67



More information about the Cake mailing list