[Cake] cake byte limits too high by 10x

Dave Taht dave.taht at gmail.com
Mon May 25 12:46:29 PDT 2015


On Sun, May 24, 2015 at 9:47 PM, Jonathan Morton <chromatix99 at gmail.com> wrote:
>
>> On 24 May, 2015, at 08:14, Dave Taht <dave.taht at gmail.com> wrote:
>>
>> at 100Mbit we had 5 megabytes of max queuing. I don't think this was
>> Jonathan's intent, as the default if no rate was specified was 1 Mbyte.
>>
>> … On the other hand codel does not
>> react fast enough to major bursts without engaging the out of
>> bufferspace cake_drop which is pretty darn cpu intensive.
>
> The 1-megabyte default is not intended to reflect the highest possible link rate.  Frankly, if you’re triggering cake_drop() without involving truly unresponsive flows, then the buffer limit is too small - you need to give Codel the time it needs to come up to speed.  I definitely should make cake_drop() more efficient, but that’s not the problem here.

At this point I think that codel is too unresponsive to deal with
tons of flows in slow start, even if fq'd. PIE does better.

So I disagree. I think dropping at a sane, and pretty small, buffer
limit is in line with the theory and assumptions everyone has made
about the right sizes for buffering (which is usually in the range of
50-100 packets at most speeds!), and what I just did is a win over
excessive buffering - and a HUGE win on devices with many interfaces
and memory limitations. 300+ kB of buffering is quite a lot at these
speeds - the algorithm(s) settle down at about 90 kB at 100 Mbit,
which leaves plenty of room for finer levels of control and an outer
limit that is relatively sane. Fq_codel (with offloads, at the
1000-packet default) could hit 64 megabytes, and even without
offloads could hit 1.5 megabytes.
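
The arithmetic behind those two worst-case figures (my own
back-of-the-envelope numbers, not output from the cake or fq_codel
code): fq_codel's default limit is 1000 packets, and with GSO/TSO
offloads a single "packet" can be a 64 KB superpacket, while without
offloads it is at most one ~1514-byte Ethernet frame.

```python
# Worst-case bytes queued under fq_codel's default 1000-packet limit.
# Assumed sizes: 64 KiB max GSO superpacket, 1514-byte Ethernet frame.
PACKET_LIMIT = 1000
GSO_SUPERPACKET = 64 * 1024   # bytes per "packet" with offloads on
MTU_FRAME = 1514              # bytes per packet with offloads off

with_offloads = PACKET_LIMIT * GSO_SUPERPACKET   # ~64 megabytes
without_offloads = PACKET_LIMIT * MTU_FRAME      # ~1.5 megabytes

print(with_offloads, without_offloads)  # → 65536000 1514000
```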

A codel-ish algorithm that reacted faster on overload would be nice;
making cake sensitive to the number of flows seems like a partial way
to get there, and I have a few other ideas under test.
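
To illustrate why codel comes up to speed slowly (my own sketch, not
the cake code): CoDel's control law, per RFC 8289, spaces successive
drops at interval/sqrt(count), so with the default 100 ms interval it
takes a significant amount of wall-clock time before the drop rate
becomes aggressive - a burst of flows in slow start can outrun it.

```python
# Sketch of CoDel's drop spacing: once in the dropping state, the nth
# drop is scheduled interval/sqrt(n) after the previous one (RFC 8289).
import math

INTERVAL = 0.100  # seconds, the CoDel default interval

def time_to_nth_drop(n):
    """Cumulative time from entering drop state until the nth drop."""
    return sum(INTERVAL / math.sqrt(k) for k in range(1, n + 1))

# The first 10 drops are spread over roughly half a second - a long
# time relative to a swarm of flows all in slow start.
print(round(time_to_nth_drop(10), 3))  # → 0.502
```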

> I consider it *far* less important to control the short-term length of individual queues, compared to the latency observed by competing latency-sensitive traffic.

I share the opinion that fq is more important than aqm. AND: both are needed.

However, there are many other objectives that need to be met as well -
keeping the pipe filled and utilization high, for starters. The ideal
buffer length is 1 packet, all the time, with no idle periods. Then
there's smoothing out bursts, being resistant to attacks, etc. THIS
STUFF IS HARD!

We need better tools to look at it. In particular, examples of typical
workloads would be saner to engineer against than the dslreports or
flent tests.

>
> And I also think that ECN and packet-drops are too crude a tool to control queue length to the extent you want.  Hence ELR - but that’s still in the future.  (How much funding do you think we can get for that?)

At the moment, I am merely hoping that demand for those doing the work
increases enough that the potential funders (as opposed to buyers) who
need it most are willing to sink an investment into those doing the
work - and are satisfied merely by having the ideas and code in
deployable form earlier.

Most of what I have fought personally is that, with "employment",
corps expect to also own the ideas - and here there is no such
possibility. I have chosen to live in a yurt until the "problem" is
solved enough to move into hardware...

With open source methods, if corps stay maneuverable enough (as for
example free.fr, openwrt derivatives, and google fiber are), they can
capitalize on new inventions faster - but "owning the ideas" always
seems to end up being a higher priority in many mindsets.

There aren't enough minds on the planet to solve these problems
working alone; we have to work together.

>> On the
>> gripping hand there becomes no right outer limit for a wildly variable
>> 802.11ac wifi queue, with speeds from one to 1.5Gbit, but I sure would
>> like a cake-mq that handled the existing queues right with less
>> memory.
>
> Something that I haven’t had time to implement yet is a dual-mode FQ, performing both flow isolation and host isolation.  That seems like a step towards what wifi needs, as well as more directly addressing the case where swarms and sensitive traffic are running on different endpoints, without Diffserv assistance.

Pretty sure that we need way more information down at the mac80211 layer.

> However, the overall design I have in mind for a wifi-aware qdisc is a little bit inside-out compared to cake.  Cake goes:
>
> shaper -> priority -> flows -> signalling -> queues
>
> …or, with host isolation added:
>
> shaper -> priority -> hosts -> flows -> signalling -> queues
>
> What wifi needs is a bit different:
>
> (hosts/aggregates?) -> (shapers/minstrel?) -> priority -> flows -> signalling -> queues

Yep.

I don't see a huge need for "shaping" on wifi. I do see a huge need -
as a starting point - for an airtime-fair per-station queue that is
aggregation aware, and, slightly higher up, for something that is
aware of the overall workload on the AP. You HAVE to accept some
additional queuing delay when you have lots of stations. Some of what
I was prototyping is applicable to that. I should start pushing
branches for each idea, I guess.
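
One way to sketch "airtime fair per station" (names and structure are
my illustration, not any cake or mac80211 patch): classic deficit
round robin, but with the deficit charged in estimated microseconds of
airtime rather than bytes, so a slow station can't starve fast ones by
queuing the same number of frames.

```python
# Hypothetical airtime-fair DRR over stations.  QUANTUM_US and the
# airtime estimate (serialization time only, no PHY/MAC overhead or
# aggregation) are assumptions for the sake of the example.
from collections import deque

QUANTUM_US = 1000.0  # airtime credit per station per round (assumed)

class Station:
    def __init__(self, name, rate_mbps):
        self.name = name
        self.rate_mbps = rate_mbps  # current rate-control estimate
        self.queue = deque()        # pending frame sizes in bytes
        self.deficit = 0.0

    def airtime_us(self, nbytes):
        # naive estimate: bits / (bits per microsecond at this rate)
        return nbytes * 8.0 / self.rate_mbps

def drr_round(stations, sent):
    """One pass: each backlogged station spends up to its deficit."""
    for st in stations:
        if not st.queue:
            continue
        st.deficit += QUANTUM_US
        while st.queue and st.airtime_us(st.queue[0]) <= st.deficit:
            frame = st.queue.popleft()
            st.deficit -= st.airtime_us(frame)
            sent.append((st.name, frame))

# A fast and a slow station queue identical backlogs; the slow one
# gets far fewer bytes through because its frames cost more airtime.
fast, slow = Station("fast", 300.0), Station("slow", 6.0)
for _ in range(8):
    fast.queue.append(1500)
    slow.queue.append(1500)
sent = []
for _ in range(2):
    drr_round([fast, slow], sent)
fast_bytes = sum(b for n, b in sent if n == "fast")
slow_bytes = sum(b for n, b in sent if n == "slow")
print(fast_bytes, slow_bytes)  # → 12000 1500
```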


> Of course, since the priority layer is buried three+ layers deep, it’s obviously not going to use hardware 802.11e support.

I don't see a use for the QoS stuff as currently structured in cake.
It sits too high up in the stack.

>
>  - Jonathan Morton
>



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67

