From: Kevin Darbyshire-Bryant <kevin@darbyshire-bryant.me.uk>
To: <cake@lists.bufferbloat.net>
Subject: Re: [Cake] second system syndrome
Date: Sun, 6 Dec 2015 18:21:12 +0000 [thread overview]
Message-ID: <56647C98.6000107@darbyshire-bryant.me.uk> (raw)
In-Reply-To: <CAA93jw6q_OtQpuGF5Fqx6WHmnRiHXwusot55CrhWi8wC14RKtw@mail.gmail.com>
Comments:
My feeling is that we've been here (or close to here) before, and every
time there has been a 'just one more thing/feature' Columbo moment which
puts it all on hold again. The last time was 'dual flow isolation'.
Without wishing to stir that pot too much again, I do think 'dual flow
isolation', if I understand the intention correctly*, is a feature that
'consumers' would find desirable.
I'm wondering what the hold-up is and whether I can help. I personally
pledge £200 to the cake project. I know it's not much in terms of
hours/rates etc., but please take it as a sign of how much I personally
want cake to move forward, while realising my own limitations in
helping it do so.
One feature/benefit that hasn't been measured yet is 'simplicity'. cake
offers a good shaper, fair queueing, DSCP washing, and overhead/framing
calculation/compensation, all in one pretty damn easy-to-configure package.
*trying to ensure fairness between hosts, not just between queues. I
think the main aim/thought is having a 'bittorrent' host isolated from,
and kept at lower priority than, everything else.
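
To put the asterisked point another way, here is a purely illustrative
sketch - not cake's actual code; the hash mix and bucket sizes are made
up - of why hashing on host first and flow second caps a many-flow host
at a single host share:

/* Illustration only, not cake's code: run DRR across host buckets
 * first, then across flows inside the winning bucket, so a host with
 * 40 bittorrent flows still only gets one host's share of the link. */
#include <stdint.h>

#define HOST_BUCKETS  256		/* made-up sizes */
#define FLOW_BUCKETS 1024

struct tuple {
	uint32_t saddr, daddr;
	uint16_t sport, dport;
	uint8_t  proto;
};

static uint32_t mix(uint32_t h, uint32_t v)
{
	h ^= v;
	return h * 0x9e3779b1u;		/* placeholder multiplicative hash */
}

/* Outer DRR key: source host only - every flow from one LAN host
 * lands in the same bucket. */
static uint32_t host_bucket(const struct tuple *t)
{
	return mix(0, t->saddr) % HOST_BUCKETS;
}

/* Inner DRR key: full 5-tuple, giving per-flow fairness within the
 * host's share. */
static uint32_t flow_bucket(const struct tuple *t)
{
	uint32_t h = mix(mix(mix(0, t->saddr), t->daddr),
			 ((uint32_t)t->sport << 16) | t->dport);

	return mix(h, t->proto) % FLOW_BUCKETS;
}
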
On 06/12/15 14:53, Dave Taht wrote:
> I find myself torn by 3 things.
>
> 1) The huge wins to be had in fixing wifi far outweigh what we have
> thus far achieved, or not achieved, in cake.
>
> 2) Science - Cake is like wet paint! There are knobs to fiddle, endless
> tests to run, new ideas to try... measurements to take! Papers to
> write!
>
> 3) Engineering - I just want it to be *done*. It's been too long. It
> was demonstrably faster than htb + fq_codel on weak hardware last
> June, and handled GRO peeling - the two biggest "bugs" I felt we had
> in sqm.
>
> In wearing these 3 hats, I would
>
> 3A) like to drop cake, personally, from the set of things I need to
> care about.
> 3B) But I can't, because the profusion of features needs to be fully
> evaluated.
> In this test series: http://snapon.cs.kau.se/~d/bcake_tests/ neither
> cake nor bcake was "better" than the existing codel in any measurable
> way, and in most cases they were worse. bcake did mildly better at a
> short (10ms) RTT... which was interesting.
>
> If you want to take apart this batch with "flent", looking for
> enlightenment, also, please go ahead.
>
> Were I to short circuit the science here, I'd rip out the sqrt cache
> and fold mainline codel back into cake. This would have the added
> benefit of moving us back to 32-bit-land for various values (though
> "now" becomes a bit trickier) and hopefully improving cpu efficiency
> a bit further (but this has to be done carefully unless your head is
> good at 32-bit overflow math).
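
For anyone following along at home, the 32-bit overflow care Dave means
is visible in mainline codel's Newton step for 1/sqrt(count). Roughly,
from memory (check include/net/codel.h for the real thing):

/* Approximately mainline codel's fixed-point Newton iteration for
 * 1/sqrt(count), written out from memory.  rec_inv_sqrt is Q0.16; it
 * is widened to Q0.32 for the update  x' = x * (3 - count * x^2) / 2
 * and the explicit pre-shift keeps the 64-bit multiply from
 * overflowing. */
#include <stdint.h>

#define REC_INV_SQRT_BITS  16
#define REC_INV_SQRT_SHIFT (32 - REC_INV_SQRT_BITS)

static void newton_step(uint32_t count, uint16_t *rec_inv_sqrt)
{
	uint32_t invsqrt  = ((uint32_t)*rec_inv_sqrt) << REC_INV_SQRT_SHIFT;
	uint32_t invsqrt2 = ((uint64_t)invsqrt * invsqrt) >> 32;
	uint64_t val = (3ULL << 32) - ((uint64_t)count * invsqrt2);

	val >>= 2;		/* avoid overflow in the multiply below */
	val = (val * invsqrt) >> (32 - 2 + 1);

	*rec_inv_sqrt = val >> REC_INV_SQRT_SHIFT;
}
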
>
> Next up, a series testing the fq portions...
>
> If someone (else) would like to fork cake again and do the two things
> above, I'd appreciate it.
>
> 3C) Most of the new statistics are pretty useless IMHO. Interesting,
> but in the end I mostly care about drops and marks.
>
> 3D) Don't have a use for the rate estimator either, and so far the
> dual queue idea has not materialized. I understand how it might be
> done now - using the 8 way set associative thing per DEST hash, but I
> don't really see the benefit of that vs just using a DEST hash in the
> first place.
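
In case the "8 way set associative thing" is opaque to anyone else on
the list, my mental model of it is roughly this (illustration only, not
cake's code):

/* Illustration only: a flow hashes to a set of 8 ways; reuse its way
 * when the tag matches, otherwise prefer an idle way, and only truly
 * collide when all 8 ways in the set are busy with other flows. */
#include <stdint.h>

#define SETS 128			/* made-up sizes */
#define WAYS 8

struct way {
	uint32_t tag;		/* full flow hash of the occupant */
	uint32_t backlog;	/* bytes queued; 0 means the way is idle */
};

static struct way table[SETS][WAYS];

static struct way *flow_lookup(uint32_t flow_hash)
{
	struct way *set = table[flow_hash % SETS];
	struct way *victim = &set[0];
	int i;

	for (i = 0; i < WAYS; i++) {
		if (set[i].tag == flow_hash)
			return &set[i];		/* found our own way */
		if (set[i].backlog == 0)
			victim = &set[i];	/* remember an idle way */
	}
	victim->tag = flow_hash;	/* claim it (collide if it was busy) */
	return victim;
}
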
>
> 3E) Want cake to run as fast as possible on cheap hardware and be a
> demonstrable win over htb + fq_codel - and upstream it and be done
> with it.
>
> 3F) At the moment I'm favoring peeling at the current quantum rather
> than anything more elaborate.
>
> 3G) really want the thing to work correctly down to 64k and up to at
> least a gbit - which needs testing... but probably after we pick a
> codel...
>
> 2A) As a science vehicle, there are many other things we could be
> trying in cake, and I happen to like the idea of the (currently short)
> cache in, for example, trying a faster curve at startup - or, as in
> the ns2 code, a harder curve at say count + 128 or even earlier, as
> the speed-up in drops gets pretty tiny even at count + 16. (see attached)
>
> (It doesn't make much sense to calculate the sqrt at run time - you
> can just calculate the constants elsewhere and plug them in, btw.
> Attached is a teeny proggie that does that and also explores a harder
> initial curve (you'd jump count up to where it matched the curve when
> you reverted to the invsqrt calculation) - and no, I haven't tried
> plugging this in either... DANGER! Wet Paint!)
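
For anyone without Dave's attachment to hand: precomputing the
constants offline really is only a few lines. This is not his program,
just a quick illustration (userspace, build with cc -O2 -lm):

/* Emit Q0.16 reciprocal-sqrt constants at build time instead of
 * running a Newton step per drop.  The table size of 64 is an
 * arbitrary choice for the illustration. */
#include <math.h>
#include <stdio.h>

int main(void)
{
	unsigned int count;

	for (count = 1; count <= 64; count++) {
		unsigned int v = (unsigned int)(65535.0 / sqrt((double)count));

		printf("0x%04x,%s", v, count % 8 ? " " : "\n");
	}
	return 0;
}
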
>
> I also like keeping all the core values 64 bits, from a science perspective.
>
> There are also things like reducing the number of flows - to say 256,
> 128, or even 32? - which would exercise the 8-way associative cache
> more. Or scaling that relative to the bandwidth... or to the target
> setting...
>
> and I do keep wishing we could at the very least resolve the target >
> mtu issue. std codel turns off with a single mtu outstanding. That
> arguably should have been all that was needed...
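
(The "turns off" is, as far as I can see, this check - paraphrased from
memory, not a verbatim copy of include/net/codel.h:)

/* Paraphrase: codel leaves (or never enters) its drop state whenever
 * the sojourn time is below target OR no more than one max-size
 * packet's worth of bytes is backlogged. */
#include <stdbool.h>
#include <stdint.h>

static bool below_target_or_starved(uint32_t sojourn_us, uint32_t target_us,
				    uint32_t backlog_bytes,
				    uint32_t maxpacket_bytes)
{
	return sojourn_us < target_us || backlog_bytes <= maxpacket_bytes;
}
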
>
> and then there's ecn...
>
> 1A) Boy do we have major problems with wifi, and major gains to be had
> 1B) All the new platforms have bugs everywhere, including in the
> ethernet drivers
>
> 0)
>
> So I guess it does come down to - what are the "musts" for cake before
> it goes upstream? How much more work is required, by everybody, on
> every topic, til that happens? Can we just fork off what is known to
> work reasonably well, and let the rest evolve separately in a cake2?
> (cleaning up the api in the process?) Is it still "cake" if we do
> that?
>
> Because, damn it, 2016 is going to be the year of WiFi.
>
>
> Dave Täht
> Let's go make home routers and wifi faster! With better software!
> https://www.gofundme.com/savewifi
>
>
> _______________________________________________
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake