[Cake] upstreaming cake in 2017?

Benjamin Cronce bcronce at gmail.com
Sat Dec 24 10:55:49 EST 2016


On Fri, Dec 23, 2016 at 3:53 AM, Jonathan Morton <chromatix99 at gmail.com>
wrote:

> >> As far as Diffserv is concerned, I explicitly assume that the standard
> RFC-defined DSCPs and PHBs are in use, which obviates any concerns about
> Diffserv policy boundaries.
> >
> >       ??? This comes close to ignoring reality. The RFCs are less
> important than what people actually send down the internet.
>
> What is actually sent down the Internet right now is mostly best-effort
> only - the default CS0 codepoint.  My inbound shaper currently shows 96GB
> best-effort, 46MB CS1 and 4.3MB “low latency”.
>
> This is called the “chicken and egg” problem; applications mostly ignore
> Diffserv’s existence because it has no effect in most environments, and CPE
> ignores Diffserv’s existence because little traffic is observed using it.
>
> To solve the chicken-and-egg problem, you have to break that vicious
> cycle.  It turns out to be easier to do that on the network side, creating
> an environment where DSCPs *do* have effects which applications might find
> useful.
>
> > coming up with a completely different system (preferably randomized for
> each home network) will make gaming the DSCPs much harder
>
> With all due respect, that is the single most boneheaded idea I’ve come
> across on this list.  If the effect of applying a given DSCP is
> unpredictable, and may even be opposite to the desired behaviour - or,
> equivalently, if the correct DSCP to achieve a given behaviour is
> unpredictable - then Diffserv will *never* be used by mainstream users and
> applications.
>
> >> Cake does *not* assume that DSCPs are trustworthy.  It respects them as
> given, but employs straightforward countermeasures against misuse (eg.
> higher “priority” applies only up to some fraction of capacity),
> >
> >       But doesn’t that automatically mean that an attacker can degrade
> performance of a well configured high priority tier (with appropriate
> access control) by overloading that band, which will affect the priority of
> the whole band, no? That might not be the worst alternative, but it
> certainly is not side-effect free.
>
> If an attacker wants to cause side-effects like that, he’ll always be able
> to do so - unless he’s filtered at source.  As a more direct counterpoint,
> if we weren’t using Diffserv at all, the very same attack would degrade
> performance for all traffic, not just the subset with equivalent DSCPs.
>
> Therefore, I have chosen to focus on incentivising legitimate traffic in
> appropriate directions.
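
To make the "only up to some fraction of capacity" countermeasure quoted
above a bit more concrete, here is a rough sketch of the general idea in C.
This is not Cake's actual code; the struct names, the token-bucket
accounting and the notion of a fixed threshold rate are illustrative
assumptions only:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: a "priority" tier keeps strict priority while it
 * stays below a configured fraction of the shaped link rate, and falls
 * back to plain best-effort treatment once it exceeds that share, so
 * flooding the tier cannot starve the rest of the link. */
struct tier {
    uint64_t threshold_Bps; /* e.g. 25% of the shaped rate, in bytes/sec */
    uint64_t tokens;        /* bytes this tier may still send with priority */
    uint64_t last_ns;       /* last time tokens were replenished */
};

static void tier_refill(struct tier *t, uint64_t now_ns)
{
    uint64_t elapsed_ns = now_ns - t->last_ns;

    if (elapsed_ns > 1000000000ULL)     /* cap idle credit at one second */
        elapsed_ns = 1000000000ULL;

    /* Accrue credit at threshold_Bps, capped at one second's worth. */
    t->tokens += t->threshold_Bps * elapsed_ns / 1000000000ULL;
    if (t->tokens > t->threshold_Bps)
        t->tokens = t->threshold_Bps;
    t->last_ns = now_ns;
}

/* Returns true if this packet may still be dequeued at elevated priority;
 * false means it competes with ordinary best-effort traffic instead. */
static bool tier_has_priority(struct tier *t, uint32_t pkt_len, uint64_t now_ns)
{
    tier_refill(t, now_ns);
    if (t->tokens < pkt_len)
        return false;
    t->tokens -= pkt_len;
    return true;
}

The point being: overloading the tier only costs that tier its priority,
not the rest of the link its capacity.
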
>
> >> So, if Cake gets deployed widely, an incentive for applications to
> correctly mark their traffic will emerge.
> >
> >       For which value of “correct” exactly?
>
> RFC-compliant, obviously.
>
> There are a few very well-established DSCPs which mean “minimise latency”
> (TOS4, EF) or “yield priority” (CS1).  The default configuration recognises
> those and treats them accordingly.
>
> >> But almost no program uses CS1 to label its data as lower priority
>
> See chicken-and-egg argument above.  There are signs that CS1 is in fact
> being used in its modern sense; indeed, while downloading the latest Star
> Citizen test version the other day, 46MB of data ended up in CS1.  Star
> Citizen uses libtorrent, as I suspect do several other prominent games, so
> adding CS1 support there would probably increase coverage quite quickly.
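
As a side note for anyone wiring this into an application such as a
libtorrent-based client: marking a socket's traffic CS1 is only a couple of
lines with the standard sockets API. A minimal sketch (error handling
omitted; the helper names are mine, not libtorrent's):

#include <netinet/in.h>
#include <sys/socket.h>

/* DSCP values sit in the upper six bits of the old TOS / Traffic Class
 * byte.  CS1 (8) is the conventional "background / yield priority"
 * codepoint; EF (46) would be the "minimise latency" one. */
#define DSCP_CS1 8

/* Mark an IPv4 socket's traffic as background so an AQM like Cake can
 * deprioritise the transfer. */
static int mark_background(int fd)
{
    int tos = DSCP_CS1 << 2;            /* shift DSCP into the TOS byte */

    return setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
}

/* IPv6 sockets carry the same DSCP in the Traffic Class field. */
static int mark_background6(int fd)
{
    int tclass = DSCP_CS1 << 2;

    return setsockopt(fd, IPPROTO_IPV6, IPV6_TCLASS, &tclass, sizeof(tclass));
}

With the default configuration described above, Cake would then file that
traffic into its background tin without any action on the user's side.
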
>
> >> Cake also assumes in general that the number of flows on the link at
> any given instant is not too large - a few hundred is acceptable.
> >
> >       I assume there is a build-time parameter that will cater to a
> specific set of flows; would recompiling with a higher value for that
> constant allow one to tailor cake for environments with a larger number
> of concurrent flows?
>
> There is a compile-time constant in the code which could, in principle, be
> exposed to the kernel configuration system.  Increasing the queue count
> from 1K to 32K would allow “several hundred” to be replaced with “about ten
> thousand”.  That’s still not backbone-grade, but might be useful for a very
> small ISP to manage its backhaul, such as an apartment complex FTTP
> installation or a village initiative.
>
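
For a bit of context on what that compile-time constant controls: the flow
isolation comes from hashing each flow's 5-tuple into a fixed array of
per-flow queues, so the number of concurrent flows that can be kept apart
scales with the queue count. A simplified sketch of the idea (the names are
hypothetical, not the actual sch_cake symbols):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical compile-time knob standing in for Cake's fixed per-tin
 * queue count; this is the constant that could in principle be raised
 * for backhaul-scale links.  Must be a power of two. */
#ifndef NUM_QUEUES
#define NUM_QUEUES 1024
#endif

struct flow_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

/* FNV-1a over the individual fields (the kernel uses Jenkins hashing;
 * any decent mixing function shows the idea). */
static uint32_t fnv1a(uint32_t h, const void *data, size_t len)
{
    const uint8_t *p = data;

    while (len--)
        h = (h ^ *p++) * 16777619u;
    return h;
}

static uint32_t flow_hash(const struct flow_key *k)
{
    uint32_t h = 2166136261u;

    h = fnv1a(h, &k->src_ip,   sizeof(k->src_ip));
    h = fnv1a(h, &k->dst_ip,   sizeof(k->dst_ip));
    h = fnv1a(h, &k->src_port, sizeof(k->src_port));
    h = fnv1a(h, &k->dst_port, sizeof(k->dst_port));
    h = fnv1a(h, &k->proto,    sizeof(k->proto));
    return h;
}

/* Each flow lands in one of NUM_QUEUES per-flow queues.  Unrelated flows
 * that hash to the same slot share fate, which is why the comfortable
 * flow count scales with the queue count. */
static unsigned flow_queue(const struct flow_key *k)
{
    return flow_hash(k) & (NUM_QUEUES - 1);
}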

A few years back, when reading about fq_Codel and Cake, one of the research
articles I came across looked at how many flows are actually in a buffer at
any given time. They examined the buffers of backbone links from 155Mb/s to
10Gb/s and got the same numbers every time: while those links may be
servicing hundreds of thousands of active flows, at any given instant there
were fewer than 200 flows in the buffer. Nearly all of those flows had
exactly one packet in the buffer, and only on the order of 10 flows had two
or more packets queued.

You could say the buffer follows the 80/20 rule: 20% of the flows in the
buffer account for 80% of the buffered data. Regardless, the total number
of flows in the buffer is almost fixed. What was also interesting is that
the flows consuming the majority of the buffer were always in flux. You
would think the same few flows that were consuming the buffer at one moment
would continue to do so, but that is not the case; TCP keeps them
alternating.

When all is said and done, assuming your link is not horribly
buffer-bloated, and it shouldn't be in this discussion because we're
talking about fq_Codel/Cake, there will probably be very little reason to
ever have 32k buckets. That goes for Cake especially, since its "Ways" make
hash collisions between flows rare in the first place.
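
For anyone who hasn't run into the "Ways": Cake's flow hash is
set-associative, so a flow whose primary queue is already occupied by
another flow can be steered into a free neighbouring queue in the same
small set instead of sharing it. A rough sketch of that lookup, simplified
from the real thing (SET_WAYS and the tag bookkeeping here are
illustrative, not the actual sch_cake code):

#include <stdbool.h>
#include <stdint.h>

#define NUM_QUEUES 1024
#define SET_WAYS   8          /* queues probed per hash set */

struct queue {
    uint32_t tag;             /* flow hash currently owning this queue */
    bool     busy;            /* cleared again when the queue drains */
};

static struct queue queues[NUM_QUEUES];

/* Map a flow hash to a queue: reuse a slot in the hashed set already
 * tagged with this flow, otherwise claim a free way in the set, and only
 * fall back to sharing a queue when the entire set is occupied. */
static unsigned find_queue(uint32_t hash)
{
    unsigned base = (hash & (NUM_QUEUES - 1)) & ~(unsigned)(SET_WAYS - 1);
    unsigned i;

    for (i = 0; i < SET_WAYS; i++)        /* already have a queue? */
        if (queues[base + i].busy && queues[base + i].tag == hash)
            return base + i;

    for (i = 0; i < SET_WAYS; i++)        /* otherwise take a free way */
        if (!queues[base + i].busy) {
            queues[base + i].busy = true;
            queues[base + i].tag  = hash;
            return base + i;
        }

    return base + (hash % SET_WAYS);      /* set full: accept a collision */
}

With fewer than ~200 flows actually queued at once, as above, the chance of
more than eight of them landing in the same set of a 1024-queue table is
very small, which is why 32k buckets looks like overkill.
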


>
>  - Jonathan Morton
>