[Cake] upstreaming cake in 2017?

Benjamin Cronce bcronce at gmail.com
Sat Dec 24 16:15:11 EST 2016


On Sat, Dec 24, 2016 at 11:22 AM, Jonathan Morton <chromatix99 at gmail.com>
wrote:

>
> > On 24 Dec, 2016, at 17:55, Benjamin Cronce <bcronce at gmail.com> wrote:
> >
> > What was also interesting is the flows consuming the majority of the
> buffer were always in flux. You would think the same few flows that were
> consuming the buffer at one moment would continue to, but that is not the
> case, TCP keeps them alternating.
>
> That sounds like the links are not actually congested on average, and the
> flows which temporarily collect in the buffer are due to transitory bursts
> - which is what you’d expect from a competently-managed backbone.  A
> flow-isolating AQM doesn’t really help there, though Cake should be capable
> of scaling up to 10Gbps on a modern CPU.
>
> Conversely, there have been well-publicised instances of congestion at
> peering points, which have had substantial impacts on performance.  I
> imagine the relevant flow counts there would be very much higher,
> definitely in the thousands.  Even well-managed networks occasionally
> experience congestion due to exceptional loads.
>

At least in my experience, most issues at congested peering points are just
bufferbloat. You can get a 10Gb switch with 4GiB of buffer, which works out
to roughly 3.4 seconds of buffering at line rate. Nearly every time I see
someone talking about congestion, pings increase by 200ms or more, often
into the thousands of milliseconds. A "congested" link should show maybe
50ms of added latency together with an increase in packet loss, but everyone
knows how bad loss is, so the buffers get bloated instead.
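As a quick sanity check on that claim, here is the back-of-the-envelope
arithmetic using the figures above:

```python
# Worst-case drain time of a fully bloated 4 GiB buffer on a 10 Gbit/s link.

buffer_bytes = 4 * 2**30      # 4 GiB of switch buffer
link_bps = 10 * 10**9         # 10 Gbit/s line rate

drain_seconds = buffer_bytes * 8 / link_bps
print(round(drain_seconds, 2))   # ~3.44 seconds of potential queueing delay
```

So a single fully loaded buffer of that size adds multiple seconds of delay,
which matches the multi-thousand-millisecond pings people report.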

My argument is that an unbloated buffer holds very few flow states at any
given time. I would like to see some numbers from fq_codel or Cake showing
the actual number of unique flow states at any given moment.
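To make "unique flow states" concrete, here is a toy illustration of what
such a measurement could count: the number of distinct hash buckets occupied
by a snapshot of queued packets, the way fq-style schedulers bucket flows by
5-tuple. The names and bucket count are my invention, not Cake's code:

```python
# Hypothetical sketch: count distinct flow buckets in a queue snapshot.

def flow_bucket(five_tuple, num_buckets=1024):
    """Map a (src, dst, sport, dport, proto) tuple to a queue bucket."""
    return hash(five_tuple) % num_buckets

def unique_flow_states(queued_packets, num_buckets=1024):
    """Number of distinct flow buckets occupied by the queued packets."""
    return len({flow_bucket(p, num_buckets) for p in queued_packets})
```

With an unbloated buffer the snapshot holds few packets, so this count stays
small even when the number of tracked flows is much larger.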


>
> The workings of DRR++ are also somewhat more subtle than simply counting
> the flows instantaneously in the buffer.  Each queue has a deficit, and in
> Cake an empty queue is not normally released from tracking a particular
> flow until the deficit has been repaid (by cycling through all the other
> flows and probably servicing them) and decaying the AQM state to rest,
> which may often take long enough for another packet to arrive for that flow.
>
> The number of active bulk flows can therefore exceed the number of packets
> actually in the queue.  This is especially true if the AQM is working
> optimally and keeping the queue almost empty on average.
>
> While fq_codel does not explicitly assign queues to specific flows (ie. to
> avoid hash collisions), the effects of hash collisions are similarly felt
> under the same circumstances, resulting in the colliding flows failing to
> receive their theoretical fair share of the link, even if they never have
> packets physically in the queue at the same time.
>
> With that said, both fq_codel and Cake should work okay with statistical
> multiplexing to handle exceptional flow counts.  In such cases, Cake’s
> triple-isolate feature should be turned off, by selecting either “hosts” or
> “flows” modes.  I could run an analysis to show how even the multiplexing
> should be.
>
>  - Jonathan Morton
>
>
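The deficit mechanism Jonathan describes can be sketched roughly like this.
This is a plain-DRR toy, not Cake's code: DRR++ additionally keeps separate
new/old flow lists, and Cake's deferred release of empty queues is noted but
not modelled here. All names and the quantum value are my assumptions:

```python
# Toy DRR round: each queue accrues a quantum of deficit per round and may
# send packets until its deficit is exhausted. Because deficits persist
# across rounds, a flow stays "active" even while it has no packets queued,
# which is why active flows can outnumber queued packets.

from collections import deque

QUANTUM = 1514  # bytes of deficit granted per round (assumed, MTU-sized)

class FlowQueue:
    def __init__(self):
        self.packets = deque()   # queued packet sizes in bytes
        self.deficit = 0

def drr_round(queues):
    """Serve one DRR round; return list of (flow_id, pkt_size) dequeued."""
    served = []
    for fid, q in queues.items():
        q.deficit += QUANTUM
        while q.packets and q.packets[0] <= q.deficit:
            size = q.packets.popleft()
            q.deficit -= size
            served.append((fid, size))
    return served
```

For example, a flow with two 1000-byte packets sends only one per round with
this quantum, carrying the leftover deficit forward, while a flow with one
600-byte packet drains immediately but keeps its remaining deficit as state.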