[Cake] A few puzzling Cake results

Jonathan Morton chromatix99 at gmail.com
Wed Apr 18 11:03:26 EDT 2018


>>> So if there is one active bulk flow, we allow each flow to queue four
>>> packets. But if there are ten active bulk flows, we allow *each* flow to
>>> queue *40* packets.
>> 
>> No - because the drain rate per flow scales inversely with the number
>> of flows, we have to wait for 40 MTUs' serialisation delay to get 4
>> packets out of *each* flow.
> 
> Ah right, yes. Except it's not 40 MTUs, it's 40 quantums (as each flow
> will only dequeue a packet each MTU/quantum rounds of the scheduler). 

The maximum quantum in Cake is equal to the MTU, and obviously you can't increase the drain rate by decreasing the quantum below the packet size.
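To put rough numbers on that, here's a back-of-the-envelope sketch in Python (the 1500-byte MTU and 1 Mbit/s shaped rate are assumed purely for illustration; nothing here is lifted from the Cake sources):

    # With quantum == MTU, one DRR round serialises roughly one full-size
    # packet per active bulk flow, so draining K packets from every one of
    # N flows costs about N*K MTU serialisation times at the shaped rate.
    def drain_delay_s(flows, pkts_per_flow, mtu_bytes=1500, link_bps=1e6):
        return flows * pkts_per_flow * mtu_bytes * 8 / link_bps

    print(drain_delay_s(1, 4))    # ~0.048 s: one bulk flow, 4 packets queued
    print(drain_delay_s(10, 4))   # ~0.48 s:  ten bulk flows, 4 packets each

That last figure is just the 40-MTU serialisation delay from above, expressed in seconds at the assumed rate.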

>> Without that, we can end up with very high drop rates which, in
>> ingress mode, don't actually improve congestion on the bottleneck link
>> because TCP can't reduce its window below 4 MTUs, and it's having to
>> retransmit all the lost packets as well.  That loses us a lot of
>> goodput for no good reason.
> 
> I can sorta, maybe, see the point of not dropping packets that won't
> cause the flow to decrease its rate *in ingress mode*. But this is also
> enabled in egress mode, where it doesn't make sense.

I couldn't think of a good reason to switch it off in egress mode.  That would improve a metric that few people care about or can even measure, while severely increasing packet loss and retransmissions in some situations, which is something that people *do* care about and measure.

> Also, the minimum TCP window is two packets including those that are in
> flight but not yet queued; so allowing four packets at the bottleneck is
> way excessive.

You can only hold the effective congestion window in NewReno down to 2 packets if you have a 33% AQM signalling rate (dropping one packet per RTT), which is hellaciously high if the hosts aren't using ECN.  If they *are* using ECN, then goodput in ingress mode doesn't depend inversely on signalling rate anyway, so it doesn't matter.  At 4 packets, the required signalling rate is still pretty high (1 packet per 3 RTTs, if it really does go down to 2 MTUs meanwhile) but a lot more manageable - in particular, it's comfortably within the margin required by ingress mode - and gets a lot more goodput through.
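If you want to check that arithmetic, a deliberately crude sawtooth model is enough: the window grows by one MSS per RTT, the packet that exceeds the per-flow allowance gets dropped, and the window then halves but never goes below 2 packets.  This is only an illustration of the reasoning above, not anything taken from the Cake code:

    # Crude NewReno sawtooth against a fixed per-flow packet allowance.
    def signalling_rate(allowance_pkts, floor_pkts=2):
        rtts = allowance_pkts - floor_pkts + 1                  # RTTs per AQM signal
        pkts = sum(range(floor_pkts, allowance_pkts + 1)) + 1   # packets per cycle, incl. the dropped one
        return rtts, pkts, 1 / pkts

    print(signalling_rate(2))   # (1, 3, 0.33...) - one drop per RTT, ~33% loss
    print(signalling_rate(4))   # (3, 10, 0.1)    - one drop per 3 RTTs, ~10% loss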

We did actually measure the effect this had in a low-inherent-latency, low-bandwidth environment.  Goodput went up significantly, and peak inter-flow latency went *down* due to upstream queuing effects.

>> So I do accept the increase in intra-flow latency when the flow count
>> grows beyond the link's capacity to cope.
> 
> TCP will always increase its bandwidth above the link's capacity to
> cope. That's what TCP does.
> 
>> It helps us keep the inter-flow induced latency low
> 
> What does this change have to do with inter-flow latency?
> 
>> while maintaining bulk goodput, which is more important.
> 
> No, it isn't! Accepting a factor of four increase in latency to gain a
> few percents' goodput in an edge case is how we got into this whole
> bufferbloat mess in the first place...

Perhaps a poor choice of wording; I consider *inter-flow latency* to be the most important factor.  But users also consider goodput relative to link capacity to be important, especially on slow links.  Intra-flow latency, by contrast, is practically invisible except for traffic types that are usually sparse.

As I noted during the thread Kevin linked, Dave originally asserted that the AQM target should *not* depend on the flow count, but rather that the total number of packets in the queue should be held constant.  I found that assertion had to be challenged once cases emerged where it was clearly detrimental.  So now I assert the opposite: that the queue must be capable of accepting a minimum number of packets *per flow*, and not just transiently, provided the link's inherent latency is not greater than what corresponds to the optimal BDP for TCP.
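In code terms the idea is roughly this (a sketch only, not the actual sch_cake logic; the names and the 4-packet threshold are mine, for illustration):

    MIN_FLOW_BACKLOG_PKTS = 4   # assumed per-flow allowance

    def aqm_may_signal(aqm_verdict, flow_backlog_pkts):
        # However aggressive the AQM wants to be, never signal a flow
        # that is holding no more than its minimum allowance of packets.
        if flow_backlog_pkts <= MIN_FLOW_BACKLOG_PKTS:
            return False
        return aqm_verdict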

This tweak has zero theoretical effect on inter-flow latency (which is guaranteed by the DRR++ scheme, not the AQM), but it can improve goodput and reduce sender load at the expense of intra-flow latency.  The practical effect on inter-flow latency can actually be positive in some scenarios.
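For clarity, the inter-flow isolation comes from the round-robin itself, which in heavily simplified form (my own names, not the kernel code) looks something like this:

    from collections import deque

    class Flow:
        def __init__(self):
            self.queue = deque()   # packets (byte strings) waiting in this flow
            self.deficit = 0       # DRR byte credit

    # Sparse (recently idle) flows are served ahead of bulk flows, and a
    # bulk flow only earns one quantum of credit per round, so one flow's
    # standing backlog cannot hold up other flows' packets - which is why
    # the AQM tweak above leaves inter-flow latency alone in theory.
    def dequeue(sparse, bulk, quantum=1514):
        for flows in (sparse, bulk):
            while flows:
                flow = flows[0]
                if not flow.queue:
                    flows.popleft()            # flow went empty; leave the rotation
                    continue
                if flow.deficit <= 0:
                    flow.deficit += quantum    # earn one quantum, go to the back
                    flows.rotate(-1)
                    continue
                pkt = flow.queue.popleft()
                flow.deficit -= len(pkt)
                return pkt
        return None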

Feel free to measure.  Just be aware of what this is designed to handle.

And obviously I need to write about this in the paper...

 - Jonathan Morton


