[Cake] A few puzzling Cake results
toke at toke.dk
Wed Apr 18 10:30:05 EDT 2018
Jonathan Morton <chromatix99 at gmail.com> writes:
>> On 18 Apr, 2018, at 2:25 pm, Toke Høiland-Jørgensen <toke at toke.dk> wrote:
>> So if there is one active bulk flow, we allow each flow to queue four
>> packets. But if there are ten active bulk flows, we allow *each* flow to
>> queue *40* packets.
> No - because the drain rate per flow scales inversely with the number
> of flows, we have to wait for 40 MTUs' serialisation delay to get 4
> packets out of *each* flow.
Ah right, yes. Except it's not 40 MTUs, it's 40 quanta (as each flow
will only dequeue a packet every MTU/quantum rounds of the scheduler).
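The scaling being discussed can be sketched numerically. This is a simplified model, not cake's actual code: it assumes an MTU-sized quantum, the 4-packet per-flow floor under discussion, and a hypothetical 10 Mbit/s bottleneck, and computes how long a round-robin scheduler takes to drain every flow's 4-packet backlog.

```python
# Simplified sketch (not sch_cake itself): with a per-flow floor of
# 4 queued packets and N active bulk flows served round-robin, draining
# the 4-packet backlog of every flow takes 4*N quantum-sized sends.
QUANTUM = 1514           # bytes per DRR round per flow (assumed MTU-sized)
FLOOR_PKTS = 4           # per-flow queue floor being debated
LINK_RATE = 10e6 / 8     # bytes/s on an assumed 10 Mbit/s bottleneck

def intra_flow_delay(n_flows):
    """Seconds to serialise the 4-packet backlog of every active flow."""
    return FLOOR_PKTS * n_flows * QUANTUM / LINK_RATE

for n in (1, 10):
    print(f"{n:2d} flows: {intra_flow_delay(n) * 1000:.1f} ms")
# 1 flow:  ~4.8 ms; 10 flows: ~48.4 ms -- the delay a single flow's
# packets wait grows linearly with the number of competing flows.
```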
> Without that, we can end up with very high drop rates which, in
> ingress mode, don't actually improve congestion on the bottleneck link
> because TCP can't reduce its window below 4 MTUs, and it's having to
> retransmit all the lost packets as well. That loses us a lot of
> goodput for no good reason.
I can sorta, maybe, see the point of not dropping packets that won't
cause the flow to decrease its rate *in ingress mode*. But this is also
enabled in egress mode, where it doesn't make sense.
Also, the minimum TCP window is two packets, including those that are in
flight but not yet queued; so allowing four packets at the bottleneck is
more than the flow strictly needs to keep making progress.
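The window-floor claim above can be illustrated with a toy model. This is an assumption-laden sketch of standard multiplicative decrease with the two-segment floor (as in RFC 5681), not any particular TCP implementation: it shows that a sender can keep shrinking its window well below four packets, so drops at a four-packet queue can still elicit a response.

```python
# Toy model of TCP multiplicative decrease with the 2-segment floor
# (assumed behaviour per RFC 5681; not a real TCP implementation).
MIN_CWND = 2  # segments: the minimum congestion window

def on_congestion(cwnd):
    """Halve the window on a congestion signal, but not below the floor."""
    return max(cwnd // 2, MIN_CWND)

cwnd = 16
while cwnd > MIN_CWND:
    cwnd = on_congestion(cwnd)
print(cwnd)  # reaches 2, i.e. below the 4-packet per-flow floor
```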
> So I do accept the increase in intra-flow latency when the flow count
> grows beyond the link's capacity to cope.
TCP will always increase its bandwidth above the link's capacity to
cope. That's what TCP does.
> It helps us keep the inter-flow induced latency low
What does this change have to do with inter-flow latency?
> while maintaining bulk goodput, which is more important.
No, it isn't! Accepting a factor-of-four increase in latency to gain a
few percent of goodput in an edge case is how we got into this whole
bufferbloat mess in the first place...