[Cake] perhaps pursuing sqrt(flows) one day?

Jonathan Morton chromatix99 at gmail.com
Wed Jun 3 11:47:13 EDT 2015


It's important to draw the right conclusions from a paper like that. They
ran their experiments using Reno and single drop-tail queues. As we know,
FQ and AQM both change those dynamics considerably. I'm also suspicious of
any approach which asks me to take the number of flows as an a priori
constant; we always need to handle the single flow case unless we can prove
otherwise.

It is, however, encouraging to see that the "imperfect" behaviour of a real
network (versus a virtually-clocked simulator) removes the main argument
against pacing, namely synchronisation.  It is also encouraging that the
primary throughput benefits appear in the early phases of a connection,
which is where we currently see the biggest bursts (from slow start) and
the biggest queues under AQM.

They do conclude that a queue sized to BDP/sqrt(flows) is adequate. The
corollary, which is probably more practically valid, is that N^2 flows are
required to achieve maximum sustained throughput, given Reno and a dumb
buffer sized to BDP/N.

For FQ, that total divides across the flows' queues, giving BDP*(flows^-1.5)
per queue. Since throughput is also divided evenly between flows, dividing
that per-queue backlog by the per-flow rate gives a sojourn time of
RTT/sqrt(flows). Since I now track the number of active flows directly in
cake, it is possible to calculate this correction factor to the Codel
parameters whenever that number changes, using the existing inverse square
root code (and, for small numbers of flows, the cache).
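
As a rough illustration only, such a correction might look something like
the sketch below. The names (inv_sqrt_cached, corrected_target_us), the
choice of scaling the target, and the use of floating point are my own
assumptions for readability; cake's real inverse-square-root code is
fixed-point kernel code, and none of this is taken from its sources.

    #include <math.h>
    #include <stdint.h>

    #define INV_SQRT_CACHE 16

    /* stand-in for the existing inverse-square-root code plus its
     * small-N cache; returns 1/sqrt(flows) */
    static double inv_sqrt_cached(unsigned int flows)
    {
            static double cache[INV_SQRT_CACHE];

            if (flows == 0)
                    flows = 1;  /* always handle the single-flow case */
            if (flows < INV_SQRT_CACHE) {
                    if (cache[flows] == 0.0)
                            cache[flows] = 1.0 / sqrt((double)flows);
                    return cache[flows];
            }
            return 1.0 / sqrt((double)flows);
    }

    /* rescale the Codel target when the active-flow count changes, so the
     * per-queue sojourn target tracks RTT/sqrt(flows) */
    static uint32_t corrected_target_us(uint32_t base_target_us,
                                        unsigned int flows)
    {
            return (uint32_t)(base_target_us * inv_sqrt_cached(flows));
    }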

Before I actually try that, however, I'd like to know whether the
"fishcake" iteration of cake works better in the cases you were concerned
about.

- Jonathan Morton