So as promised, I ran a bunch of tests with different variants of the MTU
minimum queue occupancy scaling feature that we were discussing. I added
instrumentation so the scaling can be turned on and off and set at various
levels (see the test-instrumentation branch). I then ran tests with the
minimum queue level (i.e., where the AQM turns off) set to, respectively:

- 1 MTU
- 2 MTU
- 2 MTU x number of flows
- 4 MTU
- 4 MTU x number of flows

The "x number of flows" tests are referred to as "scaling" in the dataset.

I ran the RRUL test and 32-flow TCP download and upload tests on a setup
where Cake was installed before the bottleneck link in the upstream
direction, and on an IFB in 'ingress' mode for the downstream direction.
The other side of the bottleneck link was a TBF with a 1000-packet FIFO at
the same bandwidth setting. Cake had Ethernet overhead compensation turned
on, which kept the bottleneck under control (as you can see from the
latency results).

Full dataset is here:
https://kau.toke.dk/experiments/cake/cake-mtuscale.tar.gz

Takeaways (see attached plots):

- The MTU scaling does indeed give a nice benefit in egress mode (see the
  "tcp-download-totals" plot): from just over 6 Mbps to just over 8 Mbps of
  goodput on the 10 Mbit link. There is not a large difference between
  2 MTU and 4 MTU, except that 4 MTU hurts inter-flow latency somewhat.

- The effect for upload flows (where Cake is before the bottleneck;
  10mbit-upload.png) is negligible.

- The MTU scaling really hurts TCP RTT (intra-flow latency;
  tcp-upload-tcprtt-10mbit.png and rrul-tcprtt.png).

- For bidirectional traffic the combined effect is also negligible.

Based on all this, I propose we change the scaling mechanism so that it is
only active in egress mode, and change it from 4 MTUs to 2. I'll merge
Kevin's patch to do this unless someone complains loudly :)

If you want me to run other tests, let me know.

-Toke
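
P.S. For reference, the shaper setup was along these lines. These are
illustrative tc commands only, not the exact scripts I used; interface
names, the matchall filter and the TBF burst/limit values are guesses, and
the 1000-packet FIFO is approximated as a byte limit of 1000 full-size
frames:

  # Test box: Cake before the bottleneck on egress, plus an IFB for ingress
  tc qdisc replace dev eth0 root cake bandwidth 10Mbit ethernet
  ip link add ifb0 type ifb
  ip link set dev ifb0 up
  tc qdisc add dev eth0 handle ffff: ingress
  tc filter add dev eth0 parent ffff: matchall \
     action mirred egress redirect dev ifb0
  tc qdisc replace dev ifb0 root cake bandwidth 10Mbit ethernet ingress

  # Other side of the bottleneck link: TBF with a ~1000-packet FIFO
  tc qdisc replace dev eth1 root tbf rate 10Mbit burst 5000 limit 1514000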