As it happens, I'm primarily trying to get to the root causes of babeld
going flaky on me - my main purpose was to test babeld 1.8.3 while under
load - but I have a few observations in this dataset.

It is WAY easier if you just download the tarball and use flent to look
at the various graphs. So:

wget http://flent-fremont.bufferbloat.net/~d/babel-ecn.tgz

or browse:

http://flent-fremont.bufferbloat.net/~d/babel-ecn/

I started writing things up, with pictures and so on, and that started to
get long, and I want to finish blowing babel up (if I can) before I can
safely deploy it, so...

The test was client <-> 100Mbit bottleneck <-> server, with ecn on or off,
on cubic tcp. I ran most of them at a higher sampling resolution (-s .02).
See the included test script for more details; a rough sketch of the
invocation and the qdisc setups is appended at the bottom of this mail.
Running at this resolution required many minutes and 7GB of memory to
post-process the files.

I need to go fix tc-iterate so I can get queue depths again. I'd also like
to add irtt support to tcp_nup - I'm A) mostly interested in making babel
fail, and B) the tcp flows. I did collect a ton of interesting statistics
on the tcp flows, the tcp rtt CDFs in particular. The files with no qdisc
mentioned are cake.

0) I did the fq_codel_fast tests using memlimit 4M (the openwrt default,
so far as I know?).

1) At this workload, pie's ecn support gets disabled almost completely -
the drop probability cracks 10% and the marking falls away... something
like 370 packets marked vs 49000 dropped.

* You can see cake doing quite well vis a vis fq_codel, but you can also
see that the RTT is inflated with ecn on vs off in either case.

* You can see the effects of hash collisions on codel - one test had a
major hash collision on the measurement flow.

* Probably the most interesting result (and one reason why I started
fiddling with the bulk dropper code in the first place) was cake vs
fq_codel_fast, attached.

But: DO suck down these files and peruse for yourself.

--
Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
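
For those who haven't run flent before, a rough sketch of the flavor of
invocation involved - the real script is in the tarball, and the hostname,
test length, and title below are placeholders, not the actual runs:

  # cubic, with ecn toggled on the endpoints (0 = off, 1 = request ecn)
  sysctl -w net.ipv4.tcp_congestion_control=cubic
  sysctl -w net.ipv4.tcp_ecn=1

  # tcp upload test at 20ms sample resolution, with tcp socket stats
  # (that's where the tcp rtt CDFs come from)
  flent tcp_nup -H server.example.net -l 60 -s 0.02 --socket-stats \
        -t "cake-ecn1"

  # then browse the resulting data files interactively
  flent-gui *.flent.gz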
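
And roughly the flavor of qdisc setups being compared at the bottleneck -
not the literal test configuration; the interface name is a placeholder,
and stock fq_codel syntax is shown here since fq_codel_fast is out of
tree:

  # cake (the runs with no qdisc mentioned), shaped to the bottleneck rate
  tc qdisc replace dev eth0 root cake bandwidth 100Mbit
  # fq_codel with the 4M memory limit mentioned above
  tc qdisc replace dev eth0 root fq_codel memory_limit 4mb
  # pie with ecn marking enabled
  tc qdisc replace dev eth0 root pie ecn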