On Mon, Aug 4, 2014 at 8:44 AM, Baptiste Jonglez wrote:

> It's not very new, but I don't remember seeing this discussed here:
>
> http://web.mit.edu/remy/

I don't remember if it was discussed here or not. It certainly was a hit at ICCRG at the IETF before last.

> It mentions Bufferbloat and CoDel (look at the FAQ).

Yes, they did some of their experiments against the ns2 model of sfq-codel, used as a reference for the best of the then-available AQM and FQ technologies. They also compared XCP as an example of a high-performing TCP alternative.

They have a new paper out for the upcoming SIGCOMM:

http://web.mit.edu/keithw/www/Learnability-SIGCOMM2014.pdf

And a bit of discussion here:

https://news.ycombinator.com/item?id=8129115

I do not doubt that computer-assisted methods such as Remy will one day supplant or replace humans in many aspects of the CS field and in hardware design. I well remember the state of the mp3 research back in the 80s, when it took a day to crunch down a rock song into something replete with dolphin warbles, and how that eventually turned out.

I adore the Remy work, the presentations, and the graphs (Toke adopted the inverted latency/bandwidth ellipse plots in netperf-wrapper, in fact). The tools developed so far, like delayshell and cellshell, are valuable and interesting (there's at least one other tool, whose name I forget, that was also way cool), and it's great that they supply so much source code. I also love the omniscient concept, which gives all of us something to aim for: 0 latency and 100% throughput. The work seethes with long-term potential.

But.

My biggest problem with all the work so far is that it starts with a constant baseline RTT of 150ms or 100ms, tries various algorithms over a wide range of bandwidths, and then elides that base RTT in all subsequent plots. Unless you read carefully, you don't see that...

The original bufferbloat experiment was at a physical RTT of 10ms, with bidirectional traffic flooding the queues in both directions, on an asymmetric network. I keep waiting for more experimenters to try that as a baseline experiment.

Nearly all the work here has been focused on reducing the induced queuing delay on RTTs that are basically bounded by the speed of light, CPU scheduling delay, and the MAC acquisition time. They don't start at an arbitrary value of 100+ms! In the real world, the fattest flows are usually located in nearly the same datacenter, and it's well known that short-RTT flows can starve longer-RTT flows.

My selection of physical RTTs to experiment with is thus <100us (ethernet), 4ms (google fiber), 10ms, 18ms (FIOS), and 38ms (cable)[1], against a variety of bandwidths ranging from gigE down to 384kbit - and the hope is generally to cut the induced queuing delay down to the physical RTT plus a tiny bit.

In the real world we have a potential range of RTTs starting at nearly 0ms and going up to potentially 100s of ms - a difference of many orders of magnitude compared to the Remy work, which treats RTT as a nearly fixed variable. And I (and many others) have spent a great deal of time trying to shave an additional fraction of a ms off the results we already get, at these real-world RTTs and bandwidths!
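To make the scale of that range concrete, here's a quick back-of-the-envelope sketch (mine, not from the Remy papers) that computes the bandwidth-delay product across roughly that grid of RTTs and rates. The BDP is the amount of data in flight at full utilization; a standing queue much bigger than that is where the bloat lives, and the table this prints spans about seven orders of magnitude:

    # Sketch: bandwidth-delay product over the RTT/bandwidth grid above.
    # RTT and rate values are the ones mentioned in this mail.

    RTTS_MS = {"ethernet": 0.1, "google fiber": 4.0, "baseline": 10.0,
               "FIOS": 18.0, "cable": 38.0, "remy-style": 100.0}
    RATES_BPS = {"384kbit": 384_000, "10Mbit": 10_000_000,
                 "100Mbit": 100_000_000, "gigE": 1_000_000_000}

    for link, rtt_ms in RTTS_MS.items():
        for name, bps in RATES_BPS.items():
            bdp = bps / 8 * rtt_ms / 1000  # bytes in flight at 100% utilization
            print(f"{link:>12} @ {name:>7}: BDP = {bdp:>12,.0f} bytes")

Anything trained or tuned at the 100ms corner of that table is a long, long way from the <100us corner.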
I do expect good things once Remy is turned loose, all 80 CPUs for 5 days, on these real-world problems. But results as spectacular as they achieve in simulation, at their really long RTTs, probably won't look as impressively "better" against what we already achieve with running, increasingly deployed code, at real-world physical (and often varying) RTTs.

[1] Much of the Remy work is looking at trying to "fix" the LTE problem with e2e congestion control. I don't know why measured RTTs are so poor in that world; the speed of light is pretty constant everywhere. Certainly wifi has a <2ms inherent physical RTT....

--
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article