<div dir="ltr">Possibly a better simulation environment than netem would be ns-3's NSC (network simulation cradle), which lets you connect multiple VMs over an emulated network in userspace... obviously, you'd better have a multicore system with plenty of resources available, but it works very nicely and needs no physical network at all. ns-3 virtual network nodes also speak real protocols, so you can talk to them with real tools as well (netcat to an ns-3 virtual node, for example, or ping it). I suppose it would also be possible to bridge one of the TAP devices ns-3 is talking on with a real interface.</div>
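For that last point, the usual recipe (along the lines of ns-3's tap-bridge documentation) is to put the TAP device and the physical NIC on the same Linux bridge. A sketch, where the device names (tap0, eth0, br0) are placeholders for illustration:

```shell
# Create a TAP device for ns-3's TapBridge to attach to
ip tuntap add dev tap0 mode tap
ip link set tap0 promisc on up

# Bridge the TAP device with the real interface so simulated
# nodes can reach hosts on the physical LAN
brctl addbr br0
brctl addif br0 tap0
brctl addif br0 eth0
ip link set br0 up

# Then run the ns-3 scenario with its TapBridge in "UseBridge"
# mode attached to tap0
```

This needs root and a real NIC to try, so treat it as a starting point rather than a tested recipe.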
<div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Jul 9, 2013 at 4:32 PM, Dave Taht <span dir="ltr"><<a href="mailto:dave.taht@gmail.com" target="_blank">dave.taht@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
this really, really, really is the wrong list for this dialog. cc-ing codel<br>
<br>
On Mon, Jul 8, 2013 at 11:04 PM, Mikael Abrahamsson <<a href="mailto:swmike@swm.pp.se">swmike@swm.pp.se</a>> wrote:<br>
> On Mon, 8 Jul 2013, Toke Høiland-Jørgensen wrote:<br>
><br>
>> Did a few test runs on my setup. Here are some figures (can't go higher<br>
>> than 100mbit with the hardware I have, sorry).<br>
><br>
><br>
> Thanks, much appreciated!<br>
><br>
><br>
>> Note that I haven't done tests at 100mbit on this setup before, so can't<br>
>> say whether something weird is going on there. I'm a little bit puzzled<br>
>> as to why the flows don't seem to get going at all in one direction for<br>
>> the rrul test. I'm guessing it has something to do with TSQ.<br>
><br>
><br>
> For me, it shows that FQ_CODEL indeed affects TCP performance negatively for<br>
> long links, however it looks like the impact is only about 20-30%.<br>
<br>
I would be extremely reluctant to draw any conclusions from any test<br>
derived from netem at this point. (netem is a qdisc that can insert<br>
delay and loss into a stream.) I did a lot of netem testing at the<br>
beginning of the bufferbloat effort, and the results differed so<br>
much from what I'd got in the "real world" that I gave up and stuck<br>
with the real world for most of the past couple of years. There were,<br>
in particular, major problems with combining netem with any other<br>
qdisc...<br>
<br>
<a href="https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel" target="_blank">https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel</a><br>
<br>
One of the simplest problems with netem is that by default it delays<br>
all packets, including things like ARP and ND, which are kind of<br>
needed on Ethernet...<br>
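To make that concrete, here is a sketch (device names and delay values are illustrative, not recommendations): hanging netem off the root delays every frame, while attaching it under prio and steering only IP traffic into it with a u32 filter lets ARP and ND bypass the delay:

```shell
# Naive setup: delays EVERYTHING leaving eth0, ARP and ND included
tc qdisc add dev eth0 root netem delay 100ms

# Workaround (delete the naive qdisc first: tc qdisc del dev eth0 root):
# prio at the root, netem only on band 3
tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 100ms

# Steer all IPv4 traffic into the delayed band; ARP doesn't match
# "protocol ip", so it falls through the priomap to undelayed bands
tc filter add dev eth0 parent 1:0 protocol ip u32 \
    match ip dst 0.0.0.0/0 flowid 1:3
```

The other common fix is simply to run netem on a middlebox between the endpoints rather than on the host under test.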
<br>
That said, now that more problems are understood, Toke and I, and<br>
maybe Matt Mathis, are trying to take it on...<br>
<br>
The simulated results with ns2 codel were very good in the 2ms-300ms<br>
range, but that's not the version of codel in Linux. It worked well<br>
up to about 1 sec, actually, but fell off after that. I have a set of<br>
more ns2-like patches for the ns2 model in cerowrt and as part of my<br>
3.10 builds that I should release as a deb soon.<br>
<br>
Recently a few major bugs in htb have come to light and been fixed in<br>
the 3.10 series.<br>
<br>
There have also been so many changes to the TCP stack that I'd<br>
distrust comparisons of TCP results between any given pair of kernel<br>
versions. The TSQ addition is not well understood, and I think, but<br>
am not sure, that its limit is both too big for low bandwidths and<br>
not big enough for higher ones...<br>
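For anyone who wants to experiment with that themselves: the TSQ per-socket limit is exposed via the tcp_limit_output_bytes sysctl. The values below are illustrative experiments in the two directions described above, not recommendations:

```shell
# Current TSQ limit in bytes (131072 is the 3.x-era default)
cat /proc/sys/net/ipv4/tcp_limit_output_bytes

# Try a larger limit for long/fat paths...
sysctl -w net.ipv4.tcp_limit_output_bytes=262144

# ...or a smaller one for low-bandwidth links
sysctl -w net.ipv4.tcp_limit_output_bytes=65536
```

Either change takes effect for new transmissions immediately, so it's easy to A/B against a netperf or rrul run.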
<br>
And... unlike in the past, when TCP was being optimized for<br>
supercomputer-center-to-supercomputer-center transfers, the vast<br>
majority of TCP-related work is now coming out of Google, which is<br>
optimizing for short transfers over short RTTs.<br>
<br>
It would be nice to have access to Internet2 for more real-world testing.<br>
<br>
><br>
> What's stranger is that latency only goes up to around 230ms from its 200ms<br>
> "floor" with FIFO, I had expected a bigger increase in buffering with FIFO.<br>
<br>
TSQ, here, probably.<br>
<br>
> Have you done any TCP tuning?<br>
<br>
Not recently, aside from turning up tsq to higher defaults and lower<br>
defaults without definitive results.<br>
<br>
> Would it be easy for you to do tests with the streams that "loads up the<br>
> link" being 200ms RTT, and the realtime flows only having 30-40ms RTT,<br>
> simulating downloads from a high RTT server and doing interactive things to<br>
> a more local web server.<br>
<br>
It would be a useful workload. Higher on my list is emulating<br>
CableLabs' latest tests, which are about the same thing, only closer<br>
statistically to what a real web page might look like - except the<br>
CableLabs tests don't have the redirects or DNS lookups most web<br>
pages do.<br>
<br>
<br>
><br>
<span class="HOEnZb"><font color="#888888">><br>
> --<br>
> Mikael Abrahamsson email: <a href="mailto:swmike@swm.pp.se">swmike@swm.pp.se</a><br>
><br>
> _______________________________________________<br>
> Cerowrt-devel mailing list<br>
> <a href="mailto:Cerowrt-devel@lists.bufferbloat.net">Cerowrt-devel@lists.bufferbloat.net</a><br>
> <a href="https://lists.bufferbloat.net/listinfo/cerowrt-devel" target="_blank">https://lists.bufferbloat.net/listinfo/cerowrt-devel</a><br>
><br>
<br>
<br>
<br>
--<br>
Dave Täht<br>
<br>
Fixing bufferbloat with cerowrt: <a href="http://www.teklibre.com/cerowrt/subscribe.html" target="_blank">http://www.teklibre.com/cerowrt/subscribe.html</a><br>
_______________________________________________<br>
Codel mailing list<br>
<a href="mailto:Codel@lists.bufferbloat.net">Codel@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/codel" target="_blank">https://lists.bufferbloat.net/listinfo/codel</a><br>
</font></span></blockquote></div><br></div>