[Cerowrt-devel] happy 4th!

Dave Taht dave.taht at gmail.com
Tue Jul 9 02:32:03 EDT 2013


This really, really, really is the wrong list for this dialog; cc-ing the codel list.

On Mon, Jul 8, 2013 at 11:04 PM, Mikael Abrahamsson <swmike at swm.pp.se> wrote:
> On Mon, 8 Jul 2013, Toke Høiland-Jørgensen wrote:
>
>> Did a few test runs on my setup. Here are some figures (can't go higher
>> than 100mbit with the hardware I have, sorry).
>
>
> Thanks, much appreciated!
>
>
>> Note that I haven't done tests at 100mbit on this setup before, so can't
>> say whether something weird is going on there. I'm a little bit puzzled
>> as to why the flows don't seem to get going at all in one direction for
>> the rrul test. I'm guessing it has something to do with TSQ.
>
>
> For me, it shows that FQ_CODEL indeed affects TCP performance negatively for
> long links, however it looks like the impact is only about 20-30%.

I would be extremely reluctant to draw any conclusions from any test
derived from netem results at this point. (netem is a qdisc that can
insert delay and loss into a stream.) I did a lot of netem testing at
the beginning of the bufferbloat effort, and the results differed so
much from what I got in the "real world" that I gave up and stuck
with the real world for most of the past couple of years. There were,
in particular, major problems with combining netem with any other
qdisc...

https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel

One of the simplest problems with netem is that by default it delays
all packets, including things like ARP and ND (neighbor discovery),
which are kind of needed in ethernet...
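One common workaround (a sketch only; the interface name, delay
value, and filter are illustrative, and this steers only IPv4 - IPv6
would need its own filter) is to hang netem off one band of a prio
qdisc and classify only IP traffic into that band, so ARP bypasses
the delay:

  # put netem under band 3 of a prio qdisc on the test interface
  tc qdisc add dev eth0 root handle 1: prio
  tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 100ms
  # send all IPv4 traffic to the delayed band; ARP (and anything
  # else non-IP) falls through the priomap to an undelayed band
  tc filter add dev eth0 parent 1: protocol ip prio 3 u32 \
      match u32 0 0 flowid 1:3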

That said, now that more of the problems are understood, Toke and I,
and maybe Matt Mathis, are trying to take it on...

The simulated results with ns2 codel were very good in the 2-300ms
range, but that's not the version of codel in Linux. It actually
worked well up to about 1 sec, but fell off beyond that. I have a set
of more ns2-like codel patches (the ns2 model) in cerowrt and as part
of my 3.10 builds that I should release as a deb soon.

Recently, a few major bugs in HTB have come to light and been fixed
in the 3.10 series.

There have also been so many changes to the TCP stack that I'd
distrust comparing TCP results across kernel versions. The TSQ (TCP
Small Queues) addition is not well understood, and I think, but am
not sure, that it's both too big for low bandwidths and not big
enough for larger ones...

And... unlike in the past, when TCP was being optimized for
supercomputer-center-to-supercomputer-center transfers, the vast
majority of TCP-related work is now coming out of Google, which is
optimizing for short transfers over short RTTs.

It would be nice to have access to Internet2 for more real-world testing.

>
> What's stranger is that latency only goes up to around 230ms from its 200ms
> "floor" with FIFO, I had expected a bigger increase in buffering with FIFO.

TSQ, here, probably.

> Have you done any TCP tuning?

Not recently, aside from trying TSQ at values both above and below
its default, without definitive results.
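
For anyone wanting to repeat that experiment, TSQ is controlled by
the net.ipv4.tcp_limit_output_bytes sysctl; the values below are
purely illustrative, not recommendations:

  # read the current per-socket TSQ limit (bytes)
  sysctl net.ipv4.tcp_limit_output_bytes
  # try a smaller limit for low-bandwidth links
  sysctl -w net.ipv4.tcp_limit_output_bytes=65536
  # or a larger one for high-bandwidth, high-RTT paths
  sysctl -w net.ipv4.tcp_limit_output_bytes=262144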

> Would it be easy for you to do tests with the streams that "loads up the
> link" being 200ms RTT, and the realtime flows only having 30-40ms RTT,
> simulating downloads from a high RTT server and doing interactive things to
> a more local web server.

It would be a useful workload. Higher on my list is emulating
CableLabs' latest tests, which are about the same thing, only
statistically closer to what a real web page might look like - except
that the CableLabs tests don't have the redirects or DNS lookups most
web pages do.
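
For reference, the rrul runs discussed above are typically driven
with Toke's netperf-wrapper; roughly something like the following,
where the server name and test length are placeholders and the exact
flags may differ by version:

  # run the rrul test for 60 seconds against a netperf server
  netperf-wrapper -H netperf-server.example.org -l 60 rrul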


>
>
> --
> Mikael Abrahamsson    email: swmike at swm.pp.se
>



-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html


