[Cerowrt-devel] happy 4th!

Dave Taht dave.taht at gmail.com
Mon Jul 8 23:24:57 EDT 2013


On Mon, Jul 8, 2013 at 10:03 AM, Toke Høiland-Jørgensen <toke at toke.dk> wrote:
> Mikael Abrahamsson <swmike at swm.pp.se> writes:
>
>> I have not so far seen tests with FQ_CODEL with a simulated 100ms
>> extra latency one-way (200ms RTT). They might be out there, but I have
>> not seen them. I encourage these tests to be done.
>
> Did a few test runs on my setup. Here are some figures (can't go higher
> than 100mbit with the hardware I have, sorry).
>
> Note that I haven't done tests at 100mbit on this setup before, so can't
> say whether something weird is going on there.

It looks to me as though one direction on the path is running at
10Mbit, and the other at 100Mbit. So I think you typoed an ethtool or
netem line....
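
For reference, forcing both directions symmetric on the netem box would
look something like this (a sketch only; the interface names are
assumptions, adjust to the actual testbed):

    # pin both test NICs at 100Mbit full duplex (hypothetical eth0/eth1)
    ethtool -s eth0 speed 100 duplex full autoneg off
    ethtool -s eth1 speed 100 duplex full autoneg off

    # 100ms of one-way delay in each direction = 200ms RTT
    tc qdisc add dev eth0 root netem delay 100ms
    tc qdisc add dev eth1 root netem delay 100ms

    # double-check that nothing is mismatched
    ethtool eth0
    tc qdisc show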

Incidentally, I'd like to know whether accidental results like that are
repeatable. I'm not a big fan of asymmetric links in the first place
(a 6:1 ratio being about the worst I ever considered semi-sane), and if
behavior like this:

http://archive.tohojo.dk/bufferbloat-data/long-rtt/rrul-100mbit-pfifo_fast.png

and particularly this:

http://archive.tohojo.dk/bufferbloat-data/long-rtt/tcp_bidirectional-100mbit-pfifo_fast.png

holds up over these longer (200ms) RTT links, you are onto something.


> I'm a little bit puzzled
> as to why the flows don't seem to get going at all in one direction for
> the rrul test.

At high levels of utilization it is certainly possible to saturate the
queues so thoroughly that new flows cannot get started at all...

> I'm guessing it has something to do with TSQ.

I don't think so. Incidentally, I have been tuning TSQ way up so as to
get pre-Linux-3.6 behavior in several tests. On the other hand, the
advent of TSQ gives Linux hosts almost a pure pull-through stack. If
UDP had the same behavior, we could almost get rid of the txqueue
entirely (on hosts) and apply FQ and CoDel techniques directly at the
highest levels of the kernel stack.
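
(To be concrete: TSQ's per-socket limit is exposed as the
net.ipv4.tcp_limit_output_bytes sysctl, so "way up" means something
like the following; the target value here is illustrative only:

    # TSQ's limit defaults to 131072 bytes (128KB) as of 3.6;
    # raising it far enough approximates pre-TSQ behavior
    sysctl net.ipv4.tcp_limit_output_bytes
    sysctl -w net.ipv4.tcp_limit_output_bytes=4194304

)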

TSQ might be more effective if it were capped at (current BQL limit *
2) / (number of active flows)... that would start reducing the amount
of data flooding the TSO/GSO offloads at higher numbers of streams.
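
Since BQL already exposes its limit in sysfs, the cap I have in mind
could be computed roughly like this (illustrative only -- nothing in
the stock kernel does this today, and the device, queue, and flow
count are made up):

    # hypothetical per-flow TSQ cap: (BQL limit * 2) / active flows
    BQL=$(cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit)
    FLOWS=8    # assumed number of active flows
    echo $(( BQL * 2 / FLOWS ))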

> Attaching graphs makes the listserv bounce my mail, so instead they're
> here: http://archive.tohojo.dk/bufferbloat-data/long-rtt/ with
> throughput data below. Overall, it looks pretty good for fq_codel I'd
> say :)

One of your results for fq_codel is impossible: the 10mbit rrul upload
sum averages 11.7Mbit of throughput on a 10Mbit link (with a 323Mbit/s
max sample), which points to a measurement glitch somewhere.

>
> I can put up the data files as well if you'd like.
>
> -Toke
>
>
> Throughput data:
>
>
> 10mbit:
>
> rrul test (4 flows each way), pfifo_fast qdisc:
>  TCP download sum:
>   Data points: 299
>   Total:       375.443728 Mbits
>   Mean:        6.278323 Mbits/s
>   Median:      6.175466 Mbits/s
>   Min:         0.120000 Mbits/s
>   Max:         9.436373 Mbits/s
>   Std dev:     1.149514
>   Variance:    1.321382
> --
>  TCP upload sum:
>   Data points: 300
>   Total:       401.740454 Mbits
>   Mean:        6.695674 Mbits/s
>   Median:      6.637576 Mbits/s
>   Min:         2.122827 Mbits/s
>   Max:         16.892302 Mbits/s
>   Std dev:     1.758319
>   Variance:    3.091687
>
>
> rrul test (4 flows each way), fq_codel qdisc:
>  TCP download sum:
>   Data points: 301
>   Total:       492.824346 Mbits
>   Mean:        8.186451 Mbits/s
>   Median:      8.416901 Mbits/s
>   Min:         0.120000 Mbits/s
>   Max:         9.965051 Mbits/s
>   Std dev:     1.244959
>   Variance:    1.549924
> --
>  TCP upload sum:
>   Data points: 305
>   Total:       717.499994 Mbits
>   Mean:        11.762295 Mbits/s
>   Median:      8.630924 Mbits/s
>   Min:         2.513799 Mbits/s
>   Max:         323.180000 Mbits/s
>   Std dev:     31.056047
>   Variance:    964.478066
>
>
> TCP test (one flow each way), pfifo_fast qdisc:
>  TCP download:
>   Data points: 301
>   Total:       263.445418 Mbits
>   Mean:        4.376170 Mbits/s
>   Median:      4.797729 Mbits/s
>   Min:         0.030000 Mbits/s
>   Max:         5.757982 Mbits/s
>   Std dev:     1.135209
>   Variance:    1.288699
> ---
>  TCP upload:
>   Data points: 302
>   Total:       321.090853 Mbits
>   Mean:        5.316074 Mbits/s
>   Median:      5.090142 Mbits/s
>   Min:         0.641123 Mbits/s
>   Max:         24.390000 Mbits/s
>   Std dev:     2.126472
>   Variance:    4.521882
>
>
> TCP test (one flow each way), fq_codel qdisc:
>  TCP download:
>   Data points: 302
>   Total:       365.357123 Mbits
>   Mean:        6.048959 Mbits/s
>   Median:      6.550488 Mbits/s
>   Min:         0.030000 Mbits/s
>   Max:         9.090000 Mbits/s
>   Std dev:     1.316275
>   Variance:    1.732579
> ---
>  TCP upload:
>   Data points: 303
>   Total:       466.550695 Mbits
>   Mean:        7.698856 Mbits/s
>   Median:      6.144435 Mbits/s
>   Min:         0.641154 Mbits/s
>   Max:         127.690000 Mbits/s
>   Std dev:     12.075298
>   Variance:    145.812812
>
>
> 100mbit:
>
> rrul test (4 flows each way), pfifo_fast qdisc:
>  TCP download sum:
>   Data points: 301
>   Total:       291.718140 Mbits
>   Mean:        4.845816 Mbits/s
>   Median:      4.695355 Mbits/s
>   Min:         0.120000 Mbits/s
>   Max:         10.774475 Mbits/s
>   Std dev:     1.818852
>   Variance:    3.308222
> --
>  TCP upload sum:
>   Data points: 305
>   Total:       5468.339961 Mbits
>   Mean:        89.644917 Mbits/s
>   Median:      90.731214 Mbits/s
>   Min:         2.600000 Mbits/s
>   Max:         186.362429 Mbits/s
>   Std dev:     21.782436
>   Variance:    474.474532
>
>
> rrul test (4 flows each way), fq_codel qdisc:
>  TCP download sum:
>   Data points: 304
>   Total:       427.064699 Mbits
>   Mean:        7.024090 Mbits/s
>   Median:      7.074768 Mbits/s
>   Min:         0.150000 Mbits/s
>   Max:         17.870000 Mbits/s
>   Std dev:     2.079303
>   Variance:    4.323501
> --
>  TCP upload sum:
>   Data points: 305
>   Total:       5036.774674 Mbits
>   Mean:        82.570077 Mbits/s
>   Median:      82.782532 Mbits/s
>   Min:         2.600000 Mbits/s
>   Max:         243.990000 Mbits/s
>   Std dev:     22.566052
>   Variance:    509.226709
>
>
> TCP test (one flow each way), pfifo_fast qdisc:
>  TCP download:
>   Data points: 160
>   Total:       38.477172 Mbits
>   Mean:        1.202412 Mbits/s
>   Median:      1.205256 Mbits/s
>   Min:         0.020000 Mbits/s
>   Max:         4.012585 Mbits/s
>   Std dev:     0.728299
>   Variance:    0.530419
>  TCP upload:
>   Data points: 165
>   Total:       2595.453489 Mbits
>   Mean:        78.650106 Mbits/s
>   Median:      92.387832 Mbits/s
>   Min:         0.650000 Mbits/s
>   Max:         102.610000 Mbits/s
>   Std dev:     30.432215
>   Variance:    926.119728
>
>
>
> TCP test (one flow each way), fq_codel qdisc:
>  TCP download:
>   Data points: 301
>   Total:       396.307606 Mbits
>   Mean:        6.583183 Mbits/s
>   Median:      7.786816 Mbits/s
>   Min:         0.030000 Mbits/s
>   Max:         15.270000 Mbits/s
>   Std dev:     3.034477
>   Variance:    9.208053
>  TCP upload:
>   Data points: 302
>   Total:       4238.768131 Mbits
>   Mean:        70.178280 Mbits/s
>   Median:      74.722554 Mbits/s
>   Min:         0.650000 Mbits/s
>   Max:         91.901862 Mbits/s
>   Std dev:     17.860375
>   Variance:    318.993001
>
>



-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html


