[Cerowrt-devel] speedtest.sh script available

Dave Taht dave.taht at gmail.com
Tue Mar 25 13:09:55 EDT 2014


thx for the new test! I like having multiple tests...

I generally don't care about single stream tcp performance much, nor
do I test as much as I should
against targets on the east coast. your netperf server is about 70ms
away, so I tried testing with target 10ms
instead of 5ms with nfq_codel, still with a download setting of 30000.
It does appear as if our recomendations
for a download speed setting, when greater than 20Mbit, are off in the
wrong direction. As for why... well,
a goodly part of the problem is that on ingress the right place for
this stuff is at the ISP... but it might
make sense to go for 120ms as the codel interval in this case...
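
If you want to poke at those knobs directly rather than through the
sqm config, something like this works from the cero shell (a rough
sketch: the ifb4ge00 ingress device name and the parent classid are
guesses at what simplest.qos sets up, and fq_codel here stands in
for whichever codel variant the config selects):

# see what the sqm scripts actually installed
tc -s qdisc show dev ge00
tc -s qdisc show dev ifb4ge00

# hand-tweak the ingress codel parameters; parent 1:11 is a guess at
# the htb leaf class, adjust to whatever the show output reports
tc qdisc change dev ifb4ge00 parent 1:11 fq_codel \
        target 10ms interval 120ms ecn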

I like that netperf only eats 15% of CPU on this test when run on
cero at this speed.

Please note that I'm testing a box that has other traffic and
instruction traps...

root at comcast-gw:~# ./speedtest.sh
Testing against netperf.richb-hanover.com while pinging gstatic.com
(60 seconds in each direction)
............................................................
 Download:  23.61 Mbps
  Latency: (in msec, 61 pings, 0.00% packet loss)
      Min: 14.027
    10pct: 16.561
   Median: 20.061
      Avg: 20.345
    90pct: 24.466
      Max: 37.414
.............................................................
   Upload:  3.49 Mbps
  Latency: (in msec, 61 pings, 0.00% packet loss)
      Min: 17.331
    10pct: 18.919
   Median: 22.178
      Avg: 22.814
    90pct: 27.558
      Max: 29.111

Fiddling with fq_codel target 10ms interval 120ms

config queue 'ge00'
        option interface 'ge00'
        option script 'simplest.qos'
        option linklayer 'none'
        option enabled '1'
        option download '30000'
        option upload '4400'
        option qdisc_advanced '1'
        option ingress_ecn 'ECN'
        option egress_ecn 'ECN'
        option qdisc_really_really_advanced '1'
        option iqdisc_opts 'target 10ms interval 120ms'
        option eqdisc_opts 'target 10ms'
        option qdisc 'fq_codel'
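
For the record, the same change can be made from the shell with uci
(a sketch, assuming the stanza above lives in /etc/config/sqm and
the init script is /etc/init.d/sqm; on older cero builds both may
still be named aqm):

uci set sqm.ge00.qdisc='fq_codel'
uci set sqm.ge00.iqdisc_opts='target 10ms interval 120ms'
uci set sqm.ge00.eqdisc_opts='target 10ms'
uci commit sqm
/etc/init.d/sqm restart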

 Download:  23.58 Mbps
  Latency: (in msec, 61 pings, 0.00% packet loss)
      Min: 17.227
    10pct: 18.304
   Median: 20.610
      Avg: 21.078
    90pct: 24.135
      Max: 32.095
.............................................................
   Upload:  3.61 Mbps
  Latency: (in msec, 61 pings, 0.00% packet loss)
      Min: 18.416
    10pct: 20.169
   Median: 24.621
      Avg: 24.959
    90pct: 29.372
      Max: 35.057

We seem to basically peak out at this; even changing the target to
15ms with interval 120ms didn't crack 24Mbit down. The upload number
is about correct. I had put in some fixes to HTB in newer releases
of cero to make it use larger quantums at higher rates; I will have
to try that...
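
The quantum change, for the curious, amounts to something like this
(a sketch; the class id and the 8000 byte figure are illustrative
guesses, not what the newer scripts actually compute):

# a larger htb quantum on the ~30Mbit ingress class cuts per-packet
# scheduling overhead at higher rates, at some cost in fairness
# granularity
tc class change dev ifb4ge00 parent 1: classid 1:1 htb \
        rate 30000kbit quantum 8000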

root at comcast-gw:~# ./speedtest.sh
Testing against netperf.richb-hanover.com while pinging gstatic.com
(60 seconds in each direction)
............................................................
 Download:  23.64 Mbps
  Latency: (in msec, 61 pings, 0.00% packet loss)
      Min: 15.789
    10pct: 16.481
   Median: 19.460
      Avg: 19.589
    90pct: 22.696
      Max: 27.628
.............................................................
   Upload:  3.69 Mbps
  Latency: (in msec, 61 pings, 0.00% packet loss)
      Min: 15.846
    10pct: 18.558
   Median: 23.259
      Avg: 23.031
    90pct: 26.258
      Max: 30.311

Bumping the download setting up to 35000 (above what the link
actually delivers, so the shaper loses control of the queue) had bad
results:

root at comcast-gw:~# ./speedtest.sh
Testing against netperf.richb-hanover.com while pinging gstatic.com
(60 seconds in each direction)
............................................................
 Download:  27.89 Mbps
  Latency: (in msec, 61 pings, 0.00% packet loss)
      Min: 16.535
    10pct: 37.046
   Median: 193.440
      Avg: 179.146
    90pct: 228.437
      Max: 233.570
.............................................................
   Upload:  3.27 Mbps
  Latency: (in msec, 61 pings, 0.00% packet loss)
      Min: 16.671
    10pct: 18.093
   Median: 23.476
      Avg: 22.854
    90pct: 26.328
      Max: 27.658
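
A quick way to tell whether the local shaper or the CMTS is holding
the queue in a run like that last one (again assuming sqm's ingress
device is ifb4ge00):

# watch the ingress qdisc stats while the download is running
tc -s qdisc show dev ifb4ge00

# if backlog and drops stay near zero here while latency balloons,
# the shaper is set above what the link delivers and the queue has
# moved out to the ISP's gear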



On Tue, Mar 25, 2014 at 9:35 AM, Dave Taht <dave.taht at gmail.com> wrote:
> I suggest renaming it to something like bloattest, to avoid
> copyright and trademark issues. If you can come up with a sexier
> name, go for it... (I have had a name in hiding for a while, "lul",
> latency under load; you are welcome to it.)
>
> ahh... my old friend... awk...
>
> If you change the script to use /bin/sh it works directly on cero!
>
> I have long wanted some sort of sane test server infrastructure in
> place, tied to a domain that does geographic DNS. Are you going to
> incur any costs in doing this? Community-driven and ad-supported
> seems like the way to go, except that shell scripts don't have
> ads...
>
> I'd argue in favor of throwing out the first 25 seconds of the
> test, if you aren't already, due to speedboost.
>
> 1) This is a box connected directly to the internet:
>
> cero2 at snapon:~/t$ ./speedtest.sh
> Testing against netperf.richb-hanover.com while pinging gstatic.com
> (60 seconds in each direction)
> .......................................................................
>  Download:  261.94 Mbps
>   Latency: (in msec, 71 pings, 0.00% packet loss)
>       Min: 0.868
>     10pct: 0.930
>    Median: 1.070
>       Avg: 1.056
>     90pct: 1.150
>       Max: 1.420
> .......................................................................
>    Upload:  333.91 Mbps
>   Latency: (in msec, 71 pings, 0.00% packet loss)
>       Min: 0.812
>     10pct: 0.900
>    Median: 1.020
>       Avg: 1.273
>     90pct: 2.040
>       Max: 2.770
>
> Yes, I'm less than a ms away from gstatic.
>
> cero2 at snapon:~/t$ ping gstatic.com
> PING gstatic.com (74.125.239.143) 56(84) bytes of data.
> 64 bytes from nuq05s02-in-f15.1e100.net (74.125.239.143): icmp_req=1
> ttl=59 time=0.924 ms
> 64 bytes from nuq05s02-in-f15.1e100.net (74.125.239.143): icmp_req=2
> ttl=59 time=0.915 ms
>
> 2) Run from my nuc
>
> d at nuc:~/t$ ./speedtest.sh
> Testing against netperf.richb-hanover.com while pinging gstatic.com
> (60 seconds in each direction)
> .......................................................................
>  Download:  21.6 Mbps
>   Latency: (in msec, 71 pings, 0.00% packet loss)
>       Min: 16.100
>     10pct: 16.600
>    Median: 19.700
>       Avg: 19.966
>     90pct: 23.900
>       Max: 27.600
> .......................................................................
>    Upload:  3.55 Mbps
>   Latency: (in msec, 71 pings, 0.00% packet loss)
>       Min: 15.900
>     10pct: 17.500
>    Median: 22.800
>       Avg: 22.821
>     90pct: 26.400
>       Max: 32.000
>
> I then tried the armory's comcast connection...
>
> 3) Run from cero
>
> root at dave-gw:~# ./speedtest.sh
> Testing against netperf.richb-hanover.com while pinging gstatic.com
> (60 seconds in each direction)
> .............................................................
>  Download:  19.48 Mbps
>   Latency: (in msec, 61 pings, 0.00% packet loss)
>       Min: 16.782
>     10pct: 17.921
>    Median: 20.288
>       Avg: 21.067
>     90pct: 25.415
>       Max: 38.299
> .............................................................
>    Upload:  3.92 Mbps++
>   Latency: (in msec, 61 pings, 0.00% packet loss)
>       Min: 18.355
>     10pct: 20.160
>    Median: 24.675
>       Avg: 24.385
>     90pct: 27.600
>       Max: 30.972
>
> So we still have some variance in calculating up/download speeds...
>
> 4) Run from cero with sqm off:
>
> root at comcast-gw:~# ./speedtest.sh
> Testing against netperf.richb-hanover.com while pinging gstatic.com
> (60 seconds in each direction)
> .............................................................
>  Download:  28.1 Mbps
>
> +the sqm system is set to 30000 down
>
>   Latency: (in msec, 61 pings, 0.00% packet loss)
>       Min: 17.246
>     10pct: 151.227
>    Median: 219.129
>       Avg: 197.213
>     90pct: 230.134
>       Max: 237.868
> ..............................................................
>    Upload:  4.55 Mbps
>   Latency: (in msec, 62 pings, 0.00% packet loss)
>       Min: 16.744
>     10pct: 384.688
>    Median: 586.156
>       Avg: 579.208
>     90pct: 755.223
>       Max: 872.024
>
> 5) for comparison, a rrul test is at
> http://snapon.lab.bufferbloat.net/~d/richb-hannover/richb-compare.svg
>
> So doing 4 flows up and down at the same time gives you max latency
> results that are quite a bit larger than with a single flow (which
> makes sense for tcp without congestion avoidance working), and
> microscopic per-flow upload performance compared to what the
> upload-alone test claims.
>
> sqm settings
>
> config queue 'ge00'
>     option interface 'ge00'
>     option script 'simplest.qos'
>     option linklayer 'none'
>     option enabled '1'
>     option download '30000'
>     option upload '4400'
>     option qdisc_advanced '1'
>     option ingress_ecn 'ECN'
>     option egress_ecn 'ECN'
>     option qdisc_really_really_advanced '1'
>     option iqdisc_opts 'target 5ms'
>     option eqdisc_opts 'target 5ms'
>     option qdisc 'nfq_codel'
>
>
>
>
> On Tue, Mar 25, 2014 at 8:16 AM, Rich Brown <richb.hanover at gmail.com> wrote:
>> I have created a 'speedtest.sh' shell script that simulates http://speedtest.net, but does it one better.
>>
>> The default options for the script do a separate TCP_MAERTS and TCP_STREAM for 60 seconds while collecting ping latency. The output of the script shows the down/upload speed as well as a summary of the ping latency, including min, max, average, median, and 10th and 90th percentiles.
>>
>> The script makes it easier to optimize my settings because it makes the latency figures more concrete. (I used to eyeball the ping output, saying, "Hmmm. I think there were fewer outliers than before...")
>>
>> You can see the script on the "Quick Test for Bufferbloat" page on the wiki at:
>>
>> http://www.bufferbloat.net/projects/cerowrt/wiki/Quick_Test_for_Bufferbloat#Speedtestsh-shell-script
>>
>> Enjoy!
>>
>> Rich
>> _______________________________________________
>> Cerowrt-devel mailing list
>> Cerowrt-devel at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>
>
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html



-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html


