[Cerowrt-devel] smoketest BQL-40 is out

Sebastian Moeller moeller0 at gmx.de
Thu Feb 23 21:12:54 EST 2012


Hi Dave, 

I managed to squeeze that test in, hope it helps.



On Feb 23, 2012, at 2:19 PM, Dave Taht wrote:

> your min time is higher than I'd expected, but certainly the mean and
> max and stddev have improved.
> 
> Could you send the output of
> 
> tc -s qdisc show dev ge00


Here is the output from BEFORE the test run:
root at nacktmulle:~# tc -s qdisc show dev ge00
qdisc hfsc 1: root refcnt 2 default 30 
 Sent 42751655 bytes 128031 pkt (dropped 1635, overlimits 244784 requeues 2) 
 backlog 0b 0p requeues 2 
qdisc sfq 100: parent 1:10 limit 60p quantum 1514b depth 24 headdrop divisor 1024 
 ewma 2 min 3174b max 9522b probability 0.12 ecn 
 prob_mark 0 prob_mark_head 0 prob_drop 0
 forced_mark 0 forced_mark_head 0 forced_drop 0
 Sent 10785394 bytes 96191 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc sfq 200: parent 1:20 limit 60p quantum 1514b depth 24 headdrop divisor 1024 
 ewma 2 min 3174b max 9522b probability 0.12 ecn 
 prob_mark 0 prob_mark_head 0 prob_drop 0
 forced_mark 0 forced_mark_head 0 forced_drop 0
 Sent 72822 bytes 370 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc sfq 300: parent 1:30 limit 60p quantum 1514b depth 24 headdrop divisor 1024 
 ewma 2 min 3174b max 9522b probability 0.12 ecn 
 prob_mark 0 prob_mark_head 0 prob_drop 972
 forced_mark 0 forced_mark_head 0 forced_drop 3
 Sent 31856604 bytes 31239 pkt (dropped 2438, overlimits 975 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc sfq 400: parent 1:40 limit 60p quantum 1514b depth 24 headdrop divisor 1024 
 ewma 2 min 3174b max 9522b probability 0.12 ecn 
 prob_mark 0 prob_mark_head 0 prob_drop 0
 forced_mark 0 forced_mark_head 0 forced_drop 0
 Sent 36835 bytes 231 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 

And here is the output AFTER the test run:
root at nacktmulle:~# tc -s qdisc show dev ge00
qdisc hfsc 1: root refcnt 2 default 30 
 Sent 51812429 bytes 170042 pkt (dropped 4040, overlimits 325311 requeues 2) 
 backlog 0b 0p requeues 2 
qdisc sfq 100: parent 1:10 limit 60p quantum 1514b depth 24 headdrop divisor 1024 
 ewma 2 min 3174b max 9522b probability 0.12 ecn 
 prob_mark 0 prob_mark_head 0 prob_drop 0
 forced_mark 0 forced_mark_head 0 forced_drop 0
 Sent 14805550 bytes 130775 pkt (dropped 24, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc sfq 200: parent 1:20 limit 60p quantum 1514b depth 24 headdrop divisor 1024 
 ewma 2 min 3174b max 9522b probability 0.12 ecn 
 prob_mark 0 prob_mark_head 0 prob_drop 0
 forced_mark 0 forced_mark_head 0 forced_drop 0
 Sent 74783 bytes 387 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc sfq 300: parent 1:30 limit 60p quantum 1514b depth 24 headdrop divisor 1024 
 ewma 2 min 3174b max 9522b probability 0.12 ecn 
 prob_mark 0 prob_mark_head 0 prob_drop 1067
 forced_mark 0 forced_mark_head 0 forced_drop 40
 Sent 36890173 bytes 38621 pkt (dropped 5546, overlimits 1107 requeues 0) 
 backlog 0b 0p requeues 0 
qdisc sfq 400: parent 1:40 limit 60p quantum 1514b depth 24 headdrop divisor 1024 
 ewma 2 min 3174b max 9522b probability 0.12 ecn 
 prob_mark 0 prob_mark_head 0 prob_drop 0
 forced_mark 0 forced_mark_head 0 forced_drop 0
 Sent 41923 bytes 259 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
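
(Side note: judging from these dumps, each per-bin qdisc looks as if it had been set up by something roughly like the following; the redflowlimit value is just my guess, the other numbers are read straight off the statistics above, and I have not verified that this is literally what the aqm script runs:

tc qdisc add dev ge00 parent 1:30 handle 300: sfq \
    limit 60 depth 24 headdrop divisor 1024 quantum 1514 \
    redflowlimit 20000 min 3174 max 9522 probability 0.12 ecn
)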


> 
> after a test run?

Here is the ping probe with AQM shaping at 97% of the line rates (uplink line rate 512 kbit/s):
--- netblock-75-79-143-1.dslextreme.com ping statistics ---
100 packets transmitted, 70 packets received, 30.0% packet loss
round-trip min/avg/max/stddev = 10.246/92.621/203.575/39.855 ms

So it looks like the minimum and the standard deviation are quite variable between test runs...
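
(For completeness, the probe is just a plain 100-packet ping to the first hop reported by traceroute, started while the dropbox upload saturates the uplink, i.e. roughly:)

ping -c 100 netblock-75-79-143-1.dslextreme.com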


Best Regards
	Sebastian

> 
>>        Yes, that did the trick! Now AQM is actually even better than QoS for my worst-case scenario: saturating the uplink, starting one downlink elephant, and opening 93 media-heavy tabs in the browser.
>> I now get:
>> 100 packets transmitted, 73 packets received, 27.0% packet loss
>> round-trip min/avg/max/stddev = 33.941/108.083/177.825/22.795 ms
>> 
>> while using QoS I got:
>> 100 packets transmitted, 75 packets received, 25.0% packet loss
>> round-trip min/avg/max/stddev = 18.111/181.506/325.416/45.006 ms
>> 
>> AQM is even better behaved than QoS now. That is pretty cool.
>> 
>> BTW, the web AQM interface managed to set the enabled line in /etc/config/aqm just fine; it just did not seem to do the equivalent of /etc/init.d/aqm start.
>> 
>> Cool stuff.
>> 
>> Best
>>        Sebastian
>> 
>> 
>> 
>>> 
>>>>> 
>>>>> This will not be the final url for the 3.3 development
>>>>> series, but
>>>>> 
>>>>> http://huchra.bufferbloat.net/~cero1/3.3/
>>>>> 
>>>>> has that.
>>>>> 
>>>>> I made a small dent in the qos/aqm problem there,
>>>>> (my intent was to basically treat the above as 'bql-41' -
>>>>> I wanted to treat 3.3 issues and bql issues separately,
>>>>> but lack time to do both, and 3.3 is going swimmingly, so...)
>>>>> 
>>>>> but I'm puzzled as to what you are seeing below.
>>>>> 
>>>>> 0) Did you enable the aqm script? If you merely run it
>>>>> without enabling it, nothing happens. Similarly, the qos script
>>>>> needs to be disabled.
>>>> 
>>>>        Sorry for not being clear; I always had only the module under test enabled, and the other one was always disabled. One thing I noticed: when enabling QoS I get the "Applying changes" info widget and it will state "/etc/config/qos", while enabling AQM will not show any "/etc/config" info, but that might be purely cosmetic. I will investigate, as it looks as though AQM is never really enabled.
>>>>        Question: do I need to install any package under the system->software tab for AQM to fully work? I assume not.
>>> 
>>> No, aside from the web interface being as yet untested...
>>> 
>>>> 
>>>>> 
>>>>> 1) Neither the AQM nor the qos script does ADSL overhead right,
>>>>> and I got puzzled as to what was 'right' after fiddling with it.
>>>> 
>>>>        Well, I agree that the overhead calculation as implemented in generate.sh is quite dubious (it looks like a stochastic approach).
>>> 
>>> Completely dubious and probably dates back to before the stab work.
>>> 
>>>> So I used the fact that cerowrt comes with a quite recent tc and used tc's "stab" option, as that seems to theoretically solve both the ATM carrier's cell quantization of packets and the per-packet overhead issue quite well.
>>> 
>>> Yes.
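
(For reference, the stab form I have been using is roughly the following; the overhead value of 40 bytes is only a placeholder, since the right number depends on the encapsulation, and the hfsc part just mirrors what the script sets up anyway:

tc qdisc add dev ge00 root handle 1: stab \
    linklayer atm overhead 40 mtu 2048 tsize 512 \
    hfsc default 30

stab then rounds every packet up to the next ATM cell and adds the fixed per-packet overhead before the rate calculation, which covers exactly the two issues above.)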
>>>> 
>>>>> 
>>>>> 2) both qos and aqm calculated the sfq 'limit' variable wrong, and
>>>>> neither did ipv6 work right at all.
>>>> 
>>>>        I still have no IPv6 available, so I did not test that. And even with the limit error QoS still works okay. (And I am quite sure AQM will too, once I manage to actually activate it :) )
>>> 
>>> Well, my concern is mostly that bittorrent will misbehave worse under
>>> this system than otherwise.
>>> 
>>> Qos-scripts uses hfsc + sfq for two bins, and hfsc + red for two other
>>> bins. Both sfq and red have improvements that qos-scripts should now
>>> be automatically picking up - red was entirely broken before kernel
>>> 3.2, actually.
>>> 
>>> AQM-scripts - at least this cut at it - will use hfsc + sfqred for all
>>> bins, and saner limits for both sfq and the red component of sfqred
>>> which will hopefully
>>> 
>>> I have to admit that the simplest possible implementation of the new
>>> stuff in debloat is performing pretty well - which is merely htb + 1
>>> bin of sfqred.
>>> 
>>> So if you pull down the latest debloat from the deBloat repo, and just
>>> run a command line of
>>> 
>>> IFACE=ge00 QMODEL=htb_sfq_red UPLINK=whatever_in_kbits /usr/sbin/debloat
>>> 
>>> the results should be pretty good.
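
(For my line that would presumably be something along the lines of

IFACE=ge00 QMODEL=htb_sfq_red UPLINK=497 /usr/sbin/debloat

with 497 kbit/s being roughly 97% of my 512 kbit/s uplink.)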
>>> 
>>> I'm thinking a two-tier model (best effort and background) will be
>>> pretty good; with 3 tiers
>>> I can emulate most of pfifo_fast's behavior, but I don't see a need
>>> for the 4-tier model in qos-scripts. But that's why we test...
>>> 
>>>> 
>>>>> 
>>>>> I believe, but don't remember, that at least those fixes got made in
>>>>> the 3.3rc4-1 code. Actually there's a better fix for the ipv6 issue
>>>>> than what's there…
>>>> 
>>>>        Ah, so I will keep changing my firmware :)
>>> 
>>> Like I said, it's mostly just scripts now.... no huge need for that.
>>> 
>>>>> 
>>>>> 3) I am not intending to stick with the aqm script as structured now,
>>>>> but to finish modifying the 'debloat' script to have all the features required.
>>>>> 
>>>>> that one has a bunch of semi-working queue models in it.
>>>>> 
>>>>> If yer gonna hack scripts, please take a look at that one.
>>>>> There are plenty of things it needs and the qos and aqm scripts
>>>>> are more feature complete, tho…
>>>> 
>>>>        Oh, thanks for the pointer. I have not looked at debloat recently, but I am somewhat confused by the generate.sh and tcrules.awk combination, so I hope that debloat is more accessible to non-programmers...
>>> 
>>> There is some really good stuff in the current combo, but I too found
>>> the amalgamation of shell and awk kind of difficult. I wanted one
>>> language, with floating point, which was why I ended up
>>> going for lua, which has its own problems too, but...
>>>> 
>>>> 
>>>>> 
>>>>> 4) re updates: I have to warn against doing sysupgrade "and keep files
>>>>> in place", as the base filesystem does change, and I'm not making
>>>>> huge attempts at dealing with that. So doing a backup and
>>>>> then a sysupgrade -n (-n meaning don't preserve files, relying on the backup instead)
>>>>> is generally a good idea.
>>>> 
>>>>        Thanks for the advice; I actually reflashed the factory firmware for bql-40 and called it an update, sorry for being unclear. Actually the installation flashing instructions (http://www.bufferbloat.net/projects/cerowrt/wiki/CeroWrt_flashing_instructions) were quite specific about which method one should use.
>>> 
>>> Yep. Large red letters and everything.
>>>> 
>>>> 
>>>>> 
>>>>> anyway:
>>>>> 
>>>>> commit be09b8c15b6dc6bf4cb7da3112c598138a9c77ef
>>>>> Author: Dave Taht <dave.taht at bufferbloat.net>
>>>>> Date:   Tue Feb 14 14:38:40 2012 +0000
>>>>> 
>>>>>    SFQ is limited in packets. RED is in bytes. tcrules.awk conflated these
>>>>> 
>>>>>    The original openwrt shaper took the RED byte calculation...
>>>>>    and reused it to specify a limit for sfq. However, sfq uses
>>>>>    packets rather than bytes, so it was specifying, say 16000 bytes
>>>>>    and translating that to 16000 packets.
>>>>> 
>>>>>    This was not an error prior to 3.3, because SFQ had a
>>>>>    hard-coded limit of 127 packets. It is now.
>>>>> 
>>>>>    So this commit puts a lower and upper bound on the maximum packets
>>>>>    that is sane, but is not pre 3.2 compatible.
>>>>> 
>>>>> commit 44f8febbd34686564516c3261a911bd7cffcf714
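
(Just to check that I follow the first commit: the old code computed a byte budget intended for RED and then handed that same number to sfq as its limit, which counts packets. Purely illustratively, with made-up names rather than the actual tcrules.awk variables:

LIMIT_BYTES=16000                    # what the RED-style calculation produced
LIMIT_PKTS=$(( LIMIT_BYTES / 1514 )) # ~10 full-size packets, not 16000 packets
# clamp to a sane range; before 3.3 SFQ was hard-limited to 127 packets anyway
[ $LIMIT_PKTS -lt 10 ] && LIMIT_PKTS=10
[ $LIMIT_PKTS -gt 127 ] && LIMIT_PKTS=127

with whatever exact bounds the commit actually picked.)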
>>>> 
>>>>        Thanks a lot for the great project.
>>> 
>>> Getting there!
>>> 
>>>> Best
>>>>        Sebastian
>>>> 
>>>> 
>>>>> 
>>>>> 
>>>>> On Thu, Feb 23, 2012 at 6:21 AM, Sebastian Moeller <moeller0 at gmx.de> wrote:
>>>>>> Hi Dave,
>>>>>> 
>>>>>> I finally got around to updating from rc6 to bql-40, to try the ATM shaping. The update was a breeze, great work!
>>>>>>        I am sad to report that for my ATM link AQM does not work as well for me as QoS does. My measurement consists of using ping to get the RTT to the first ISP hop (as taken from traceroute) while concurrently saturating the uplink with a dropbox upload (which I usually give a head start of 10 seconds to reach the bandwidth ceiling): AQM gives the same bad avg RTT of 1.2 seconds as no shaping at all does, while QoS gives me an avg RTT of around 24 ms (best-case RTT is around 13 ms on my link, so the link stays pretty usable).
>>>>>>        I tried to apply the same changes to /usr/lib/aqm/generate.sh and /usr/lib/qos/generate.sh to make them better understand the peculiarities of my ATM ADSL1 connection, but it seems I did something wrong for the AQM script, since my change does not have any effect there… (both modified files attached). I usually only shape up- and downstream to 97% of the line rate, which works ok with QoS. (And all my tests have been done, very unscientifically, using my mac laptop over the 5GHz wireless band of the router… )
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>        While this is not too helpful, it might give some hints for bql-42, as from the roadmap I take it you will tackle the dsl issue there. I am just about to move and switch from DSL to cable internet, so unfortunately this might be my last test…
>>>>>> 
>>>>>> (BTW, I have been playing with the -s option of ping to change the payload size, and for my measly 3008 kbps down, 512 kbps up connection I can actually see the 48-byte ATM cell boundaries in the avg RTTs (-c 100): for each additional ATM cell I get roughly 1 ms added to the RTT (as expected when doing the math with my line rate). So I think it should be possible to figure out whether a link uses ATM as carrier or not (IIRC newer ADSL systems like AT&T's U-verse HSI use ADSL2 over PTM-TC instead of ATM, so such connections still have per-packet overhead to account for but lack the weird ATM repacking issues).
>>>>>>        I also have a hunch that using this method it should be possible to deduce a link's overhead (as understood by tc's stab option) from a properly prepared ping sweep. In other words my hypothesis is that it should be possible to run a script on a non-shaped, idle link and figure out the optimal parameters for stab. But I digress… (And alas, in two days my DSL connection will be gone, and I cannot even test my hypothesis in any meaningful way before then...))
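
(To make that hunch a bit more concrete, the sweep I have in mind is roughly the sketch below; it assumes an idle, unshaped link, the step and count values are just a first guess, and the cut field matches the min/avg/max/stddev summary line of my mac's ping:

HOST=netblock-75-79-143-1.dslextreme.com   # first hop from traceroute
size=16
while [ $size -le 1200 ]; do
    # average RTT for this payload size (field 5 of the summary line)
    avg=$(ping -c 20 -s $size $HOST | tail -1 | cut -d/ -f5)
    echo "$size $avg"
    size=$(( size + 4 ))
done

Plotting avg RTT against payload size should show a staircase with steps of roughly 1 ms wherever the packet spills over into another 48-byte ATM cell, and the position of the steps should encode the fixed per-packet overhead.)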
>>>>>> 
>>>>>> 
>>>>>> best
>>>>>>        Sebastian
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> On Feb 14, 2012, at 11:53 AM, Dave Taht wrote:
>>>>>> 
>>>>>>> http://huchra.bufferbloat.net/~cero1/bql-smoketests/bql-40/
>>>>>>> 
>>>>>>> changes in this release:
>>>>>>> 
>>>>>>> kernel 3.3-rc3
>>>>>>> bind 9.9rc2
>>>>>>> ntpd + dnssec removed (too buggy)
>>>>>>> snmpd installed by default
>>>>>>> fprobe installed by default
>>>>>>> avahi installed by default
>>>>>>> 
>>>>>>> sort of better working 'aqm' shaper installed
>>>>>>> ** when configured uses hfsc + sfqred
>>>>>>> ** still has trouble with ipv6, diffserv, and tcp elephants
>>>>>>> ** no adsl overhead support
>>>>>>> 
>>>>>>> I will be travelling later this week. What I'm mostly
>>>>>>> working on right now is better ipv6 support.
>>>>>>> 
>>>>>>> --
>>>>>>> Dave Täht
>>>>>>> SKYPE: davetaht
>>>>>>> US Tel: 1-239-829-5608
>>>>>>> FR Tel: 0638645374
>>>>>>> http://www.bufferbloat.net
>>>>>>> _______________________________________________
>>>>>>> Cerowrt-devel mailing list
>>>>>>> Cerowrt-devel at lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Dave Täht
>>>>> SKYPE: davetaht
>>>>> US Tel: 1-239-829-5608
>>>>> http://www.bufferbloat.net
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Dave Täht
>>> SKYPE: davetaht
>>> US Tel: 1-239-829-5608
>>> http://www.bufferbloat.net
>> 
> 
> 
> 
> -- 
> Dave Täht
> SKYPE: davetaht
> US Tel: 1-239-829-5608
> http://www.bufferbloat.net



