Correction.

Using 3.10.9-2

On 25 Aug 2013, at 15:26, Fred Stratton <fredstratton@imap.cc> wrote:

> Thank you.
>
> This is an initial response.
>
> Am using 3.10.2-1 currently, with the standard AQM interface. This does not have the pull-down menu of your interface, which is why I ask if both are active.
>
> On 25 Aug 2013, at 14:59, Sebastian Moeller <moeller0@gmx.de> wrote:
>
>> Hi Fred,
>>
>> On Aug 25, 2013, at 12:17, Fred Stratton <fredstratton@imap.cc> wrote:
>>
>>> On 25 Aug 2013, at 10:21, Fred Stratton <fredstratton@imap.cc> wrote:
>>>
>>>> As the person with the most flaky ADSL link, I point out that none of these recent, welcome changes are having any effect here, with an uplink speed of circa 950 kbits/s.
>>
>> Okay, how flaky is your link? What rate of errors do you see while testing? I am especially interested in the CRC errors, and the ES, SES and HEC counts, just to get an idea how flaky the line is...
>>
>>>> The reason I mention this is that it is still impossible to watch iPlayer Flash streaming video and download at the same time; the iPlayer stream fails. The point of the exercise was to achieve this.
>>>>
>>>> The uplink delay is consistently around 650ms, which appears to be too high for effective streaming. In addition, the uplink stream has multiple breaks, presumably outages, if the uplink rate is capped at, say, 700 kbits/s.
>>
>> Well, watching video is going to stress your downlink, so the uplink should not be saturated by the ACKs, and the concurrent downloads do not stress your uplink either, except for their ACKs; so this points to downlink errors, as far as I can tell from the data you have given. If the uplink has repeated outages, however, your problems might be unfixable, because these, if long enough, will cause lost ACKs and will probably trigger retransmissions, independent of whether the link layer adjustments work or not. (You could test this by shaping your up- and downlink to <= 50% of the link rates and disabling all link layer adjustments; 50% is larger than the ATM worst case, so that should have you covered. Well, unless your DSL link has an excessive number of tones reserved for forward error correction (FEC).)
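>> (To make the ATM worst case concrete, a quick back-of-the-envelope sketch; the 40-byte overhead is just an assumed example value, not your measured one:)
>>
>> # ATM carries each packet in 48-byte cell payloads; every cell costs 53
>> # bytes on the wire, and the last cell is padded out to a full 48 bytes.
>> ip_len=64       # bytes in the IP packet (small packets fare worst)
>> overhead=40     # assumed per-packet encapsulation overhead
>> cells=$(( (ip_len + overhead + 47) / 48 ))    # cells, rounded up
>> wire=$(( cells * 53 ))                        # bytes actually on the wire
>> echo "${ip_len} byte packet -> ${cells} cells -> ${wire} bytes on the wire"
>> # 64 bytes -> 3 cells -> 159 wire bytes, almost 2.5x the packet size;
>> # a 1500 byte packet expands to 33 cells -> 1749 bytes, roughly 17%.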
> Uptime 100655
> downstream 12162 kbits/s
> CRC errors 10154
> FEC errors 464
> HEC errors 758
>
> upstream 1122 kbits/s
> no errors in period.
>
>> Could you perform the following test by any chance: start iPlayer and your typical downloads, and then have a look at http://gw.home.lan:81 and the following tab chain: Status -> Realtime Graphs -> Traffic -> Realtime Traffic. If during your test the outbound rate stays well below your shaped limit and you still encounter the stream failure, I would say it is safe to ignore the link layer adjustments as the cause of your issues.
>
> Am happy reducing rate to fifty per cent, but the uplink appears to have difficulty operating below circa 500 kbits/s. This should not be so. I shall try a fourth time.
>
>>>> YouTube has no problems.
>>>>
>>>> I remain unclear whether the use of tc-stab and htb are mutually exclusive options, using the present stock interface.
>>
>> Well, that depends on the version of cerowrt you use: anything below 3.10.9-1, I believe, lacks a functional HTB link layer adjustment mechanism, so there you should select tc_stab. My most recent modifications to Toke and Dave's AQM package only allow you to select one or the other. In any case, selecting BOTH is not a reasonable thing to do: at best it will merely apply the overhead twice, and at worst it will also do the link layer adjustments (LLA) twice.
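>> (Roughly, the two mutually exclusive mechanisms look like this; the commands are illustrative only, with made-up device name, rate and overhead:)
>>
>> # tc_stab: a size table on the root qdisc; the kernel inflates the
>> # accounted packet size to whole ATM cells before HTB ever sees it
>> tc qdisc add dev ge00 root handle 1: stab linklayer atm overhead 40 \
>>     htb default 12
>>
>> # htb_private: HTB itself does the ATM accounting via its rate tables
>> tc qdisc add dev ge00 root handle 1: htb default 12
>> tc class add dev ge00 parent 1: classid 1:1 htb rate 2430kbit \
>>     ceil 2430kbit linklayer atm overhead 40
>>
>> # selecting both would account for the overhead (and, worst case, the
>> # cell quantization) twice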
> See initial comments.
>
>>>> The current ISP connection is IPoA LLC.
>>>
>>> Correction - Bridged LLC.
>>
>> Well, I think you should try to figure out your overhead empirically and check the encapsulation. I would recommend you run the following script on your link overnight and send me the log file it produces:
>>
>> #!/bin/bash
>> # TODO use seq or bash to generate a list of the requested sizes (to allow for non-equidistantly spaced sizes)
>>
>> # Telekom Tuebingen Moltkestrasse 6
>> TECH=ADSL2
>> # finding a proper target IP is somewhat of an art: just traceroute a remote site
>> # and find the nearest host that reliably responds to pings and shows the smallest variation of ping times
>> TARGET=87.186.197.70		# T
>> DATESTR=`date +%Y%m%d_%H%M%S`	# to allow multiple sequential records
>> LOG=ping_sweep_${TECH}_${DATESTR}.txt
>>
>> # by default non-root ping will only send one packet per second, so work around that by calling ping independently for each packet
>> # empirically figure out the shortest period still giving the standard ping time (to avoid being slow-pathed by our host)
>> PINGPERIOD=0.01		# in seconds
>> PINGSPERSIZE=10000
>>
>> # Start, needed to find the per-packet overhead dependent on the ATM encapsulation
>> # to reliably show ATM quantization one would like to see at least two steps, so cover a range > 2 ATM cells (so > 96 bytes)
>> SWEEPMINSIZE=16		# 64bit systems seem to require 16 bytes of payload to include a timestamp...
>> SWEEPMAXSIZE=116
>>
>> n_SWEEPS=`expr ${SWEEPMAXSIZE} - ${SWEEPMINSIZE}`
>>
>> i_sweep=0
>> i_size=0
>>
>> while [ ${i_sweep} -lt ${PINGSPERSIZE} ]
>> do
>>     (( i_sweep++ ))
>>     echo "Current iteration: ${i_sweep}"
>>     # now loop from sweepmin to sweepmax
>>     i_size=${SWEEPMINSIZE}
>>     while [ ${i_size} -le ${SWEEPMAXSIZE} ]
>>     do
>>         echo "${i_sweep}. repetition of ping size ${i_size}"
>>         ping -c 1 -s ${i_size} ${TARGET} >> ${LOG} &
>>         (( i_size++ ))
>>         # we need a sleep binary that allows non-integer times (GNU sleep is fine, as is sleep of macosx 10.8.4)
>>         sleep ${PINGPERIOD}
>>     done
>> done
>>
>> #tail -f ${LOG}
>>
>> echo "Done... ($0)"
>>
>> Please set TARGET to the closest IP host on the ISP side of your link that gives reliable ping RTTs (test with "ping -c 100 -s 16 your.best.host.ip"). Also test whether the RTTs stay in the same ballpark when you reduce the ping period to 0.01 (you might have to increase the period until the RTTs are close to the standard one-ping-per-second case). I can then run this through my matlab code to detect the actual overhead. (I am happy to share the code as well, if you have matlab available; it might even run under octave, but I have not tested that since the last major changes.)
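>> (If you want a first look yourself before mailing the log, something along these lines, an untested sketch, prints the mean RTT per packet size; on an ATM link the means should step up once per 48-byte cell boundary instead of growing smoothly:)
>>
>> awk '/bytes from/ {
>>     size = $1        # "24 bytes from ...", i.e. payload + 8 byte ICMP header
>>     for (i = 1; i <= NF; i++)
>>         if ($i ~ /^time=/) { sub(/^time=/, "", $i); sum[size] += $i; n[size]++ }
>> } END {
>>     for (s in sum) printf "%4d bytes: %.2f ms mean of %d pings\n", s, sum[s]/n[s], n[s]
>> }' ping_sweep_ADSL2_*.txt | sort -n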
> To follow at some point.
>
>>>> Whatever byte value is used for tc-stab makes no change.
>>
>> I assume you are talking about the overhead? A missing link layer adjustment will eat between 10% and 50% of your link bandwidth, while a missing overhead value is more benign. The only advice I can give is to pick the overhead that actually describes your link. I am willing to help you figure this out.
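>> (For reference, the per-packet overheads usually quoted for the common ATM encapsulations; I am citing these from memory, so please verify them against your modem's settings before relying on them:)
>>
>> # bytes of overhead on top of each IP packet, by encapsulation:
>> # IPoA,    VC/Mux:    8      IPoA,    LLC/SNAP: 16
>> # PPPoA,   VC/Mux:   10      PPPoA,   LLC:      14
>> # Bridged, VC/Mux:   24      Bridged, LLC/SNAP: 32
>> # PPPoE,   VC/Mux:   32      PPPoE,   LLC/SNAP: 40
>> # (bridged variants: add 4 more if the modem transmits the Ethernet FCS)
>> # So for a bridged LLC link, 32 looks more plausible than 18.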
> The link is bridged LLC. Have been using 18 and 32 for test purposes. I shall move to PPPoA VC-MUX in 4 months.
>
>>>> I have applied the ingress modification to simple.qos, keeping the original version, and tested both.
>>
>> For which cerowrt version? It is only expected to do something for 3.10.9-1 and upwards; before that, the HTB link layer adjustment did NOT work.
>
> Using 3.10.9-2
>
>>>> I have changed the Powerline adaptors I use to ones with known smaller buffers, though this is unlikely to be a rate-limiting step.
>>>>
>>>> I have replaced the 2Wire gateway, known to be heavily buffered, with a bridged Huawei HG612, with a Broadcom 6368 SoC.
>>>>
>>>> This device has a permanently-on telnet interface, with a simple password, which cannot be changed other than by firmware recompilation…
>>>>
>>>> Telnet, however, allows txqueuelen to be reduced from 1000 to 0.
>>>>
>>>> None of these changes affect the problematic uplink delay.
>>
>> So how did you measure the uplink delay? The RRUL plots you sent me show an increase in ping RTT from around 50ms to 80ms with tc_stab and fq_codel on simplest.qos; how does that reconcile with a 650ms uplink delay? Netalyzr?
>
> Max Planck and Netalyzr produce the same figure. I use both, but Max Planck gives you circa 3 tries per IP address per 24 hours.
>
>>>> On 24 Aug 2013, at 21:51, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>
>>>>> Hi Dave,
>>>>>
>>>>> On Aug 23, 2013, at 22:29, Dave Taht <dave.taht@gmail.com> wrote:
>>>>>
>>>>>> On Fri, Aug 23, 2013 at 12:56 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>> Hi Dave,
>>>>>>
>>>>>> I guess I found the culprit: once I added $ADSLL to the ingress() in simple.qos:
>>>>>>
>>>>>> I had that in there originally. I ripped it out because it seemed to help with ADSL at the time, as I was unaware of the extent to which the whole subsystem was busted!
>>>>>
>>>>> Ah, and I had added my stab-based version to both ingress() and egress(), assuming that both links need to be kept under control. So with the fixed htb link layer adjustment (LLA) it only worked on the uplink, and in retrospect, looking at my initial test data, I actually see one of the hallmarks of a working LLA for the upstream. (The upstream goodput was reduced compared to the no-LLA test, because the LLA makes the actually-sent packets larger, so fewer packets fit through the shaped link.) But since I was not expecting only half a working system, I overlooked that in the data.
>>>>> But looking at the latency of the ping RTT probes, it becomes quite clear that doing link layer adjustments only on the uplink is even worse than not doing them at all (because the latency is still almost as bad as without LLA, but the uplink bandwidth is reduced).
>>>>>
>>>>>> I like to think of the process we've just gone through as "wow, we just fixed the uk, and a few other countries". :) Feels kind of good, doesn't it? (Too bad the pay sucks.)
>>>>>
>>>>> Oh, I cannot complain about the pay, I have a day job in a totally different field, so this is more of a hobby for me :)
>>>>>
>>>>>> I mean, jeeze, chopping another 30+ms off the latency of that many systems should get medals from economists worldwide monitoring productivity.
>>>>>>
>>>>>> Does anyone have a date/kernel version on when linklayer overhead compensation stopped working? There was a bug even prior to 3.8 that looked bad. (And RED was busted for 3 years.)
>>>>>>
>>>>>> Another step would be trying to improve openwrt's native qos system somewhat in the DSL case. They don't use this subsystem (probably because it didn't work), and it's also broken on ipv6. (They use conntrack.)
>>>>>
>>>>> Oh, in the BQL-40 timeframe I hacked the stab-based LLA into their generate.sh, and it worked quite well, even though my measurements at the time were quite crude. Since their qos scripts are HFSC-based, the HTB-private implementation is not going to do them any good. Luckily that no longer seems to matter, as both methods now perform identically, as they should. (Well, Jesper's latest changes are nicer than the old table lookup, but it should be relatively easy to implement the same for stab; heck, once I get my linux machine up, I might take this as my first attempt at making local changes to the kernel :) ). So adding it to openwrt proper should be a piece of cake. Do you know by any chance who would be the best person to contact for that?
>>>>>
>>>>>> At some point I'd like to have a mechanism for saner diffserv classification on egress, and to clamp ingress values to egress ones. There is a ton of work going on on finding sane codepoints for webrtc in the ietf….
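>>>>>> (Clamping/squashing ingress dscp can already be approximated with a mangle rule; a sketch, assuming ge00 is the wan device:)
>>>>>>
>>>>>> # zero the DSCP field on everything arriving over the wan link
>>>>>> iptables  -t mangle -A PREROUTING -i ge00 -j DSCP --set-dscp 0
>>>>>> ip6tables -t mangle -A PREROUTING -i ge00 -j DSCP --set-dscp 0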
>>>>>> ingress() {
>>>>>>
>>>>>> CEIL=$DOWNLINK
>>>>>> PRIO_RATE=`expr $CEIL / 3` # Ceiling for priority
>>>>>> BE_RATE=`expr $CEIL / 6`   # Min for best effort
>>>>>> BK_RATE=`expr $CEIL / 6`   # Min for background
>>>>>> BE_CEIL=`expr $CEIL - 64`  # A little slop at the top
>>>>>>
>>>>>> LQ="quantum `get_mtu $IFACE`"
>>>>>>
>>>>>> $TC qdisc del dev $IFACE handle ffff: ingress 2> /dev/null
>>>>>> $TC qdisc add dev $IFACE handle ffff: ingress
>>>>>>
>>>>>> $TC qdisc del dev $DEV root 2> /dev/null
>>>>>> $TC qdisc add dev $DEV root handle 1: ${STABSTRING} htb default 12
>>>>>> $TC class add dev $DEV parent 1: classid 1:1 htb $LQ rate ${CEIL}kbit ceil ${CEIL}kbit $ADSLL
>>>>>> $TC class add dev $DEV parent 1:1 classid 1:10 htb $LQ rate ${CEIL}kbit ceil ${CEIL}kbit prio 0 $ADSLL
>>>>>> $TC class add dev $DEV parent 1:1 classid 1:11 htb $LQ rate 32kbit ceil ${PRIO_RATE}kbit prio 1 $ADSLL
>>>>>> $TC class add dev $DEV parent 1:1 classid 1:12 htb $LQ rate ${BE_RATE}kbit ceil ${BE_CEIL}kbit prio 2 $ADSLL
>>>>>> $TC class add dev $DEV parent 1:1 classid 1:13 htb $LQ rate ${BK_RATE}kbit ceil ${BE_CEIL}kbit prio 3 $ADSLL
>>>>>>
>>>>>> # I'd prefer to use a pre-nat filter but that causes permutation...
>>>>>>
>>>>>> $TC qdisc add dev $DEV parent 1:11 handle 110: $QDISC limit 1000 $ECN `get_quantum 500` `get_flows ${PRIO_RATE}`
>>>>>> $TC qdisc add dev $DEV parent 1:12 handle 120: $QDISC limit 1000 $ECN `get_quantum 1500` `get_flows ${BE_RATE}`
>>>>>> $TC qdisc add dev $DEV parent 1:13 handle 130: $QDISC limit 1000 $ECN `get_quantum 1500` `get_flows ${BK_RATE}`
>>>>>>
>>>>>> diffserv $DEV
>>>>>>
>>>>>> ifconfig $DEV up
>>>>>>
>>>>>> # redirect all IP packets arriving in $IFACE to ifb0
>>>>>>
>>>>>> $TC filter add dev $IFACE parent ffff: protocol all prio 10 u32 \
>>>>>>     match u32 0 0 flowid 1:1 action mirred egress redirect dev $DEV
>>>>>>
>>>>>> }
>>>>>>
>>>>>> I get basically the same RRUL ping RTTs for htb_private as for tc_stab. So Jesper was right: the patch seems to fix the issue. I guess I should send out my current version of yours and Toke's AQM scripts soon.
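>>>>>> (For context, since the listing above uses them: in my version the two variables are built roughly like this; reconstructed from memory, not a verbatim copy of the script:)
>>>>>>
>>>>>> # with tc_stab selected: a size table prepended to the root qdisc
>>>>>> STABSTRING="stab mtu 2048 tsize 512 overhead ${OVERHEAD} linklayer atm"
>>>>>> # with htb_private selected: appended to every htb class instead
>>>>>> ADSLL="overhead ${OVERHEAD} linklayer atm"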
>>>>>> Best
>>>>>> Sebastian
>>>>>>
>>>>>> P.S.: I am not sure whether I want to tackle the PIE issue today...
>>>>>>
>>>>>> On Aug 23, 2013, at 21:47, Dave Taht <dave.taht@gmail.com> wrote:
>>>>>>
>>>>>>> quick note: running this script requires that you
>>>>>>>
>>>>>>> ifconfig ifb0 up
>>>>>>>
>>>>>>> at some point.
>>>>>>
>>>>>> In my case on cerowrt you took care of that already...
>>>>>>
>>>>>>> On Fri, Aug 23, 2013 at 12:38 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>>> Hi Dave,
>>>>>>>
>>>>>>> On Aug 23, 2013, at 07:13, Dave Taht <dave.taht@gmail.com> wrote:
>>>>>>>
>>>>>>>> On Thu, Aug 22, 2013 at 5:52 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>>>> Hi List, hi Jesper,
>>>>>>>>
>>>>>>>> So I tested 3.10.9-1 to assess the status of the HTB atm link layer adjustments, to see whether the recent changes resurrected this feature.
>>>>>>>> Unfortunately the htb_private link layer adjustment is still broken (RRUL ping RTT against Toke's netperf host in Germany of ~80ms, the same as without link layer adjustments). On the bright side, the tc_stab method still works as well as before (ping RTT around 40ms).
>>>>>>>> I would like to humbly propose to use the tc stab method in cerowrt to perform ATM link layer adjustments as the default. To repeat myself: simply telling the kernel a lie about the packet size seems more robust than fudging HTB's rate tables. Especially since the kernel already fudges the packet size to account for the ethernet header and then some, this path should receive more scrutiny by virtue of having more users?
>>>>>>>>
>>>>>>>> It's my hope that the atm code works but is misconfigured. You can output the tc commands by overriding the TC variable with TC="echo tc" and paste them here.
>>>>>>>
>>>>>>> So I went for TC="logger tc" and used logread to harvest the commands, as I could not find the echo output, but I guess that should not matter. So here is the result (slightly edited to remove the log timestamps and log level):
>>>>>>>
>>>>>>> tc qdisc del dev ge00 root
>>>>>>> tc qdisc add dev ge00 root handle 1: htb default 12
>>>>>>> tc class add dev ge00 parent 1: classid 1:1 htb quantum 1500 rate 2430kbit ceil 2430kbit mpu 0 linklayer adsl overhead 40 mtu 2047
>>>>>>> tc class add dev ge00 parent 1:1 classid 1:10 htb quantum 1500 rate 2430kbit ceil 2430kbit prio 0 mpu 0 linklayer adsl overhead 40 mtu 2047
>>>>>>> tc class add dev ge00 parent 1:1 classid 1:11 htb quantum 1500 rate 128kbit ceil 810kbit prio 1 mpu 0 linklayer adsl overhead 40 mtu 2047
>>>>>>> tc class add dev ge00 parent 1:1 classid 1:12 htb quantum 1500 rate 405kbit ceil 2366kbit prio 2 mpu 0 linklayer adsl overhead 40 mtu 2047
>>>>>>> tc class add dev ge00 parent 1:1 classid 1:13 htb quantum 1500 rate 405kbit ceil 2366kbit prio 3 mpu 0 linklayer adsl overhead 40 mtu 2047
>>>>>>> tc qdisc add dev ge00 parent 1:11 handle 110: fq_codel limit 600 noecn quantum 300
>>>>>>> tc qdisc add dev ge00 parent 1:12 handle 120: fq_codel limit 600 noecn quantum 300
>>>>>>> tc qdisc add dev ge00 parent 1:13 handle 130: fq_codel limit 600 noecn quantum 300
>>>>>>> tc filter add dev ge00 parent 1:0 protocol all prio 999 u32 match ip protocol 0 0x00 flowid 1:12
>>>>>>> tc filter add dev ge00 parent 1:0 protocol ip prio 1 handle 1 fw classid 1:11
>>>>>>> tc filter add dev ge00 parent 1:0 protocol ip prio 2 handle 2 fw classid 1:12
>>>>>>> tc filter add dev ge00 parent 1:0 protocol ip prio 3 handle 3 fw classid 1:13
>>>>>>> tc filter add dev ge00 parent 1:0 protocol ipv6 prio 4 handle 1 fw classid 1:11
>>>>>>> tc filter add dev ge00 parent 1:0 protocol ipv6 prio 5 handle 2 fw classid 1:12
>>>>>>> tc filter add dev ge00 parent 1:0 protocol ipv6 prio 6 handle 3 fw classid 1:13
>>>>>>> tc filter add dev ge00 parent 1:0 protocol arp prio 7 handle 1 fw classid 1:11
>>>>>>> tc qdisc del dev ge00 handle ffff: ingress
>>>>>>> tc qdisc add dev ge00 handle ffff: ingress
>>>>>>> tc qdisc del dev ifb0 root
>>>>>>> tc qdisc add dev ifb0 root handle 1: htb default 12
>>>>>>> tc class add dev ifb0 parent 1: classid 1:1 htb quantum 1500 rate 15494kbit ceil 15494kbit
>>>>>>> tc class add dev ifb0 parent 1:1 classid 1:10 htb quantum 1500 rate 15494kbit ceil 15494kbit prio 0
>>>>>>> tc class add dev ifb0 parent 1:1 classid 1:11 htb quantum 1500 rate 32kbit ceil 5164kbit prio 1
>>>>>>> tc class add dev ifb0 parent 1:1 classid 1:12 htb quantum 1500 rate 2582kbit ceil 15430kbit prio 2
>>>>>>> tc class add dev ifb0 parent 1:1 classid 1:13 htb quantum 1500 rate 2582kbit ceil 15430kbit prio 3
>>>>>>> tc qdisc add dev ifb0 parent 1:11 handle 110: fq_codel limit 1000 ecn quantum 500
>>>>>>> tc qdisc add dev ifb0 parent 1:12 handle 120: fq_codel limit 1000 ecn quantum 1500
>>>>>>> tc qdisc add dev ifb0 parent 1:13 handle 130: fq_codel limit 1000 ecn quantum 1500
>>>>>>> tc filter add dev ifb0 parent 1:0 protocol all prio 999 u32 match ip protocol 0 0x00 flowid 1:12
>>>>>>> tc filter add dev ifb0 protocol ip parent 1:0 prio 1 u32 match ip tos 0x00 0xfc classid 1:12
>>>>>>> tc filter add dev ifb0 protocol ipv6 parent 1:0 prio 2 u32 match ip6 priority 0x00 0xfc classid 1:12
>>>>>>> tc filter add dev ifb0 protocol ip parent 1:0 prio 3 u32 match ip tos 0x20 0xfc classid 1:13
>>>>>>> tc filter add dev ifb0 protocol ipv6 parent 1:0 prio 4 u32 match ip6 priority 0x20 0xfc classid 1:13
>>>>>>> tc filter add dev ifb0 protocol ip parent 1:0 prio 5 u32 match ip tos 0x10 0xfc classid 1:11
>>>>>>> tc filter add dev ifb0 protocol ipv6 parent 1:0 prio 6 u32 match ip6 priority 0x10 0xfc classid 1:11
>>>>>>> tc filter add dev ifb0 protocol ip parent 1:0 prio 7 u32 match ip tos 0xb8 0xfc classid 1:11
>>>>>>> tc filter add dev ifb0 protocol ipv6 parent 1:0 prio 8 u32 match ip6 priority 0xb8 0xfc classid 1:11
>>>>>>> tc filter add dev ifb0 protocol ip parent 1:0 prio 9 u32 match ip tos 0xc0 0xfc classid 1:11
>>>>>>> tc filter add dev ifb0 protocol ipv6 parent 1:0 prio 10 u32 match ip6 priority 0xc0 0xfc classid 1:11
>>>>>>> tc filter add dev ifb0 protocol ip parent 1:0 prio 11 u32 match ip tos 0xe0 0xfc classid 1:11
>>>>>>> tc filter add dev ifb0 protocol ipv6 parent 1:0 prio 12 u32 match ip6 priority 0xe0 0xfc classid 1:11
>>>>>>> tc filter add dev ifb0 protocol ip parent 1:0 prio 13 u32 match ip tos 0x90 0xfc classid 1:11
>>>>>>> tc filter add dev ifb0 protocol ipv6 parent 1:0 prio 14 u32 match ip6 priority 0x90 0xfc classid 1:11
>>>>>>> tc filter add dev ifb0 parent 1:0 protocol arp prio 15 handle 1 fw classid 1:11
>>>>>>> tc filter add dev ge00 parent ffff: protocol all prio 10 u32 match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb0
>>>>>>>
>>>>>>> I notice this only seems to show up for egress(); but looking at simple.qos, ingress() is not adding ${ADSLL} at all, so that is to be expected. There is nothing in dmesg at all.
>>>>>>>
>>>>>>> So I am off to add ADSLL to ingress() as well and then test RRUL again...
>>>>>>>
>>>>>>> Jesper, please let me know if this looks reasonable; at least to my eye it seems to fit with what "tc qdisc add htb help" tells me. I tried your:
>>>>>>> echo "func __detect_linklayer +p" > /sys/kernel/debug/dynamic_debug/control
>>>>>>> but got no output, even though debugfs was already mounted…
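>>>>>>> (For completeness, the full dynamic_debug dance as I understand it; untested beyond the above, and it needs CONFIG_DYNAMIC_DEBUG in the kernel:)
>>>>>>>
>>>>>>> mount -t debugfs none /sys/kernel/debug 2>/dev/null   # no-op if already mounted
>>>>>>> echo 'func __detect_linklayer +p' > /sys/kernel/debug/dynamic_debug/control
>>>>>>> dmesg | tail   # the pr_debug output, if any, lands in the kernel log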
>>>>>>> Best
>>>>>>> Sebastian
>>>>>>>
>>>>>>>> Now, I have been testing this using Dave's most recent cerowrt alpha version with a 3.10.9 kernel on mips hardware. I think this kernel should contain all the htb fixes, including commit 8a8e3d84b17 (net_sched: restore "linklayer atm" handling), but am not fully sure.
>>>>>>>>
>>>>>>>> It does.
>>>>>>>>
>>>>>>>> @Dave is there an easy way to find which patches you applied to the kernels of the cerowrt (testing-)releases?
>>>>>>>>
>>>>>>>> Normally I DO commit stuff that is in testing, but my big push this time around was to get everything important into mainline 3.10, as it will be the "stable" release for a good long time.
>>>>>>>>
>>>>>>>> So I am still mostly working the x86 side at the moment. I WAS kind of hoping that everything I just landed would make it up to 3.10. But for your perusal:
>>>>>>>>
>>>>>>>> http://snapon.lab.bufferbloat.net/~cero2/patches/3.10.9-1/ has most of the kernel patches I used in it. 3.10.9-2 has the ipv6subtrees patch ripped out due to another weird bug I'm looking at. (It also has support for ipv6 nat, thanks to the ever prolific Stephen Walker heeding the call for patches...). 100% totally untested; I have this weird bug to figure out how to fix next:
>>>>>>>>
>>>>>>>> http://lists.alioth.debian.org/pipermail/babel-users/2013-August/001419.html
>>>>>>>>
>>>>>>>> I fear it's a comparison gone south, maybe in bradley's optimizations for not kernel trapping; don't know.
>>>>>>>>
>>>>>>>> 3.10.9-2 also disables dnsmasq's dhcpv6 in favor of 6relayd. I HATE losing the close naming integration, but I had to try this....
>>>>>>>>
>>>>>>>> If you guys want me to start committing and pushing patches again, I'll do it, but most of that stuff will end up in 3.10.10, I think, in a couple of days. The rest might make 3.12. Pie has to survive scrutiny on the netdev list in particular.
>>>>>>>>
>>>>>>>> While I have your attention :) I also tested 3.10.9-1's pie, and it is way better than 3.10.6-1's (RRUL ping RTTs around 110ms instead of 3000ms), but still worse than fq_codel (ping RTTs around 40ms with proper atm link layer adjustments).
>>>>>>>>
>>>>>>>> This is with simple.qos, I imagine? Simplest should do better than that with pie. Judging from how its estimator works, I think it will do badly with multiple queues. But testing will tell...
>>>>>>>>
>>>>>>>> But, yea, this pie is actually usable, and the previous one wasn't. Thank you for looking at it!
>>>>>>>>
>>>>>>>> It is different from cisco's last pie drop in that it can do ecn, does local congestion notification, makes better use of net_random, is mostly KernelStyle, and I forget what else.
>>>>>>>>
>>>>>>>> There is still a major rounding error in the code, and I'd like cisco to fix the api so it uses syntax identical to codel's. Right now you specify "target 8" to get "target 7", and the "ms" is implied. Target 5 becomes target 3. The default target is a whopping 20 (rounded to 19), which is in part where your 70+ms of extra delay came from.
>>>>>>>>
>>>>>>>> Multiple parties have the delusion that 20ms is "good enough".
>>>>>>>>
>>>>>>>> Part of the remaining delay may also be rounding error. Cisco uses kernels with HZ=1000, cero uses HZ=250.....
>>>>>>>>
>>>>>>>> Anyway, to get more comparable tests... you can fiddle with the two $QDISC lines in simple*.qos to add a target 8 to get closer to a codel 5ms config, but that would break a codel config, which treats target 8 as target 8us.
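>>>>>>>> (Concretely, something like this; illustrative only, with the device and handles as in simple.qos:)
>>>>>>>>
>>>>>>>> # with pie as $QDISC: a bare "target 8" means 8 ms (rounded down to 7)
>>>>>>>> $TC qdisc add dev $DEV parent 1:12 handle 120: pie limit 1000 target 8 ecn
>>>>>>>> # with fq_codel: a bare target would be microseconds, so spell out the unit
>>>>>>>> $TC qdisc add dev $DEV parent 1:12 handle 120: fq_codel limit 1000 target 5ms ecn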
>>>>>>>> I MIGHT, if I get energetic enough, fix the API, the time accounting, and a few other things in pie. The problem is that ns2_codel still seems more effective on most workloads, and *fq_codel smokes absolutely everything. There are a few places where pie is a win over straight codel, notably on packet floods. And it may well be easier to retrofit into existing hardware fast-path designs.
>>>>>>>>
>>>>>>>> I worry about interactions between pie and other stuff. It seems inevitable at this point that some form of pie will be widely deployed, and I simply haven't tried enough traffic types and RTTs to draw a firm conclusion, period. Long RTTs are the last big place where codel and pie and fq_codel have to be seriously tested.
>>>>>>>>
>>>>>>>> ns2_codel is looking pretty good now, at the shorter RTTs I've tried. A big problem I have is getting decent long-RTT emulation out of netem (some preliminary code is up on github).
>>>>>>>>
>>>>>>>> ... and getting cero stable enough for others to actually use - next up is fixing the userspace problems.
>>>>>>>>
>>>>>>>> ... and trying to make a small dent in the wifi problem along the way (couple of commits coming up).
>>>>>>>>
>>>>>>>> ... and finding funding to get through the winter.
>>>>>>>>
>>>>>>>> There are probably a few other things on that list, but I forget.
>>>>>>>>
>>>>>>>> Oh, yea, since the aqm wg was voted on to be formed, I decided I could quit smoking.
>>>>>>>>
>>>>>>>> While I am not able to build kernels, it seems that I am able to quickly test whether link layer adjustments work or not. So I am happy to help where I can :)
>>>>>>>>
>>>>>>>> Give pie target 8 and target 5 a shot, please? ns2_codel target 3ms and target 7ms, too. fq_codel, same....
>>>>>>>>
>>>>>>>> tc -s qdisc show dev ge00
>>>>>>>> tc -s qdisc show dev ifb0
>>>>>>>>
>>>>>>>> would be useful info to have in general after each test.
>>>>>>>>
>>>>>>>> TIA.
>>>>>>>>
>>>>>>>> There are also things like tcp_upload and tcp_download and tcp_bidirectional that are useful tests in the rrul suite.
>>>>>>>>
>>>>>>>> Thank you for your efforts on these early alpha releases. I hope things will stabilize more soon, and I'll fold your aqm stuff into my next attempt this weekend.
>>>>>>>>
>>>>>>>> This is some of the stuff I know that needs fixing in userspace:
>>>>>>>>
>>>>>>>> * TODO readlink not found
>>>>>>>> * TODO netdev user missing
>>>>>>>> * TODO Wed Dec 5 17:14:46 2012 authpriv.error dnsmasq: found already running DHCP-server on interface 'se00', refusing to start; use 'option force 1' to override
>>>>>>>> * TODO [ 18.480468] Mirror/redirect action on
>>>>>>>>   [ 18.539062] Failed to load ipt action
>>>>>>>> * upload and download are reversed in aqm
>>>>>>>> * BCP38
>>>>>>>> * Squash CS values
>>>>>>>> * Replace ntp
>>>>>>>> * Make ahcp client mode
>>>>>>>> * Drop more privs for polipo
>>>>>>>> * upnp
>>>>>>>> * priv separation
>>>>>>>> * Review FW rules
>>>>>>>> * dhcpv6 support
>>>>>>>> * uci-defaults/make-cert.sh uses a bad path for px5g
>>>>>>>> * Doesn't configure the web browser either
>>>>>>>
>>>>>>> Best
>>>>>>> Sebastian
>>>>>>>
>>>>>>>> --
>>>>>>>> Dave Täht
>>>>>>>>
>>>>>>>> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel