[Cerowrt-devel] Fwd: Re: CeroWrt 3.10.24-8 badly bloated?

Dave Taht dave.taht at gmail.com
Fri Dec 27 20:14:45 EST 2013


I note that nfq_codel (which is closer to sfq in outlook than fq_codel,
which is more like drr) might be better in Fred's case. But I'd hope that for
his bandwidth and workload (mostly movies) the larger target will compensate
for the problems he'd had. But I don't know, so...

In the interest of science...

I would certainly like a few days of subjective, non-benchmark testing of
fq_codel vs pie from more people. Certainly I can "feel" a difference between
fq_codel and pie (either is massively better than current cable modems) in
videoconferencing, telephony, movies, games, web browsing...

Anybody else willing to spend a week on pie? It is what is mandated in
docsis 3.1 (probably with a 10ms target).
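
For anyone who wants to do the swap by hand rather than through the gui, a
rough sketch (the ge00/1:11-1:13 layout is taken from Fred's dump below;
whether this tc build accepts time units for pie's parameters is an
assumption, otherwise give them in microseconds):

  # swap the three htb leaves on ge00 over to pie with a docsis-3.1-style target
  for i in 1 2 3; do
      tc qdisc replace dev ge00 parent 1:1$i handle 1${i}0: pie limit 1000 target 10ms tupdate 30ms
  done
  # or just select pie in sqm's Queue Discipline tab and restart it:
  /etc/init.d/sqm restart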
 On Dec 27, 2013 4:07 PM, "Sebastian Moeller" <moeller0 at gmx.de> wrote:

> Hi Fred,
>
>
> On Dec 27, 2013, at 21:20 , Fred Stratton <fredstratton at imap.cc> wrote:
>
> > I guessed input into Dangerous options was required, but was unsure of
> the syntax.  Quite how /etc/config/sqm is read is another matter to be
> addressed over ssh.
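> > (Presumably something like
> >     cat /etc/config/sqm
> >     uci show sqm
> > over ssh will show what the gui actually wrote, though that is a guess on
> > my part.)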
> >
> > I shall paste all in one email response, for you to rearrange at the
> other end. Both use simple.qos.
> >
> > With pie:
> >
> > tc -d qdisc
> > qdisc fq_codel a: dev se00 root refcnt 2 limit 1000p flows 1024 quantum
> 1000 target 5.0ms interval 100.0ms ecn
> > qdisc htb 1: dev ge00 root refcnt 2 r2q 10 default 12
> direct_packets_stat 6 ver 3.17
> >  linklayer atm overhead 18 mtu 2047 tsize 128
> > qdisc pie 110: dev ge00 parent 1:11 limit 1000p target 25000 tupdate
> 30000 alpha 2 beta 20
> > qdisc pie 120: dev ge00 parent 1:12 limit 1000p target 25000 tupdate
> 30000 alpha 2 beta 20
> > qdisc pie 130: dev ge00 parent 1:13 limit 1000p target 25000 tupdate
> 30000 alpha 2 beta 20
> > qdisc ingress ffff: dev ge00 parent ffff:fff1 ----------------
> > qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 12
> direct_packets_stat 0 ver 3.17
> >  linklayer atm overhead 18 mtu 2047 tsize 128
> > qdisc pie 110: dev ifb0 parent 1:11 limit 1000p target 20000 tupdate
> 30000 alpha 2 beta 20 ecn
> > qdisc pie 120: dev ifb0 parent 1:12 limit 1000p target 20000 tupdate
> 30000 alpha 2 beta 20 ecn
> > qdisc pie 130: dev ifb0 parent 1:13 limit 1000p target 20000 tupdate
> 30000 alpha 2 beta 20 ecn
> > qdisc mq 1: dev sw10 root
> > qdisc fq_codel 10: dev sw10 parent 1:1 limit 800p flows 1024 quantum 500
> target 10.0ms interval 100.0ms
> > qdisc fq_codel 20: dev sw10 parent 1:2 limit 800p flows 1024 quantum 300
> target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 30: dev sw10 parent 1:3 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 40: dev sw10 parent 1:4 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms
> > qdisc mq 1: dev sw00 root
> > qdisc fq_codel 10: dev sw00 parent 1:1 limit 800p flows 1024 quantum 500
> target 10.0ms interval 100.0ms
> > qdisc fq_codel 20: dev sw00 parent 1:2 limit 800p flows 1024 quantum 300
> target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 30: dev sw00 parent 1:3 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 40: dev sw00 parent 1:4 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms
> > qdisc mq 1: dev gw00 root
> > qdisc fq_codel 10: dev gw00 parent 1:1 limit 800p flows 1024 quantum 500
> target 10.0ms interval 100.0ms
> > qdisc fq_codel 20: dev gw00 parent 1:2 limit 800p flows 1024 quantum 300
> target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 30: dev gw00 parent 1:3 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 40: dev gw00 parent 1:4 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms
> > qdisc mq 1: dev gw10 root
> > qdisc fq_codel 10: dev gw10 parent 1:1 limit 800p flows 1024 quantum 500
> target 10.0ms interval 100.0ms
> > qdisc fq_codel 20: dev gw10 parent 1:2 limit 800p flows 1024 quantum 300
> target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 30: dev gw10 parent 1:3 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 40: dev gw10 parent 1:4 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms
> > qdisc fq_codel a: dev pppoe-ge00 root refcnt 2 limit 1000p flows 1024
> quantum 1000 target 5.0ms interval 100.0ms ecn
> >
> >
> > tc class show dev ge00
> > class htb 1:11 parent 1:1 leaf 110: prio 1 rate 128000bit ceil 316000bit
> burst 1600b cburst 1599b
> > class htb 1:1 root rate 950000bit ceil 950000bit burst 1599b cburst 1599b
> > class htb 1:10 parent 1:1 prio 0 rate 950000bit ceil 950000bit burst
> 1599b cburst 1599b
> > class htb 1:13 parent 1:1 leaf 130: prio 3 rate 158000bit ceil 934000bit
> burst 1599b cburst 1599b
> > class htb 1:12 parent 1:1 leaf 120: prio 2 rate 158000bit ceil 934000bit
> burst 1599b cburst 1599b
> >
> > with fq_codel, and 25ms target:
> >
> > tc -d qdisc
> > qdisc fq_codel a: dev se00 root refcnt 2 limit 1000p flows 1024 quantum
> 1000 target 5.0ms interval 100.0ms ecn
> > qdisc htb 1: dev ge00 root refcnt 2 r2q 10 default 12
> direct_packets_stat 9 ver 3.17
> >  linklayer atm overhead 18 mtu 2047 tsize 128
> > qdisc fq_codel 110: dev ge00 parent 1:11 limit 1000p flows 1024 quantum
> 300 target 25.0ms interval 100.0ms
> > qdisc fq_codel 120: dev ge00 parent 1:12 limit 1000p flows 1024 quantum
> 300 target 25.0ms interval 100.0ms
> > qdisc fq_codel 130: dev ge00 parent 1:13 limit 1000p flows 1024 quantum
> 300 target 25.0ms interval 100.0ms
>
>         Great, so this worked: all three fq_codel leaves on ge00 (egress)
> now have a target of 25ms, just as the Doctor (aka Dave) ordered :). Does
> this help bring fq_codel's subjectively judged performance up to par with
> pie?
>
> Best
>         Sebastian
>
>
>
> > qdisc ingress ffff: dev ge00 parent ffff:fff1 ----------------
> > qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 12
> direct_packets_stat 0 ver 3.17
> >  linklayer atm overhead 18 mtu 2047 tsize 128
> > qdisc fq_codel 110: dev ifb0 parent 1:11 limit 1000p flows 1024 quantum
> 500 target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 120: dev ifb0 parent 1:12 limit 1000p flows 1024 quantum
> 1500 target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 130: dev ifb0 parent 1:13 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms ecn
> > qdisc mq 1: dev sw00 root
> > qdisc fq_codel 10: dev sw00 parent 1:1 limit 800p flows 1024 quantum 500
> target 10.0ms interval 100.0ms
> > qdisc fq_codel 20: dev sw00 parent 1:2 limit 800p flows 1024 quantum 300
> target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 30: dev sw00 parent 1:3 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 40: dev sw00 parent 1:4 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms
> > qdisc mq 1: dev sw10 root
> > qdisc fq_codel 10: dev sw10 parent 1:1 limit 800p flows 1024 quantum 500
> target 10.0ms interval 100.0ms
> > qdisc fq_codel 20: dev sw10 parent 1:2 limit 800p flows 1024 quantum 300
> target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 30: dev sw10 parent 1:3 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 40: dev sw10 parent 1:4 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms
> > qdisc mq 1: dev gw00 root
> > qdisc fq_codel 10: dev gw00 parent 1:1 limit 800p flows 1024 quantum 500
> target 10.0ms interval 100.0ms
> > qdisc fq_codel 20: dev gw00 parent 1:2 limit 800p flows 1024 quantum 300
> target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 30: dev gw00 parent 1:3 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 40: dev gw00 parent 1:4 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms
> > qdisc mq 1: dev gw10 root
> > qdisc fq_codel 10: dev gw10 parent 1:1 limit 800p flows 1024 quantum 500
> target 10.0ms interval 100.0ms
> > qdisc fq_codel 20: dev gw10 parent 1:2 limit 800p flows 1024 quantum 300
> target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 30: dev gw10 parent 1:3 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms ecn
> > qdisc fq_codel 40: dev gw10 parent 1:4 limit 1000p flows 1024 quantum
> 300 target 5.0ms interval 100.0ms
> > qdisc fq_codel a: dev pppoe-ge00 root refcnt 2 limit 1000p flows 1024
> quantum 1000 target 5.0ms interval 100.0ms ecn
> >
> >
> > tc class show dev ge00
> > class htb 1:11 parent 1:1 leaf 110: prio 1 rate 128000bit ceil 316000bit
> burst 1600b cburst 1599b
> > class htb 1:1 root rate 950000bit ceil 950000bit burst 1599b cburst 1599b
> > class htb 1:10 parent 1:1 prio 0 rate 950000bit ceil 950000bit burst
> 1599b cburst 1599b
> > class htb 1:13 parent 1:1 leaf 130: prio 3 rate 158000bit ceil 934000bit
> burst 1599b cburst 1599b
> > class htb 1:12 parent 1:1 leaf 120: prio 2 rate 158000bit ceil 934000bit
> burst 1599b cburst 1599b
> > class fq_codel 110:2c9 parent 110:
> > class fq_codel 120:1a5 parent 120:
> > class fq_codel 120:31f parent 120:
> >
> >
> >
> > On 27/12/13 19:49, Dave Taht wrote:
> >>
> >> Pie has a default latency target of 20ms, fq_codel 5ms. (But the fq_codel
> >> target matters less, as the target only applies to queue-building flows.)
> >>
> >> A packet takes 13ms to transit the device at 1mbit.
> >>
> >> There is a change to fq_codel in this release that should make fiddling
> >> with the target at low speeds less necessary. (But it might have other
> >> problems.) Still, a comparison at roughly the same target against pie in
> >> your environment would be very interesting.
> >>
> >> I suggested 25ms as a test (as pie never makes 20ms anyway)
> >>
> >> I came close to inserting a simple formula to start increasing the
> target below 4mbit in this release.
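> >>
> >> To spell out the arithmetic: a full 1500 byte packet at 1Mbit takes
> >> 1500*8/1000000 = 12ms just to serialize, and with atm framing plus the 18
> >> bytes of overhead in Fred's dump it is more like 13-14ms, so a 5ms target
> >> can never be met down there. (Note that pie prints its target in
> >> microseconds, so the 25000/20000 in the dump are 25ms and 20ms.) A formula
> >> of that sort, sketched here with made-up constants (not what shipped),
> >> might be:
> >>
> >>   RATE_KBIT=800       # shaped egress rate
> >>   MTU_BYTES=1540      # mtu plus an assumed per-packet overhead
> >>   SER_US=$(( MTU_BYTES * 8 * 1000 / RATE_KBIT ))  # ~15400us at 800kbit
> >>   TARGET_US=$(( SER_US * 3 / 2 ))                  # headroom above one packet
> >>   [ "$TARGET_US" -lt 5000 ] && TARGET_US=5000      # never below the 5ms default
> >>   echo "target ${TARGET_US}us"                     # -> target 23100us at 800kbit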
> >>
> >> On Dec 27, 2013 11:25 AM, "Sebastian Moeller" <moeller0 at gmx.de> wrote:
> >> >
> >> > Hi Fred,
> >> >
> >> > You could try putting "target 25ms" (without the quotes) into the
> >> > advanced egress options field in the "Queue Discipline" tab, which is
> >> > exposed after checking "Show Dangerous Configuration". I would love to
> >> > hear whether that worked or not (I am not able to test anything myself).
> >> > Maybe posting the output of "tc -d qdisc" and "tc class show dev ge00"
> >> > would help. Good luck…
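> >> >
> >> > If I remember simple.qos right, that string just gets appended to the
> >> > egress leaf qdiscs, so each of the three fq_codel leaves on ge00 should
> >> > end up created by something roughly equivalent to (a sketch; exact
> >> > handles and quantum may differ):
> >> >
> >> >   tc qdisc add dev ge00 parent 1:11 handle 110: fq_codel limit 1000 quantum 300 target 25ms interval 100ms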
> >> >
> >> >
> >> > Best Regards
> >> >         Sebastian
> >> >
> >> >
> >> > On Dec 27, 2013, at 20:20 , Fred Stratton <fredstratton at imap.cc>
> wrote:
> >> >
> >> > > I have been using pie for approximately 3 weeks.
> >> > >
> >> > > You are correct, in that the outbound speed is about 800 - 900 kb/s.
> >> > >
> >> > > I shall try what you suggest, but do not know how to express the
> target of 25 ms as a configuration option.
> >> > >
> >> > >
> >> > > On 27/12/13 19:15, Dave Taht wrote:
> >> > >> Dear Fred: are you sticking with pie? I was going to suggest you
> >> > >> try fq_codel with a target of 25ms on your outbound. (You are at
> >> > >> 800kbit or so, as best I recall?)
> >> > >>
> >> > >> On Dec 27, 2013 11:10 AM, "Fred Stratton" <fredstratton at imap.cc>
> wrote:
> >> > >> I upgraded to 3.10.24-8 on 2013-12-23.
> >> > >>
> >> > >> I modified /etc/fixdaemons, adding
> >> > >> /etc/init.d/sqm restart
> >> > >>
> >> > >> input the appropriate sqm settings, transcribed from aqm
> >> > >>
> >> > >> rebooted
> >> > >>
> >> > >> and the build works very well. For ADSL2+ here, it is the best so
> far.
> >> > >>
> >> > >>
> >> > >> On 27/12/13 18:55, Dave Taht wrote:
> >> > >>> A race condition appears to have crept in...
> >> > >>>
> >> > >>> ---------- Forwarded message ----------
> >> > >>> From: "Dave Taht" <dave.taht at gmail.com>
> >> > >>> Date: Dec 27, 2013 10:47 AM
> >> > >>> Subject: Re: [Cerowrt-devel] CeroWrt 3.10.24-8 badly bloated?
> >> > >>> To: "Richard E. Brown" <richb.hanover at gmail.com>
> >> > >>> Cc:
> >> > >>>
> >> > >>> Probably didn't start sqm properly
> >> > >>>
> >> > >>> Restart it by hand via /etc/init.d/sqm restart
> >> > >>>
> >> > >>> tc -s qdisc show dev ge00
> >> > >>>
> >> > >>> It should show htb and fq_codel.
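> >> > >>>
> >> > >>> Roughly (a sketch, exact handles and fields will differ), the two
> >> > >>> cases look like:
> >> > >>>
> >> > >>>   # sqm running: the htb shaper with fq_codel leaves
> >> > >>>   qdisc htb 1: dev ge00 root refcnt 2 r2q 10 default 12
> >> > >>>   qdisc fq_codel 110: dev ge00 parent 1:11 ...
> >> > >>>   # sqm not started: most likely just a bare default root qdisc
> >> > >>>   qdisc fq_codel a: dev ge00 root refcnt 2 ...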
> >> > >>>
> >> > >>> On Dec 27, 2013 10:36 AM, "Rich Brown" <richb.hanover at gmail.com>
> wrote:
> >> > >>> So I screwed up my courage and replaced my 3.10.18-? firmware in
> >> > >>> my primary router with 3.10.24-8. That version had worked well as a
> >> > >>> secondary, so I figured, What the heck… Let’s give it a try.
> >> > >>>
> >> > >>> The result was not pretty. I set my link speeds in the SQM page,
> >> > >>> chose the defaults for the Queue Discipline tab, and set the link
> >> > >>> layer to ATM with no additional overhead for my DSL link.
> >> > >>>
> >> > >>> Ping times to google are normally ~51-54 msec. But when I fired
> up speedtest.net, they jumped to 1500-2500 msec. Is there something I
> should look at before reverting? Thanks.
> >> > >>>
> >> > >>> Rich
> >> > >>
> >> > >
> >> >
> >
>
>