[Codel] what am i doing wrong? why isn't codel working?

Dave Taht dave.taht at gmail.com
Tue Jun 24 14:20:46 EDT 2014


On Tue, Jun 24, 2014 at 10:29 AM, Brian J. Murrell
<brian at interlinx.bc.ca> wrote:
> On Tue, 2014-06-24 at 09:26 -0700, Dave Taht wrote:
>> Well, if your provided rate
>
> I assume this is the rate from my DSL modem to the ISP, which is
> ~672Kb/s (although it's supposed to be 800kb/s).
>
>> is different from line rate on the up or
>> down,
>
> Which I assume is the rate between the OpenWRT router and the DSL modem?
> If so, that's GigE, or at least that's what ethtool reports.  Given the
> age of the DSL modem though, I am surprised it really is GigE.  I'd
> think 100Mb/s at best.  Still, much faster than my provided rate.
>
>> you need to apply a rate shaper to the interface first, not just
>> fq_codel alone. You have to make your device be the bottleneck in
>> order to have control of the queue.
>
> So is that to say that I'm probably queuing in the bloated buffers of
> the DSL modem and so need to feed the DSL modem at the rate it's feeding
> upstream so as not to buffer there?
>
>> openwrt has the qos-scripts and luci-app-qos packages
>
> OK.  So having applied shaping to the upstream (and removing the default
> classifications that luci-app-qos applies so everything is treated
> equally), things look much better.  While saturating the uplink I got
> the following ping response times:
>
> 88 packets transmitted, 88 received, 0% packet loss, time 87116ms
> rtt min/avg/max/mdev = 11.924/25.266/96.005/10.823 ms
>
> Those scripts appear to be using hfsc.  Is that better/worse than the
> htb you suggested?

OpenWrt uses hfsc *+ fq_codel*. :)

sqm's htb + fq_codel has explicit optimizations for DSL links,
compensating for ATM cell overhead. It also handles IPv6 traffic
better.
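
If you want to experiment by hand, the core of that kind of egress
shaping looks roughly like this sketch (the interface name, rate, and
overhead value are placeholder assumptions; the correct overhead
depends on your exact encapsulation, and sqm gets those details right
for you):

  # Shape egress to a bit below the DSL sync rate so the modem never
  # queues, with a size table compensating for ATM cell framing.
  tc qdisc add dev pppoe-wan root handle 1: stab linklayer atm overhead 32 \
      htb default 10
  tc class add dev pppoe-wan parent 1: classid 1:10 htb rate 600kbit ceil 600kbit
  tc qdisc add dev pppoe-wan parent 1:10 fq_codel quantum 300

With ~672Kb/s provisioned, shaping to roughly 85-95% of that is
usually enough to keep the queue in your router instead of the modem.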

The reason for CeroWrt's using htb rather than hfsc is that hfsc has
its own unusual drop and scheduling mechanisms, and in order to
evaluate fq_codel more fully I chose htb to work with first (and HTB
turned out to be broken in multiple ways, all fixed now).

I still, even after all this time, have no idea which is "better".
HFSC is quite common in the DSL/DSLAM world.
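
For reference, the hfsc + fq_codel combination that qos-scripts sets
up boils down to something like this simplified sketch (the rates and
class ids here are assumptions, condensed from the multi-class tree
your tc output below shows):

  tc qdisc add dev pppoe-wan root handle 1: hfsc default 30
  tc class add dev pppoe-wan parent 1: classid 1:30 hfsc \
      sc rate 600kbit ul rate 600kbit
  tc qdisc add dev pppoe-wan parent 1:30 fq_codel quantum 300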

>> A simple example of how htb is used in the above
>>
>> http://wiki.gentoo.org/wiki/Traffic_shaping
>
> Interestingly, on my other WAN connection, I don't see much bufferbloat.
> Given the following tc configuration:
>
> qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 0: dev eth0.2 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc mq 0: dev wlan1 root
> qdisc mq 0: dev wlan0 root
> qdisc hfsc 1: dev pppoe-wan1 root refcnt 2 default 30
> qdisc fq_codel 100: dev pppoe-wan1 parent 1:10 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 200: dev pppoe-wan1 parent 1:20 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 300: dev pppoe-wan1 parent 1:30 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 400: dev pppoe-wan1 parent 1:40 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc ingress ffff: dev pppoe-wan1 parent ffff:fff1 ----------------
> qdisc fq_codel 0: dev tun0 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc hfsc 1: dev ifb0 root refcnt 2 default 30
> qdisc fq_codel 100: dev ifb0 parent 1:10 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 200: dev ifb0 parent 1:20 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 300: dev ifb0 parent 1:30 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 400: dev ifb0 parent 1:40 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
>
> where the WAN interface is eth0.2 and has a provided rate of 55Mb/s down
> and 10Mb/s up, here's a ping while saturating the uplink:

I note that saturating an uplink or downlink takes significant time; I
generally run a 4-stream-up, 4-stream-down test for 60 seconds to be
sure I've won.
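
Something along these lines does the job (the hostname is a
placeholder for whatever netperf server you can reach; netperf-wrapper's
rrul test automates a similar workload):

  # 4 uploads and 4 downloads for 60 seconds, watching latency throughout
  for i in 1 2 3 4; do
      netperf -H netperf.example.com -t TCP_STREAM -l 60 >/dev/null &
      netperf -H netperf.example.com -t TCP_MAERTS -l 60 >/dev/null &
  done
  ping -c 60 netperf.example.com
  wait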

Secondly, most low-end hardware starts having trouble keeping up at
50+Mbit with htb.
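
For reference, the ingress half of your listing (the ffff: qdisc plus
the hfsc tree on ifb0) works by redirecting inbound traffic through
the ifb device and shaping it there, roughly like this sketch (the
exact filter is an assumption about what qos-scripts generates):

  ip link set dev ifb0 up
  tc qdisc add dev pppoe-wan1 handle ffff: ingress
  tc filter add dev pppoe-wan1 parent ffff: protocol all u32 \
      match u32 0 0 action mirred egress redirect dev ifb0
  # then attach the shaper + fq_codel tree to ifb0, rated just under
  # the provisioned downstream speed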

> 90 packets transmitted, 90 received, 0% packet loss, time 89118ms
> rtt min/avg/max/mdev = 6.308/23.977/65.538/13.880 ms

That variance is still quite high. Does your DSL line do ATM encapsulation?

>
> Enabling the shaping on eth0.2 and repeating the test the ping times
> are:
>
> 100 packets transmitted, 100 received, 0% packet loss, time 99117ms
> rtt min/avg/max/mdev = 4.915/10.744/27.284/2.215 ms
>
> So certainly more stable, but the non-shaped ping times were not
> horrible.  Certainly not like on the choked down pppoe-wan1 interface.
>
> In any case, it all looks good now.  Makes sense about preventing
> queuing in the "on-site" equipment (DSL or cable modems) between my
> router and the provider though.  I guess if that equipment's buffers
> were not bloated I wouldn't need to do the shaping, is that right?

In the ideal case this code would migrate into the CMTS/DSLAM and the
cable/DSL modem itself.

That's already in progress for DOCSIS 3.1 (using docsis-pie).
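
If you want to play with the algorithm itself, recent kernels (3.14+)
ship a pie qdisc, so something like this works on a stock interface
(docsis-pie proper will live in the modem firmware, not here):

  tc qdisc add dev eth0 root pie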

> Maybe the buffers in the cable modem aren't so bloated and that's why it
> doesn't suffer so badly?

You need to run tests for longer periods of time.

>
> Cheers,
> b.
>



-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article


