[Codel] usage of 'count' in codel

Dave Taht dave.taht at gmail.com
Sun May 6 14:41:56 EDT 2012


On Sun, May 6, 2012 at 11:25 AM, Simon Barber <simon at superduper.net> wrote:
> I was thinking the same thing (implement sfq+codel), I'm happy to refactor
> the codel code to be usable both standalone and with sfq, unless Eric is
> already doing it.

Don't know if he's had time yet... been a busy week!

> Another option would be pfifo_fast + codel - perhaps a
> candidate for the default qdisc?

PFIFO_FAST must die. :)

It has starvation issues with classification: its three bands are
dequeued in strict priority order, so traffic classified into band 0
can starve the other bands indefinitely.

It's easy to simulate the net effect of fq by using qfq. I have a
hacked-up version of my debloat script running; it's handy, and you
can get it from my deBloat repo on github.

(I will check in the modified version on Monday.)

QMODEL=codel_qfq BINS=4 IFACE=eth0 QDEBUG=1 ./debloat

This is an example of a 4-bin qfq with 60 netperfs running, using
the as-yet-unpublished current mod to codel (u32s for time rather
than ktime, and the count decrease changed to halving (/2) rather
than decrementing (-1)).
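
For the curious, the u32 time part is nothing exotic: it's the
kernel's usual wraparound-safe comparison idiom, with timestamps
kept in 1024ns units (as Eric describes below) so converting from
ns is a shift rather than a divide. A sketch, not the exact patch:

#include <linux/types.h>   /* u32, s32 */
#include <linux/ktime.h>   /* ktime_get(), ktime_to_ns() */

typedef u32 codel_time_t;          /* timestamps in 1024ns units */

/* true if a is after b, tolerant of u32 wraparound; the same
 * idiom as the kernel's time_after() */
#define codel_time_after(a, b)   ((s32)((a) - (b)) > 0)
#define codel_time_before(a, b)  codel_time_after(b, a)

/* current time in 1024ns units: a shift instead of a divide */
static inline codel_time_t codel_get_time(void)
{
        return ktime_to_ns(ktime_get()) >> 10;
}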

Ordinarily you wouldn't run with so few bins.

But codel holds latency on each queue below 16ms in this instance.
However, if you run it with tons of queues, it's rare to see count
go above 1 (which is OK, and perhaps an additive decrease would be
better...).
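
To spell out the additive idea (purely hypothetical, nobody has run
this, and k is an invented tunable):

/* on re-entering the dropping state, decay the stored count
 * additively instead of halving it; k = 2 is an arbitrary
 * example value */
u32 k = 2;
u32 c = (q->count > k) ? (q->count - k) : 1;
q->count = max(1U, c);

Since codel spaces successive drops interval/sqrt(count) apart, the
decay policy directly sets how fast the drop rate relaxes once a
queue comes back under control.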

Having tons of fun here.

qdisc del dev eth0 root
qdisc add dev eth0 handle a root qfq
class add dev eth0 parent a classid a:5 qfq
qdisc add dev eth0 parent a:5 codel
class add dev eth0 parent a classid a:6 qfq
qdisc add dev eth0 parent a:6 codel
filter add dev eth0 parent a: protocol all prio 999 u32 match ip protocol 0 0x00 flowid a:6
filter add dev eth0 parent a: protocol ip prio 5 u32 match u8 0x01 0x01 at -14 flowid a:5
filter add dev eth0 parent a: protocol ipv6 prio 6 u32 match u8 0x01 0x01 at -14 flowid a:5
filter add dev eth0 parent a: protocol arp prio 7 u32 match u8 0x01 0x01 at -14 flowid a:5
class add dev eth0 parent a: classid a:0 qfq
qdisc add dev eth0 parent a:0 codel
class add dev eth0 parent a: classid a:1 qfq
qdisc add dev eth0 parent a:1 codel
class add dev eth0 parent a: classid a:2 qfq
qdisc add dev eth0 parent a:2 codel
class add dev eth0 parent a: classid a:3 qfq
qdisc add dev eth0 parent a:3 codel
class add dev eth0 parent a: classid a:4 qfq
qdisc add dev eth0 parent a:4 codel
filter add dev eth0 parent a: handle 3 protocol all prio 97 flow hash keys proto-dst,rxhash divisor 4
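
To decode the filters above: the prio 5/6/7 rules match the
multicast bit of the ethernet destination address (that's what the
u8 0x01 0x01 at -14 matches do, if I've read them right), steering
multicast/broadcast into a:5; the prio 999 rule is a catch-all into
a:6; and the prio 97 flow filter hashes everything else across the
4 codel'd bins.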


root@snapon:~/git/deBloat/src# $TC -s qdisc show dev eth0
qdisc qfq a: root refcnt 2
 Sent 716611181 bytes 474425 pkt (dropped 0, overlimits 0 requeues 226318)
 backlog 0b 133p requeues 226318
qdisc codel 8008: parent a:5 limit 1000p minbytes 1514 target 5.0ms interval 100.0ms
 Sent 6016 bytes 54 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  count 1 delay 0us
qdisc codel 8009: parent a:6 limit 1000p minbytes 1514 target 5.0ms interval 100.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  count 1 delay 0us
qdisc codel 800a: parent a: limit 1000p minbytes 1514 target 5.0ms interval 100.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  count 1 delay 0us
qdisc codel 800b: parent a:1 limit 1000p minbytes 1514 target 5.0ms interval 100.0ms
 Sent 179535592 bytes 118898 pkt (dropped 16386, overlimits 0 requeues 0)
 backlog 68130b 46p requeues 0
  count 1288 delay 18.3ms drop_next 1.6ms
qdisc codel 800c: parent a:2 limit 1000p minbytes 1514 target 5.0ms interval 100.0ms
 Sent 179027305 bytes 118482 pkt (dropped 12351, overlimits 0 requeues 0)
 backlog 40878b 28p requeues 0
  count 810 delay 9.8ms drop_next 1.1ms
qdisc codel 800d: parent a:3 limit 1000p minbytes 1514 target 5.0ms interval 100.0ms
 Sent 179037733 bytes 118485 pkt (dropped 5963, overlimits 0 requeues 0)
 backlog 27252b 19p requeues 0
  count 161 delay 8.6ms drop_next 4.7ms
qdisc codel 800e: parent a:4 limit 1000p minbytes 1514 target 5.0ms interval 100.0ms
 Sent 179010591 bytes 118510 pkt (dropped 14502, overlimits 0 requeues 0)
 backlog 57532b 39p requeues 0
  count 643 delay 16.8ms drop_next 226us
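
As a sanity check on those numbers: codel schedules successive
drops interval/sqrt(count) apart, so with interval at 100ms a count
of 1288 means a drop roughly every 100/sqrt(1288) ~= 2.8ms, and a
count of 161 one roughly every 7.9ms, which lines up with the small
drop_next values above.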

On the same test with 1024 bins, utilization is good (and pings stay flat):

root@snapon:~/git/deBloat/src# $TC -s qdisc show dev eth0 | grep count | grep -v 'delay 0us'
  count 1 delay 34.8ms drop_next 63.0ms
  count 1 delay 7.1ms drop_next 31.1ms
  count 1 delay 24.0ms
  count 1 delay 34.1ms
  count 1 delay 8.4ms drop_next 56.6ms
  count 1 delay 19.6ms
  count 1 delay 22.1ms
  count 1 delay 19.4ms drop_next 19.2ms
  count 1 delay 21.1ms
  count 1 delay 23.3ms
  count 1 delay 6.4ms drop_next 51.9ms
  count 1 delay 23.0ms
  count 1 delay 26.1ms
  count 1 delay 8.7ms drop_next 95.7ms
  count 1 delay 25.0ms
  count 1 delay 26.9ms
  count 1 delay 26.7ms
  count 1 delay 14.0ms drop_next 79.3ms
  count 1 delay 26.6ms
  count 1 delay 21.6ms
  count 1 delay 37.8ms
  count 1 delay 2.1ms drop_next 56.3ms
  count 1 delay 17.6ms drop_next 20.7ms
  count 1 delay 20.6ms drop_next 18.0ms
  count 1 delay 11.9ms drop_next 26.4ms
  count 1 delay 15.0ms drop_next 63.9ms
  count 1 delay 24.3ms drop_next 14.3ms
  count 1 delay 15.9ms drop_next 78.6ms
  count 1 delay 20.7ms
  count 1 delay 17.2ms drop_next 80.2ms
  count 1 delay 8.8ms drop_next 90.4ms
  count 1 delay 19.4ms drop_next 99.4ms
  count 1 delay 27.5ms drop_next 68.7ms
  count 1 delay 8.7ms drop_next 49.3ms
  count 1 delay 5.5ms drop_next 32.5ms
  count 1 delay 19.6ms
  count 1 delay 11.7ms drop_next 46.1ms
  count 1 delay 16.0ms drop_next 22.3ms
  count 1 delay 29.7ms
  count 1 delay 25.0ms drop_next 94.6ms
  count 1 delay 21.5ms drop_next 97.6ms
  count 1 delay 28.5ms
  count 1 delay 12.1ms drop_next 25.9ms
  count 1 delay 17.9ms drop_next 36.7ms
  count 1 delay 16.6ms drop_next 21.4ms
  count 1 delay 15.4ms drop_next 42.2ms
  count 1 delay 25.4ms drop_next 73.6ms
  count 1 delay 25.4ms
  count 1 delay 25.5ms drop_next 93.9ms
  count 1 delay 20.7ms drop_next 77.5ms
  count 1 delay 26.8ms
  count 1 delay 30.5ms
  count 1 delay 25.5ms
  count 1 delay 5.2ms drop_next 94.1ms
  count 1 delay 30.8ms drop_next 23.0ms
  count 1 delay 17.0ms drop_next 21.1ms
  count 1 delay 21.5ms
  count 1 delay 4.9ms drop_next 94.3ms
  count 1 delay 15.5ms drop_next 62.7ms
  count 1 delay 19.2ms
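
Note that with count pinned at 1 the control law spaces drops a
full interval apart (interval/sqrt(1) = 100ms), which matches the
drop_next figures above topping out just shy of 100ms.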

>
> Simon
>
>
> On 05/06/2012 10:58 AM, dave taht wrote:
>>
>> On 05/06/2012 10:20 AM, Jim Gettys wrote:
>>>
>>> On 05/06/2012 01:08 PM, Eric Dumazet wrote:
>>>>
>>>> On Sun, 2012-05-06 at 16:51 +0200, Eric Dumazet wrote:
>>>>>
>>>>> On Sun, 2012-05-06 at 07:46 +0200, Eric Dumazet wrote:
>>>>>>
>>>>>> With 60 netperfs
>>>>>>
>>>>>> Using :
>>>>>> c = min(q->count - 1, q->count - (q->count>>4));
>>>>>> q->count = max(1U, c);
>>>>>>
>>>>>> keeps count below 60000
>>>>>>
>>>>>> qdisc codel 10: dev eth9 parent 1:1 limit 1000p minbytes 1514 target 5.0ms interval 100.0ms
>>>>>> Sent 12813046129 bytes 8469317 pkt (dropped 575186, overlimits 0 requeues 0)
>>>>>> rate 202366Kbit 16708pps backlog 160484b 106p requeues 0
>>>>>> count 29048 delay 5.7ms drop_next 564us states 8334438 : 7880765 574565 621
>>>>>>
>>>>>>
>>>>> I rewrote the time management to use usec resolution instead of nsec,
>>>>> and store it in a u32 field for practical reasons (I would like to add
>>>>> codel on SFQ, so need to replicate the codel object XXX times in
>>>>> memory)
>>>>>
>>>>> More exactly, I use 1024 nsec units (to avoid divides by 1000).
>>>>>
>>>>> And it seems I no longer have crazy q->count.
>>>>>
>>>>> Either there was a problem with the big/fat time comparisons, or
>>>>> using a u32 triggers a wraparound every 4sec that cleans the thing up.
>>>>>
>>>>> More to come
>>>>
>>>> No, it doesn't work if I have non-responsive flows (UDP messages).
>>>>
>>>> My queue fills, delays are high...
>>>>
>>>> The time-in-queue idea is good (we discussed it already), but
>>>> unlike RED, codel is unable to force an upper limit (aggressive
>>>> drops if delays are way too high).
>>>
>>> You are presuming that codel is the only thing running, but that's
>>> not a good presumption. There may be (and is expected to be) other
>>> fair queuing/classification going on at the same time.
>>
>> I think codel is a very viable backend to an fq algo, and it solves
>> the UDP flood problem handily.
>>
>> And I figure Eric is already halfway through ripping RED out of
>> SFQ and adding codel in its place. :)
>>
>> I've run it against qfq with good results, but more on that later.
>>
>>> An unreactive UDP flow can always cause trouble. Dropping packets can
>>> only be useful if the end points notice and do something about it.
>>
>> Yes. UDP floods are a problem in drop-tail too.
>>>
>>> I sort of think that having some upper queue time bound makes sense, if
>>> only to ensure that TCP's quadratic (un)responsiveness never gets out of
>>> hand.
>>
>> Well, we had to put in an upper packet limit anyway.
>>>
>>> But we agreed that having an implementation we could play with to
>>> observe reality would likely be better than hand-waving with no
>>> experience about how long the queue should be allowed to grow.
>>
>> yes.
>>
>> And it really is awesome in all the versions so far. What Eric did
>> this morning is LOVELY.
>>
>>>
>>> And can we *please* get this discussion back on the mailing list?
>>
>>
>> Well, we were afraid we'd found something grievously wrong with
>> codel rather than with our code, or had missed a 4th state in the
>> machine.
>>
>> All the same, count can grow out of control and something like this
>> seems to help, although it's not as smooth on the downside...
>>
>> /*
>>  * if min went above target close to when we last went below it
>>  * assume that the drop rate that controlled the queue on the
>>  * last cycle is a good starting point to control it now.
>>  */
>> if (codel_time_after(now - q->drop_next, 16 * q->interval)) {
>>         // u32 c = min(q->count - 1, q->count - (q->count >> 4));
>>         u32 c = q->count >> 1;
>>         q->count = max(1U, c);
>> }
>>
>>
>> I figure Eric will post to the mailing list before he crashes.
>>
>>> - Jim
>>>
>>>
>>>
>>>
>>



-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://www.bufferbloat.net


