[Cerowrt-devel] cerowrt 3.3.8-17 is released

Sebastian Moeller moeller0 at gmx.de
Mon Aug 20 14:13:25 EDT 2012


Hi Dave,

thanks to your patch/instructions below I managed to figure out that, in my case, netalyzr's reported upstream buffering depends on the size of the fq_codel limit:
fq_codel limit    reported upstream buffering    netalyzr flag
default (10k?)    2800ms                         (red)
10000             2800ms                         (red)
1200              1300ms                         (red)
600               580ms                          (yellow)
100               97ms                           (green)

(I run 3.3.8-17 on a 30000/4000 cable connection shaped to 29100/3880, i.e. 97% of line rate, with the default QOS system.)
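The near-linear scaling in the table is what a plain FIFO delay model would predict. A minimal sketch, assuming a per-packet drain time of roughly 1 ms (an assumed figure, suggested by the observed ~1 ms of buffering per packet of limit; the real value depends on netalyzr's UDP packet size and the shaped uplink rate):

```python
def queue_delay_ms(limit_packets, per_packet_ms=1.0):
    # Worst-case queueing delay of a completely full queue: every one
    # of limit_packets packets has to drain before a new arrival.
    # per_packet_ms is an assumption, not a measurement.
    return limit_packets * per_packet_ms

for limit in (100, 600, 1200):
    print(f"limit {limit:5d} -> ~{queue_delay_ms(limit):.0f} ms")
```

That tracks the measured 97/580/1300 ms reasonably well; the ~2800 ms ceiling at limit 10000 would then just mean the test ends before a queue that deep ever fills.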

Shaped to 90% of line rate, the numbers increase slightly:
10000             2900ms                         (red)
Shaped to 50% of line rate, the numbers increase massively at the default codel limit:
10000             3900ms                         (red)
1200              1300ms                         (red)

These values are pretty reliable, with no inter-run variability at all; or rather, it looks like netalyzr quantizes the reported values and thereby suppresses the variation.

With 3.3.8-17's simple_qos.sh at 97% of line rate I get:
600?              580ms                          (yellow)

With neither simple_qos.sh nor QOS active, after a reboot, I get:
NA                490ms                          (yellow)
NA                500ms                          (yellow)
(lower than with any QOS scheme down to an estimated limit of 500)


Now it looks like, at least in my setup, netalyzr is still able to fill the codel queue somehow (otherwise, why would the reported buffering scale with the fq_codel limit up to a ceiling?).
The fq_codel bin the UDP test lands in reports that it is dropping packets (this is with limit 1200):
class fq_codel 400:d7 parent 400: 
 (dropped 3097, overlimits 0 requeues 0) 
 backlog 1292522b 1199p requeues 0 
  deficit 29 count 1403 lastcount 1 ldelay 1.2s dropping drop_next 1.3ms
… and the delay seems spot on with 1.2s, or 1200ms. My best guess is that, in the short testing period, and given the non-responsiveness of the UDP test stream, fq_codel simply does not drop enough packets to make a noticeable dent in the queued-up packet load. Back-of-envelope calculation: it takes around 2.5 seconds to drop the first ~200 packets in a backlogged fq_codel flow; at netalyzr's ~1000 packets per second that leaves roughly 1000 * 2.5 - 200 = 2300 packets in the queue. Since netalyzr adapts its UDP sending rate somewhat to the available bandwidth, this might not be visible at typical DSL speeds…
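That back-of-envelope number can be checked against CoDel's control law: once dropping starts, the n-th drop follows the previous one after interval/sqrt(n), with the default interval of 100 ms. A minimal sketch, assuming the flow stays above target the whole time (so the drop count never resets) and a steady ~1000 packets/s send rate:

```python
import math

INTERVAL = 0.100      # CoDel default interval, in seconds
TEST_SECONDS = 2.5    # rough duration of the buffer-filling phase
SEND_RATE = 1000      # assumed netalyzr UDP packet rate, packets/s

t = 0.0
drops = 0
while True:
    # CoDel schedules the next drop interval/sqrt(count) after the last one
    next_gap = INTERVAL / math.sqrt(drops + 1)
    if t + next_gap > TEST_SECONDS:
        break
    t += next_gap
    drops += 1

leftover = SEND_RATE * TEST_SECONDS - drops
print(f"~{drops} drops in {TEST_SECONDS}s, ~{leftover:.0f} packets left queued")
```

That lands in the same ballpark as the ~200 drops / ~2300 leftover packets estimated above.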
	The nice thing about fq_codel is that other flows still stay responsive, which is pretty impressive.


QUESTION: how do I interpret the following tc output for an fq_codel flow:
class fq_codel 400:1b5 parent 400: 
 (dropped 3201, overlimits 0 requeues 0) 
 backlog 1292522b 1199p requeues 0 
  deficit -891 count 921 lastcount 1 ldelay 1.5s dropping drop_next -4.3ms

Why did drop_next turn negative?



best
	Sebastian



On Aug 15, 2012, at 9:58 PM, Dave Taht wrote:

> Firstly, fq_codel will always stay very flat relative to your workload
> for sparse streams such as ping, voip, dns, or gaming...
> 
> It's good stuff.
> 
> And I think the source of your 2.8 second thing is fq_codel's current
> reaction time, the non-responsiveness of the udp flooding netalyzr
> uses, and the huge default queue depth in openwrt's qos scripts.
> 
> Try this:
> 
> cero1 at snapon:~/src/Cerowrt-3.3.8/package/qos-scripts/files/usr/lib/qos$
> git diff tcrules.awk
> diff --git a/package/qos-scripts/files/usr/lib/qos/tcrules.awk
> b/package/qos-scripts/files/usr/lib/qos/tcrules.awk
> index a19b651..f3e0d3f 100644
> --- a/package/qos-scripts/files/usr/lib/qos/tcrules.awk
> +++ b/package/qos-scripts/files/usr/lib/qos/tcrules.awk
> @@ -79,7 +79,7 @@ END {
>        # leaf qdisc
>        avpkt = 1200
>        for (i = 1; i <= n; i++) {
> -               print "tc qdisc add dev "device" parent 1:"class[i]"0 handle "class[i]"00: fq_codel"
> +               print "tc qdisc add dev "device" parent 1:"class[i]"0 handle "class[i]"00: fq_codel limit 1200"
>        }
> 
>        # filter rule
> 
> 
> -- 
> Dave Täht
> http://www.bufferbloat.net/projects/cerowrt/wiki - "3.3.8-17 is out
> with fq_codel!"
