From: Sebastian Moeller
Date: Mon, 20 Aug 2012 11:13:25 -0700
To: Dave Taht
Cc: cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] cerowrt 3.3.8-17 is released

Hi Dave,

thanks to your patch/instructions below I managed to figure out that, in my case, netalyzr's upstream buffering number depends on the size of the fq_codel limit:

fq_codel limit    reported upstream buffering    netalyzr flag
default (10k?)    2800ms                         (red)
10000             2800ms                         (red)
1200              1300ms                         (red)
600               580ms                          (yellow)
100               97ms                           (green)

(I run 3.3.8-17 with 30000/4000 cable shaped to 29100/3880, 97% of line rate, with the default QOS system.)

With line rate at 90% the numbers increase slightly:
10000             2900ms                         (red)

With line rate at 50% the numbers increase massively at the default codel limit:
10000             3900ms                         (red)
1200              1300ms                         (red)

These values are pretty reliable (with no inter-run variability at all; or rather, it looks like netalyzr quantizes the reported values and suppresses the variation).

With 3.3.8-17's simple_qos.sh (at 97% line rate) I get:
600?              580ms                          (yellow)

With neither simple_qos.sh nor QOS active, after a reboot I get:
NA                490ms                          (yellow)
NA                500ms                          (yellow)
(lower than with any QOS scheme down to an estimated limit of 500)

Now it looks like, at least in my setup, netalyzr is still able to fill the codel queue somehow (otherwise why would the reported buffering scale with the fq_codel limit up to a ceiling?).

The fq_codel bin the UDP test lands in reports that it is dropping packets (this is with limit 1200):

class fq_codel 400:d7 parent 400:
 (dropped 3097, overlimits 0 requeues 0)
 backlog 1292522b 1199p requeues 0
  deficit 29 count 1403 lastcount 1 ldelay 1.2s dropping drop_next 1.3ms

… and the delay seems spot on at 1.2s or 1200ms. I have no better idea than that, in the short testing period, given the non-responsiveness of the UDP test stream, fq_codel simply does not drop enough packets to make a noticeable dent in the queued-up packet load. Back-of-envelope calculation: it takes around 2.5 seconds to drop the first ~200 packets in a backlogged fq_codel flow; at netalyzr's ~1000 packets per second rate that leaves roughly 1000 * 2.5 - 200 = 2300 packets in the queue.
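For what it's worth, the 2.5-second figure follows from CoDel's control law: once in the dropping state, successive drops are spaced interval/sqrt(count) apart, with interval defaulting to 100 ms. A minimal sketch of that arithmetic (the ~1000 pkt/s netalyzr rate is my estimate from above, not a measured constant):

```python
import math

INTERVAL = 0.100  # CoDel's default interval, in seconds

def time_to_drop(n_drops, interval=INTERVAL):
    """Approximate time for CoDel to drop n packets from a single
    unresponsive flow: the k-th drop follows the previous one by
    interval/sqrt(k), so the total is interval * sum(1/sqrt(k))."""
    return interval * sum(1.0 / math.sqrt(k) for k in range(1, n_drops + 1))

t = time_to_drop(200)   # ~2.7 s to drop the first 200 packets
rate = 1000             # assumed netalyzr flood rate, packets/s
queued = rate * t - 200 # packets still sitting in the queue after t
```

So the queue stays a couple of thousand packets deep for the whole test window, consistent with the dropped/backlog counters above.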
Since netalyzr will adapt its UDP creation rate somewhat to the available bandwidth, this might not be visible at typical DSL speeds…

The nice thing about fq_codel is that other flows still stay responsive, which is pretty impressive.

QUESTION: how do I interpret the following tc output for a fq_codel flow:

class fq_codel 400:1b5 parent 400:
 (dropped 3201, overlimits 0 requeues 0)
 backlog 1292522b 1199p requeues 0
  deficit -891 count 921 lastcount 1 ldelay 1.5s dropping drop_next -4.3ms

Why did drop_next turn negative?

best
	Sebastian

On Aug 15, 2012, at 9:58 PM, Dave Taht wrote:

> Firstly fq_codel will always stay very flat relative to your workload
> for sparse streams such as a ping, voip, dns or gaming...
>
> It's good stuff.
>
> And, I think the source of your 2.8 second thing is fq_codel's current
> reaction time, the non-responsiveness of the udp flooding netalyzr
> uses, and the huge default queue depth in openwrt's qos scripts.
>
> Try this:
>
> cero1@snapon:~/src/Cerowrt-3.3.8/package/qos-scripts/files/usr/lib/qos$
> git diff tcrules.awk
> diff --git a/package/qos-scripts/files/usr/lib/qos/tcrules.awk
> b/package/qos-scripts/files/usr/lib/qos/tcrules.awk
> index a19b651..f3e0d3f 100644
> --- a/package/qos-scripts/files/usr/lib/qos/tcrules.awk
> +++ b/package/qos-scripts/files/usr/lib/qos/tcrules.awk
> @@ -79,7 +79,7 @@ END {
>        # leaf qdisc
>        avpkt = 1200
>        for (i = 1; i <= n; i++) {
> -               print "tc qdisc add dev "device" parent 1:"class[i]"0 handle "class[i]"00: fq_codel"
> +               print "tc qdisc add dev "device" parent 1:"class[i]"0 handle "class[i]"00: fq_codel limit 1200"
>        }
>
> # filter rule
>
>
> --
> Dave Täht
> http://www.bufferbloat.net/projects/cerowrt/wiki - "3.3.8-17 is out
> with fq_codel!"
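P.S. on my own drop_next question, one plausible reading: tc prints drop_next relative to "now", and codel only acts at dequeue time. Once count has grown large, the scheduled gap between drops (interval/sqrt(count)) is tiny, so by the time a packet is actually dequeued the scheduled drop time can already lie in the past, and the printed offset goes negative. That interpretation of the sign is mine, not confirmed anywhere above; the arithmetic at the count from the output is easy to check (interval = 100 ms is codel's default):

```python
import math

INTERVAL = 0.100  # seconds; codel's default interval

def drop_gap(count, interval=INTERVAL):
    """Spacing codel schedules between successive drops while in the
    dropping state: interval / sqrt(count)."""
    return interval / math.sqrt(count)

gap = drop_gap(921)  # ~3.3 ms at count 921, as in the tc output above
```

At ~3.3 ms between scheduled drops, a dequeue arriving even a few milliseconds late leaves drop_next behind the clock, which would show up as the -4.3 ms in the output.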