From: Dave Taht
To: "Brian J. Murrell"
Cc: "codel@lists.bufferbloat.net"
Date: Tue, 24 Jun 2014 11:20:46 -0700
Subject: Re: [Codel] what am i doing wrong? why isn't codel working?
On Tue, Jun 24, 2014 at 10:29 AM, Brian J. Murrell wrote:
> On Tue, 2014-06-24 at 09:26 -0700, Dave Taht wrote:
>> Well, if your provided rate
>
> I assume this is the rate from my DSL modem to the ISP, which is
> ~672Kb/s (although it's supposed to be 800Kb/s).
>
>> is different from line rate on the up or
>> down,
>
> Which I assume is the rate between the OpenWRT router and the DSL modem?
> If so, that's GigE, or at least that's what ethtool reports. Given the
> age of the DSL modem, though, I am surprised it really is GigE. I'd
> think 100Mb/s at best. Still, much faster than my provisioned rate.
>
>> you need to apply a rate shaper to the interface first, not just
>> fq_codel alone. You have to make your device be the bottleneck in
>> order to have control of the queue.
>
> So is that to say that I'm probably queuing in the bloated buffers of
> the DSL modem and so need to feed the DSL modem at the rate it's feeding
> upstream so as not to buffer there?
>
>> openwrt has the qos-scripts and luci-app-qos packages
>
> OK. So having applied shaping to the upstream (and removing the default
> classifications that luci-app-qos applies, so everything is treated
> equally), things look much better. While saturating the uplink I got
> the following ping response times:
>
> 88 packets transmitted, 88 received, 0% packet loss, time 87116ms
> rtt min/avg/max/mdev = 11.924/25.266/96.005/10.823 ms
>
> Those scripts appear to be using hfsc. Is that better/worse than the
> htb you suggested?

openwrt uses hfsc *+ fq_codel*. :)

The sqm htb + fq_codel combination has explicit optimizations for DSL
links, compensating for ATM cell overhead. It also handles ipv6 traffic
better.
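For reference, the core of such a shaper boils down to roughly the
following raw tc commands (a sketch only, not what qos-scripts/sqm
actually emit -- the interface name, rate, and overhead value below are
placeholders you'd adjust for your own link):

```shell
# Sketch of an egress shaper for a ~672 kbit/s ADSL uplink.
# The "stab" size table makes the kernel account for ATM cell framing
# (48 payload bytes per 53-byte cell) plus per-packet encapsulation
# overhead, so the shaped rate matches what the line can really carry.
IFACE=pppoe-wan   # example interface name
UPLINK=600        # kbit/s; set somewhat below the provisioned rate

# Clear any existing root qdisc (ignore the error if there is none).
tc qdisc del dev $IFACE root 2>/dev/null

# HTB root shaped to just under the uplink rate, so the queue builds
# here, under fq_codel's control, instead of in the DSL modem.
tc qdisc add dev $IFACE root handle 1: stab linklayer atm overhead 40 \
    htb default 10
tc class add dev $IFACE parent 1: classid 1:10 htb \
    rate ${UPLINK}kbit ceil ${UPLINK}kbit

# fq_codel as the leaf, with a reduced quantum suited to slow links.
tc qdisc add dev $IFACE parent 1:10 handle 100: fq_codel \
    limit 600 quantum 300
```

The key point is making your router the bottleneck: once htb is slower
than the modem, packets queue where fq_codel can manage them.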
The reason for CeroWrt's using htb rather than hfsc is that hfsc has its
own unusual drop and scheduling mechanisms, and in order to evaluate
fq_codel more fully I chose htb to work with first (and htb turned out
to be broken in multiple ways, all fixed now). I still, even after all
this time, have no idea which is "better". HFSC is quite common in the
DSL/DSLAM world.

>> A simple example of how htb is used in the above
>>
>> http://wiki.gentoo.org/wiki/Traffic_shaping
>
> Interestingly, on my other WAN connection I don't see much bufferbloat.
> Given the following tc configuration:
>
> qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 0: dev eth0.2 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc mq 0: dev wlan1 root
> qdisc mq 0: dev wlan0 root
> qdisc hfsc 1: dev pppoe-wan1 root refcnt 2 default 30
> qdisc fq_codel 100: dev pppoe-wan1 parent 1:10 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 200: dev pppoe-wan1 parent 1:20 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 300: dev pppoe-wan1 parent 1:30 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 400: dev pppoe-wan1 parent 1:40 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc ingress ffff: dev pppoe-wan1 parent ffff:fff1 ----------------
> qdisc fq_codel 0: dev tun0 root refcnt 2 limit 1024p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc hfsc 1: dev ifb0 root refcnt 2 default 30
> qdisc fq_codel 100: dev ifb0 parent 1:10 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 200: dev ifb0 parent 1:20 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc fq_codel 300: dev ifb0 parent 1:30 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
> qdisc
fq_codel 400: dev ifb0 parent 1:40 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
>
> where the WAN interface is eth0.2 and has a provided rate of 55Mb/s down
> and 10Mb/s up, a ping while saturating the uplink:

I note that saturating an uplink or downlink takes significant time; I
generally run a 4-stream-up, 4-stream-down test for 60 seconds to be
sure I've won. Secondly, most of the low-end hardware starts having
trouble keeping up at 50+ Mbit with htb.

> 90 packets transmitted, 90 received, 0% packet loss, time 89118ms
> rtt min/avg/max/mdev = 6.308/23.977/65.538/13.880 ms

That variance is still quite high. Does your DSL line do ATM
encapsulation?

> Enabling the shaping on eth0.2 and repeating the test, the ping times
> are:
>
> 100 packets transmitted, 100 received, 0% packet loss, time 99117ms
> rtt min/avg/max/mdev = 4.915/10.744/27.284/2.215 ms
>
> So certainly more stable, but the non-shaped ping times were not
> horrible. Certainly not like on the choked-down pppoe-wan1 interface.
>
> In any case, it all looks good now. Makes sense about preventing
> queuing in the "on-site" equipment (DSL or cable modems) between my
> router and the provider, though. I guess if that equipment's buffers
> were not bloated I wouldn't need to do the shaping, is that right?

In the ideal case this code would migrate to the cmts/dslam and
cable/dsl modem. That's already in progress for docsis 3.1 (using
docsis-pie).

> Maybe the buffers in the cable modem are so bloated and that's why it
> doesn't suffer so badly?

You need to run tests for longer periods of time.

> Cheers,
> b.

-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article