From: Sebastian Moeller
Date: Mon, 15 Jul 2013 21:59:33 +0200
To: cerowrt-devel@lists.bufferbloat.net
Subject: [Cerowrt-devel] late report on 3.7.4-3 with ATM linklayer

Hi Dave, hi group,

So finally I am getting around to repeating a few tests with cerowrt on a rather typical ADSL line. Using simple_qos.sh did not result in the nice ping-time behavior under up- and downlink-saturating loads that I had seen with earlier versions (roughly 1.5 years ago, when I switched to a DOCSIS carrier), even though I had activated the script's provisions for ATM and my PPPoE overhead. Still, the occasional ping was delayed by 400+ milliseconds (with an unloaded ping time to the host of ~24 ms). The following two changes to the (old) simple_qos.sh script helped a lot, somehow bounding the worst-case ping at around 105 ms; still not great, but so much better than before... Anyway, here are the two additions.

Note: I have no Linux machine at home at the moment, so I do my measurements under Mac OS X, and I have not managed to get netperf-wrapper to work for me, so all data were recorded quite unscientifically (start a long ping train against the nearest ISP-side host that reliably responds to my ping probes and shows robust timing without load, then add a big, link-saturating file transfer, then open 99 browser tabs at once). Finally, my encapsulation adds up to a per-packet overhead of 40 bytes (in addition to the IP header and all the rest).

EGRESS_STAB_STRING="stab mtu 2048 tsize 128 overhead 26 linklayer atm"
INGRESS_STAB_STRING="stab mtu 2048 tsize 128 overhead 40 linklayer atm"

(NOTE: "mtu" here is not the interface MTU, but a stab-specific way of setting an upper limit on the size table to be built and used.)

And here is where I put them:

1) in egress():

tc qdisc add dev $IFACE root handle 1: ${EGRESS_STAB_STRING} htb ${RTQ} default 12

2) in ingress():

tc qdisc add dev $DEV root handle 1: ${INGRESS_STAB_STRING} htb ${RTQ} default 12
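(Just to illustrate what such a size table ends up encoding, here is a back-of-the-envelope sketch in shell; this is not part of simple_qos.sh or the kernel, only my understanding of it: with "linklayer atm" every packet plus the configured overhead gets carved into 48-byte ATM cell payloads, each of which costs 53 bytes on the wire.)

# Illustration only, not part of simple_qos.sh: the per-packet wire size
# that "linklayer atm overhead 40" makes the shaper assume.
atm_wire_size() {
	local len=$(( $1 + $2 ))            # IP packet size + per-packet overhead
	local cells=$(( (len + 47) / 48 ))  # round up to whole 48-byte cell payloads
	echo $(( cells * 53 ))              # each cell occupies 53 bytes on the link
}
# e.g. atm_wire_size 1500 40  ->  1749 bytes on the wire (33 cells), not 1540

Which is presumably why a shaper that believes the packet only costs 1540 bytes overestimates the achievable rate and lets the modem's buffer fill up.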
In addition I set:

UPLINK=2400 #2558
DOWNLINK=15582 #16402
DEV=ifb0
QDISC=nfq_codel # nfq_codel has higher overhead than fq_codel but does better on quantums. I hope.
IFACE=ge00
DEPTH=42
TC=/usr/sbin/tc
FLOWS=8000
PERTURB="perturb 0" # permutation is costly, disable
FLOWS=16000 #
BQL_MAX=3000 # it is important to factor this into the RED calc
CEIL=$UPLINK
MTU=1500
ADSLL=""
PPOE=""

(Using the same settings, but with empty [E|IN]GRESS_STAB_STRING variables and PPOE=1, yielded the quite miserable maximal ping times of > 400 ms.)

So my hunch is that the more general stab mechanism (which, to this layman, seems to work not by fudging the rate in the kernel, but by telling the kernel the actual on-the-wire size of each packet on the link layer, so the kernel shapes correctly) has taken less damage than HTB's internal mechanism of the same spirit. I will, once I get around to it, try to repeat these measurements with the most recent cerowrt alpha build. I encourage everyone on an ATM link to test the described modifications and report back success or failure.

Don't know whether your xDSL line uses ATM as its link layer, or don't know your overhead? Just let me know, I might be able to help :). Most VDSL lines hopefully use packet transfer mode (PTM), which did away with the ATM cell quantization voodoo, so only ADSL, ADSL2 and ADSL2+ users need to worry about the link layer. The overhead, however, might also be an issue with VDSL...

Best
	Sebastian
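P.S.: In case anybody wants to check for ATM framing themselves before asking: the usual trick is to sweep the ICMP payload size on an otherwise idle line and record the best-case RTT per size; on an ATM link the minimum RTT climbs in a staircase with 48-byte-wide steps instead of a smooth slope. A very rough sketch, purely illustrative and not from simple_qos.sh (the target host is a placeholder and should be the nearest ISP-side hop that reliably answers pings):

HOST=192.168.1.1   # placeholder: replace with the nearest reliably responding hop
size=16
while [ $size -le 500 ]; do
	# take the minimum of 10 probes per payload size to suppress noise
	# (this takes a while; a coarser step or fewer probes also works)
	min=$(ping -c 10 -s $size $HOST | awk -F' = ' '/min\/avg/ {split($2, rtt, "/"); print rtt[1]}')
	echo "$size $min"
	size=$(( size + 1 ))
done > ping_sweep.txt

Plot payload size against the minimum RTT: step edges every 48 bytes point to ATM, and where the first edge sits gives a hint about the per-packet overhead.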