From: Sebastian Moeller
Date: Sun, 29 Mar 2015 13:16:26 +0200
To: Jonathan Morton
Cc: cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] archer c7 v2, policing, hostapd, test openwrt build

Hi Jonathan,

On Mar 29, 2015, at 08:17, Jonathan Morton wrote:

>> On 29 Mar, 2015, at 04:14, Sebastian Moeller wrote:
>> 
>> I do not think my measurements show that ingress handling via IFB is so costly (< 5% bandwidth) that avoiding it will help much.
> 
>> The current diffserv implementation also costs around 5% bandwidth.
> 
> That's useful information. I may be able to calibrate that against similar tests on other hardware.
> 
> But presumably, if you remove the ingress shaping completely, it can then handle full line rate downstream? What's the comparable overhead figure for that?

	Without further ado:

moeller@happy-horse:~/CODE/CeroWrtScripts> ./betterspeedtest.sh -6 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4 ; ./netperfrunner.sh -6 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4 ; ./betterspeedtest.sh -4 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4 ; ./netperfrunner.sh -4 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4

2015-03-29 09:49:00 Testing against netperf-eu.bufferbloat.net (ipv6) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
 Download: 91.68 Mbps
  Latency: (in msec, 150 pings, 0.00% packet loss)
      Min: 39.600
    10pct: 44.300
   Median: 52.800
      Avg: 53.230
    90pct: 60.400
      Max: 98.700
   Upload: 34.72 Mbps
  Latency: (in msec, 151 pings, 0.00% packet loss)
      Min: 39.400
    10pct: 39.700
   Median: 40.200
      Avg: 44.311
    90pct: 43.600
      Max: 103.000

2015-03-29 09:54:01 Testing netperf-eu.bufferbloat.net (ipv6) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
 Download: 91.03 Mbps
   Upload: 8.79 Mbps
  Latency: (in msec, 151 pings, 0.00% packet loss)
      Min: 40.200
    10pct: 45.900
   Median: 53.100
      Avg: 53.019
    90pct: 59.900
      Max: 80.100

2015-03-29 09:56:32 Testing against netperf-eu.bufferbloat.net (ipv4) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
 Download: 93.48 Mbps
  Latency: (in msec, 150 pings, 0.00% packet loss)
      Min: 51.900
    10pct: 56.600
   Median: 60.800
      Avg: 62.473
    90pct: 69.800
      Max: 87.900
   Upload: 35.23 Mbps
  Latency: (in msec, 151 pings, 0.00% packet loss)
      Min: 51.900
    10pct: 52.200
   Median: 52.600
      Avg: 65.873
    90pct: 108.000
      Max: 116.000

2015-03-29 10:01:33 Testing netperf-eu.bufferbloat.net (ipv4) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
 Download: 93.2 Mbps
   Upload: 5.69 Mbps
  Latency: (in msec, 152 pings, 0.00% packet loss)
      Min: 51.900
    10pct: 56.100
   Median: 60.400
      Avg: 61.568
    90pct: 67.200
      Max: 93.100

	Note that I shaped the connection at 95% upstream (35844 of 37730 Kbps) and 90% downstream (98407 of 109341 Kbps). The line carries 16 bytes of per-packet overhead and uses PPPoE, so the MTU is 1492 and the on-the-wire packet size is 1500 + 14 + 16 = 1530 bytes (I am not sure whether the 4-byte ethernet frame check sequence is transmitted and needs accounting for, so I simply left it out). TCP over IPv4 adds 40 bytes of header overhead per packet, TCP over IPv6 adds 60 bytes.

So without SQM I expect:

IPv6:
	Upstream:   (((1500 - 8 - 40 - 20) * 8) * (37730 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 35313.3 Kbps; measured: 34.72 Mbps
	Downstream: (((1500 - 8 - 40 - 20) * 8) * (109341 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 102337.5 Kbps; measured: 91.68 Mbps (but this is known to be too optimistic, as DTAG currently subtracts ~7% for G.INP error correction at the BRAS level)

IPv4:
	Upstream:   (((1500 - 8 - 20 - 20) * 8) * (37730 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 35806.5 Kbps; measured: 35.23 Mbps
	Downstream: (((1500 - 8 - 20 - 20) * 8) * (109341 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 103766.8 Kbps; measured: 93.48 Mbps (again too optimistic, for the same G.INP reason)

	So the upstream throughput comes pretty close, but downstream is off; however, given the unknown G.INP "reservation"/BRAS throttle I do not have a good expectation of what this value should be.
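	For reference, the arithmetic above can be wrapped in a tiny helper; this is just a sketch (the function name is mine, it is not part of CeroWrtScripts), assuming a PPPoE link (8 bytes), a 14 byte ethernet header, the 16 bytes of per-packet overlay and 1500 byte packets:

	expected_goodput() {
	    # $1: gross rate in Kbps (sync or shaper rate)
	    # $2: IP + TCP header bytes (40 for IPv4, 60 for IPv6)
	    awk -v rate="$1" -v hdr="$2" 'BEGIN {
	        payload = 1500 - 8 - hdr    # TCP payload per packet (8 = PPPoE header)
	        wire    = 1500 + 14 + 16    # on-the-wire size incl. ethernet + overlay
	        printf "%.1f Kbps\n", rate * payload / wire
	    }'
	}

	expected_goodput 109341 40    # unshaped downstream, IPv4 -> 103766.8 Kbps
	expected_goodput 98407 60     # shaped downstream, IPv6   -> 92103.8 Kbps

	The same call with 37730 or 35844 Kbps gives the upstream expectations.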
And with SQM I expect:

IPv6 (simplest.qos):
	Upstream:   (((1500 - 8 - 40 - 20) * 8) * (35844 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 33548.1 Kbps; measured: 32.73 Mbps (dual egress); 32.86 Mbps (IFB ingress)
	Downstream: (((1500 - 8 - 40 - 20) * 8) * (98407 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 92103.8 Kbps; measured: 85.35 Mbps (dual egress); 82.76 Mbps (IFB ingress)

IPv4 (simplest.qos):
	Upstream:   (((1500 - 8 - 20 - 20) * 8) * (35844 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 34016.7 Kbps; measured: 33.45 Mbps (dual egress); 33.45 Mbps (IFB ingress)
	Downstream: (((1500 - 8 - 20 - 20) * 8) * (98407 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 93390.2 Kbps; measured: 86.52 Mbps (dual egress); 84.06 Mbps (IFB ingress)

	So with our shaper we stay a bit short of the theoretical values, but the link was not totally quiet, so I expect some shortfall compared to the theoretical values.

> You see, if we were to use a policer instead of ingress shaping, we'd not only be getting IFB and ingress Diffserv mangling out of the way, but HTB as well.

	But we would still run HTB for egress, I assume, and the current results with policers that Dave hinted at do not look like good candidates for replacing shaping…

Best Regards
	Sebastian

> 
> - Jonathan Morton