[Cerowrt-devel] archer c7 v2, policing, hostapd, test openwrt build

Sebastian Moeller moeller0 at gmx.de
Sun Mar 29 07:16:26 EDT 2015


Hi Jonathan
On Mar 29, 2015, at 08:17 , Jonathan Morton <chromatix99 at gmail.com> wrote:

> 
>> On 29 Mar, 2015, at 04:14, Sebastian Moeller <moeller0 at gmx.de> wrote:
>> 
>> I do not think my measurements show that ingress handling via IFB is so costly (< 5% bandwidth) that avoiding it will help much.
> 
>> The current diffserv implementation also costs around 5% bandwidth.
> 
> That’s useful information.  I may be able to calibrate that against similar tests on other hardware.
> 
> But presumably, if you remove the ingress shaping completely, it can then handle full line rate downstream?  What’s the comparable overhead figure for that?  

Without further ado:

moeller at happy-horse:~/CODE/CeroWrtScripts> ./betterspeedtest.sh -6 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4 ; ./netperfrunner.sh -6 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4 ; ./betterspeedtest.sh -4 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4 ; ./netperfrunner.sh -4 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4
2015-03-29 09:49:00 Testing against netperf-eu.bufferbloat.net (ipv6) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
 Download:  91.68 Mbps
  Latency: (in msec, 150 pings, 0.00% packet loss)
      Min: 39.600 
    10pct: 44.300 
   Median: 52.800 
      Avg: 53.230 
    90pct: 60.400 
      Max: 98.700
.......................................................................................................................................................
   Upload:  34.72 Mbps
  Latency: (in msec, 151 pings, 0.00% packet loss)
      Min: 39.400 
    10pct: 39.700 
   Median: 40.200 
      Avg: 44.311 
    90pct: 43.600 
      Max: 103.000
2015-03-29 09:54:01 Testing netperf-eu.bufferbloat.net (ipv6) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
 Download:  91.03 Mbps
   Upload:  8.79 Mbps
  Latency: (in msec, 151 pings, 0.00% packet loss)
      Min: 40.200 
    10pct: 45.900 
   Median: 53.100 
      Avg: 53.019 
    90pct: 59.900 
      Max: 80.100
2015-03-29 09:56:32 Testing against netperf-eu.bufferbloat.net (ipv4) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
 Download:  93.48 Mbps
  Latency: (in msec, 150 pings, 0.00% packet loss)
      Min: 51.900 
    10pct: 56.600 
   Median: 60.800 
      Avg: 62.473 
    90pct: 69.800 
      Max: 87.900
.......................................................................................................................................................
   Upload:  35.23 Mbps
  Latency: (in msec, 151 pings, 0.00% packet loss)
      Min: 51.900 
    10pct: 52.200 
   Median: 52.600 
      Avg: 65.873 
    90pct: 108.000 
      Max: 116.000
2015-03-29 10:01:33 Testing netperf-eu.bufferbloat.net (ipv4) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
 Download:  93.2 Mbps
   Upload:  5.69 Mbps
  Latency: (in msec, 152 pings, 0.00% packet loss)
      Min: 51.900 
    10pct: 56.100 
   Median: 60.400 
      Avg: 61.568 
    90pct: 67.200 
      Max: 93.100


Note that I shaped the connection to 95% upstream (35844 of 37730 Kbps) and 90% downstream (98407 of 109341 Kbps). The line carries 16 bytes of per-packet overhead and uses PPPoE, so the MTU is 1492 and the full on-wire packet size is 1500 (= 1492 + 8 bytes PPPoE) + 14 (ethernet) + 16 = 1530 bytes (I am not sure whether the 4-byte ethernet frame check sequence is transmitted and needs accounting for, so I simply left it out). On top of that, TCP over IPv4 adds 40 bytes of header overhead per packet and TCP over IPv6 adds 60 bytes.
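
(To make the arithmetic below explicit, the expectation can be written as a small shell helper; this is just a sketch of the formula, the function name is made up and it is not part of CeroWrtScripts:)

# Expected TCP goodput in Kbps for this link, given the gross (or shaped)
# rate in Kbps and the IP header size (20 for IPv4, 40 for IPv6).
# Per-packet payload:   1500 - 8 (PPPoE) - IP header - 20 (TCP)
# Per-packet wire size: 1500 + 14 (ethernet) + 16 (link overhead) = 1530 bytes
goodput_kbps() {
    awk -v gross="$1" -v iphdr="$2" 'BEGIN {
        payload = 1500 - 8 - iphdr - 20
        wire    = 1500 + 14 + 16
        printf "%.1f\n", gross * payload / wire
    }'
}

goodput_kbps 109341 40   # unshaped downstream, IPv6 -> 102337.5
goodput_kbps 37730  20   # unshaped upstream,   IPv4 -> 35806.5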

So without SQM I expect:
IPv6: 
Upstream:	(((1500 - 8 - 40 - 20) * 8) * (37730 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 35313.3 Kbps; measured: 34.72 Mbps
Downstream:	(((1500 - 8 - 40 - 20) * 8) * (109341 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 102337.5 Kbps; measured: 91.68 Mbps (but this is known to be too optimistic, as DTAG currently subtracts ~7% for G.INP error correction at the BRAS level)

IPv4:
Upstream:	(((1500 - 8 - 20 - 20) * 8) * (37730 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 35806.5 Kbps; measured: 35.23 Mbps
Downstream:	(((1500 - 8 - 20 - 20) * 8) * (109341 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 103766.8 Kbps; measured: 93.48 Mbps (but this is known to be too optimistic, as DTAG currently subtracts ~7% for G.INP error correction at the BRAS level)

So the upstream throughput comes pretty close to the expectation, while downstream falls short; but due to the unknown G.INP “reservation”/BRAS throttle I do not have a good expectation of what the downstream value should be.


And with SQM I expect:
IPv6 (simplest.qos): 
Upstream:	(((1500 - 8 - 40 - 20) * 8) * (35844 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 33548.1 Kbps; measured: 32.73 Mbps (dual egress); 32.86 Mbps (IFB ingress)
Downstream:	(((1500 - 8 - 40 - 20) * 8) * (98407 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 92103.8 Kbps; measured: 85.35 Mbps (dual egress); 82.76 Mbps (IFB ingress)

IPv4 (simplest.qos):
Upstream:	(((1500 - 8 - 20 - 20) * 8) * (35844 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 34016.7 Kbps; measured: 33.45 Mbps (dual egress); 33.45 Mbps (IFB ingress)
Downstream:	(((1500 - 8 - 20 - 20) * 8) * (98407 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 93390.2 Kbps; measured: 86.52 Mbps (dual egress); 84.06 Mbps (IFB ingress)

So with our shaper we stay a bit short of the theoretical values, but the link was not totally quiet, so some background load probably accounts for part of the shortfall.
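
(For reference, the same helper with the shaped rates reproduces the expectations above:)

goodput_kbps 98407 40   # shaped downstream, IPv6 -> 92103.8
goodput_kbps 35844 20   # shaped upstream,   IPv4 -> 34016.7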

> You see, if we were to use a policer instead of ingress shaping, we’d not only be getting IFB and ingress Diffserv mangling out of the way, but HTB as well.

	But we would still run HTB for egress, I assume, and the policer results Dave hinted at so far do not make policing look like a good candidate for replacing shaping…
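
	(For comparison, here is roughly what the two ingress approaches look like in tc terms; the interface name and most details are made up for illustration, and this is not what SQM actually configures, it is just meant to show what a policer would remove from the ingress path:)

# Option 1, policer: hard-drop above the rate directly on the ingress hook;
# no IFB, no HTB, no queueing. The burst size becomes the critical tuning knob.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
    police rate 98407kbit burst 100k drop flowid :1

# Option 2, what we do today: redirect ingress traffic to an IFB device and
# run a real shaper (HTB) plus fq_codel on it. Only one of the two options
# would be attached to the interface at a time.
ip link add name ifb0 type ifb
ip link set dev ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 98407kbit
tc qdisc add dev ifb0 parent 1:10 fq_codel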

Best Regards
	Sebastian

> 
> - Jonathan Morton
> 



