[Cerowrt-devel] archer c7 v2, policing, hostapd, test openwrt build
Sebastian Moeller
moeller0 at gmx.de
Sat Mar 28 21:14:58 EDT 2015
Hi Jonathan,
TL;DR: I do not think my measurements show that ingress handling via IFB is all that costly (< 5% of bandwidth), or that avoiding it will help much. I do not think that conclusion will change much if more data is acquired (and I do not intend to collect more ;) ). Also, the current diffserv implementation costs around 5% of bandwidth. "Bandwidth cost" here means that the total bandwidth at full saturation is reduced by that amount (the total under load being ~90 Mbps up+down, while the sum of the unloaded maxima is ~115 Mbps). Latency under load does not seem to suffer significantly once the router runs out of CPU though, which is nice ;)
On Mar 24, 2015, at 09:13, Jonathan Morton <chromatix99 at gmail.com> wrote:
> What I'm seeing on your first tests is that double egress gives you slightly more download at the expense of slightly less upload throughout. The aggregate is higher.
>
> Your second set of tests tells me almost nothing, because it exercises the upload more and the download less. Hence why I'm asking for effectively the opposite test.
Since netperf-wrapper currently does not have rrul_down_only and rrul_up_only tests (and I have neither the time nor the skill to code them), I opted for Rich Brown's nice wrapper scripts from the CeroWrtScripts repository. These still give a decent report of the "fate" of the concurrent ICMP probe, just no fancy graphs or sparse UDP streams, but for our question this should be sufficient...
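(For reference, the two phases of betterspeedtest.sh boil down to roughly the following; this is a sketch from memory, not the actual script, and the exact flags are my assumption:

    # download phase: 4 concurrent TCP_MAERTS streams; upload phase: 4 x TCP_STREAM
    netperf -6 -H netperf-eu.bufferbloat.net -l 150 -t TCP_MAERTS
    netperf -6 -H netperf-eu.bufferbloat.net -l 150 -t TCP_STREAM
    # plus a concurrent ping6 against the same host for the latency numbers
    ping6 netperf-eu.bufferbloat.net

)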
Here is the result for dual egress with simplest.qos from a client connected to cerowrt’s se00:
simplest.qos: IPv6, download and upload sequentially:
moeller at happy-horse:~/CODE/CeroWrtScripts> ./betterspeedtest.sh -6 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4
2015-03-29 00:23:27 Testing against netperf-eu.bufferbloat.net (ipv6) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
Download: 85.35 Mbps
Latency: (in msec, 150 pings, 0.00% packet loss)
Min: 37.900
10pct: 38.600
Median: 40.100
Avg: 40.589
90pct: 43.600
Max: 47.500
......................................................................................................................................................
Upload: 32.73 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 37.300
10pct: 37.800
Median: 38.200
Avg: 38.513
90pct: 39.400
Max: 47.400
simplest.qos: IPv6, download and upload simultaneously:
moeller at happy-horse:~/CODE/CeroWrtScripts> ./netperfrunner.sh -6 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4
2015-03-29 00:30:40 Testing netperf-eu.bufferbloat.net (ipv6) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
Download: 81.42 Mbps
Upload: 9.33 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 37.500
10pct: 38.700
Median: 39.500
Avg: 39.997
90pct: 42.000
Max: 45.500
simplest.qos: IPv4, download and upload sequentially:
moeller at happy-horse:~/CODE/CeroWrtScripts> ./betterspeedtest.sh -4 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4 ; ./netperfrunner.sh -4 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4
2015-03-29 00:33:52 Testing against netperf-eu.bufferbloat.net (ipv4) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
Download: 86.52 Mbps
Latency: (in msec, 150 pings, 0.00% packet loss)
Min: 49.300
10pct: 50.300
Median: 51.400
Avg: 51.463
90pct: 52.700
Max: 54.500
......................................................................................................................................................
Upload: 33.45 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 49.300
10pct: 49.800
Median: 50.100
Avg: 50.161
90pct: 50.600
Max: 52.400
simplest.qos: IPv4, download and upload simultaneously:
2015-03-29 00:38:53 Testing netperf-eu.bufferbloat.net (ipv4) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
Download: 84.21 Mbps
Upload: 6.45 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 49.300
10pct: 50.000
Median: 51.100
Avg: 51.302
90pct: 52.700
Max: 56.100
The IPv6 route to Sweden is still 10 ms shorter than the IPv4 one, no idea why, but I am not complaining ;)
And again the same with simple.qos (in both directions) to assess the cost of our diffserv implementation:
simple.qos: IPv6, download and upload sequentially:
2015-03-29 00:44:06 Testing against netperf-eu.bufferbloat.net (ipv6) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
Download: 82.8 Mbps
Latency: (in msec, 150 pings, 0.00% packet loss)
Min: 37.600
10pct: 38.500
Median: 39.600
Avg: 40.068
90pct: 42.600
Max: 47.900
......................................................................................................................................................
Upload: 32.8 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 37.300
10pct: 37.700
Median: 38.100
Avg: 38.256
90pct: 38.700
Max: 43.400
Compared to simplest.qos we lose about 2.5 Mbps in the downlink, not too bad, and nothing in the uplink tests, but there the wndr3700v2 is not yet out of oomph...
simple.qos: IPv6, download and upload simultaneously:
2015-03-29 00:49:07 Testing netperf-eu.bufferbloat.net (ipv6) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
Download: 77.2 Mbps
Upload: 9.43 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 37.800
10pct: 38.500
Median: 39.500
Avg: 40.133
90pct: 42.200
Max: 51.500
But here, in the fully saturating case, we already "pay" 4 Mbps. Still not bad, but it looks like all the small changes add up (and I should try to find the time to look at all the tc filtering we do; see the sketch below for the kind of per-packet work I mean).
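(Schematically, the classification path is of this shape; the mark values and classids here are illustrative, not the exact simple.qos rules:

    # mark EF-marked packets in the mangle table...
    iptables -t mangle -A POSTROUTING -o pppoe-ge00 -m dscp --dscp-class EF -j MARK --set-mark 0x1
    # ...then steer them into the priority band with a fw filter
    tc filter add dev pppoe-ge00 parent 1:0 protocol ip prio 1 handle 0x1 fw classid 1:11

Every packet traverses these chains, so each extra rule is a little more per-packet CPU.)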
simple.qos: IPv4, download and upload sequentially:
2015-03-29 00:51:37 Testing against netperf-eu.bufferbloat.net (ipv4) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
Download: 84.28 Mbps
Latency: (in msec, 150 pings, 0.00% packet loss)
Min: 49.400
10pct: 50.100
Median: 51.100
Avg: 51.253
90pct: 52.500
Max: 54.900
......................................................................................................................................................
Upload: 33.42 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 49.400
10pct: 49.800
Median: 50.100
Avg: 50.170
90pct: 50.600
Max: 51.800
simple.qos: IPv4, download and upload simultaneously:
2015-03-29 00:56:38 Testing netperf-eu.bufferbloat.net (ipv4) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
Download: 81.08 Mbps
Upload: 6.73 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 49.300
10pct: 50.100
Median: 51.100
Avg: 51.234
90pct: 52.500
Max: 56.200
Again this hammers the real upload while leaving the download mostly intact.
And for fun/completeness' sake, the same for the standard setup with activated IFB and ingress shaping on pppoe-ge00 (see the sketch below for what that redirect looks like):
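(The IFB part is the usual ingress-to-IFB redirect; the real SQM scripts then build the shaper tree on the ifb device, this is just the skeleton:

    ip link set dev ifb0 up
    tc qdisc add dev pppoe-ge00 handle ffff: ingress
    tc filter add dev pppoe-ge00 parent ffff: protocol all u32 match u32 0 0 \
        action mirred egress redirect dev ifb0

So every ingress packet takes one extra trip through the stack, which is where the cost measured below comes from.)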
simplest.qos: IPv6, download and upload sequentially:
2015-03-29 01:18:13 Testing against netperf-eu.bufferbloat.net (ipv6) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
Download: 82.76 Mbps
Latency: (in msec, 150 pings, 0.00% packet loss)
Min: 37.800
10pct: 38.900
Median: 40.100
Avg: 40.590
90pct: 43.200
Max: 47.000
......................................................................................................................................................
Upload: 32.86 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 37.300
10pct: 37.700
Median: 38.100
Avg: 38.273
90pct: 38.700
Max: 43.200
So 85.35-82.76 = 2.59 Mbps cost for the IFB, comparable to the cost of our diffserv implementation.
simplest.qos: IPv6, download and upload simultaneously:
2015-03-29 01:23:14 Testing netperf-eu.bufferbloat.net (ipv6) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
Download: 78.61 Mbps
Upload: 10.53 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 37.700
10pct: 39.100
Median: 40.200
Avg: 40.509
90pct: 42.400
Max: 46.200
Weird: IFB still costs 81.42-78.61 = 2.81 Mbps in the downlink, but the uplink improved by 10.53-9.33 = 1.2 Mbps, reducing the net IFB cost to 1.61 Mbps...
simplest.qos: IPv4, download and upload sequentially:
2015-03-29 01:25:44 Testing against netperf-eu.bufferbloat.net (ipv4) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
Download: 84.06 Mbps
Latency: (in msec, 150 pings, 0.00% packet loss)
Min: 49.700
10pct: 50.500
Median: 51.900
Avg: 51.866
90pct: 53.100
Max: 55.400
......................................................................................................................................................
Upload: 33.45 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 49.500
10pct: 49.800
Median: 50.200
Avg: 50.219
90pct: 50.600
Max: 52.100
Again IFB usage costs us 86.52-84.06 = 2.46 Mbps...
simplest.qos: IPv4, download and upload simultaneously:
2015-03-29 01:30:45 Testing netperf-eu.bufferbloat.net (ipv4) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
Download: 78.97 Mbps
Upload: 8.14 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 49.300
10pct: 50.300
Median: 51.500
Avg: 51.906
90pct: 53.700
Max: 71.700
And again the download IFB cost is 84.21-78.97 = 5.24 Mbps, with an upload recovery of 8.14-6.45 = 1.69 Mbps, so the total IFB cost here is 5.24-1.69 = 3.55 Mbps, or (100*3.55/(78.97+8.14)) = 4.1%. Not great, but regaining that would certainly not be enough to qualify this hardware for higher bandwidth tiers. Max latency got noticeably worse, but up to the 90th percentile the increase is just a few milliseconds...
And for simple.qos with IFB-based ingress shaping:
simple.qos: IPv6, download and upload sequentially:
2015-03-29 01:49:36 Testing against netperf-eu.bufferbloat.net (ipv6) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
Download: 80.24 Mbps
Latency: (in msec, 150 pings, 0.00% packet loss)
Min: 37.700
10pct: 38.500
Median: 39.700
Avg: 40.285
90pct: 42.700
Max: 46.500
......................................................................................................................................................
Upload: 32.66 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 39.700
10pct: 40.300
Median: 40.600
Avg: 40.694
90pct: 41.200
Max: 43.200
IFB+diffserv cost: 85.35-80.24 = 5.11 Mbps, download only; the upload is not CPU bound and hence not affected...
simple.qos: IPv6, download and upload simultaneously:
2015-03-29 01:54:37 Testing netperf-eu.bufferbloat.net (ipv6) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
Download: 73.68 Mbps
Upload: 10.32 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 40.300
10pct: 41.300
Median: 42.400
Avg: 42.904
90pct: 45.500
Max: 50.800
IFB+diffserv cost = (81.42-73.68) + (9.33-10.32) = 6.75 Mbps, or 100*6.75/(73.68+10.32) = 8%. Still not enough, if regained, to qualify the wndr3700 for the nominal 100/40 Mbps tier I am testing, but enough to warrant looking at improving diffserv (or better yet, switching to cake?)
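(With cake the shaper, the diffserv classes, and the AQM would collapse into a single qdisc; as a sketch, assuming the out-of-tree cake module and its current keywords, with the shaped rates purely illustrative:

    tc qdisc replace dev pppoe-ge00 root cake bandwidth 36Mbit diffserv3
    tc qdisc replace dev ifb0 root cake bandwidth 95Mbit diffserv3

That would replace the whole HTB+fq_codel+filter tree and most of the tc filtering mentioned above.)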
simple.qos: IPv4, download and upload sequentially:
2015-03-29 01:57:07 Testing against netperf-eu.bufferbloat.net (ipv4) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
Download: 82.3 Mbps
Latency: (in msec, 150 pings, 0.00% packet loss)
Min: 51.900
10pct: 52.800
Median: 53.700
Avg: 53.922
90pct: 55.000
Max: 60.300
......................................................................................................................................................
Upload: 33.43 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 51.800
10pct: 52.300
Median: 52.600
Avg: 52.657
90pct: 53.000
Max: 54.200
IFB+diffserv cost: 86.52-82.3 = 4.22 Mbps, download only; the upload is not CPU bound and hence not affected...
simple.qos: IPv4, download and upload simultaneously:
2015-03-29 03:02:08 Testing netperf-eu.bufferbloat.net (ipv4) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
Download: 76.71 Mbps
Upload: 7.94 Mbps
Latency: (in msec, 151 pings, 0.00% packet loss)
Min: 51.700
10pct: 52.500
Median: 53.900
Avg: 54.145
90pct: 56.100
Max: 59.700
IFB+diffserv cost = (84.21-76.71) + (6.45-7.94) = 6.01 Mbps, or 100*6.01/(76.71+7.94) = 7.1% (using the IPv4 simplest.qos numbers as baseline).
> The aggregate is still significantly higher with double egress, though.
>
> The ping numbers also tell me that there's no significant latency penalty either way. Even when CPU saturated, it's still effectively controlling the latency better than leaving the pipe open.
Yes, that is a pretty nice degradation mode. Now if only the upload did not have to bear the brunt of the lacking CPU cycles…
Best Regards
Sebastian
>
> - Jonathan Morton