Development issues regarding the cerowrt test router project
From: Sebastian Moeller <moeller0@gmx.de>
To: Jonathan Morton <chromatix99@gmail.com>
Cc: cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] archer c7 v2, policing, hostapd, test openwrt build
Date: Sun, 29 Mar 2015 13:16:26 +0200	[thread overview]
Message-ID: <F1D65A00-F766-4221-93CC-E22F5157B643@gmx.de> (raw)
In-Reply-To: <B32194B3-F830-4A71-A4F7-E388FE0C07E6@gmail.com>

Hi Jonathan,
On Mar 29, 2015, at 08:17, Jonathan Morton <chromatix99@gmail.com> wrote:

> 
>> On 29 Mar, 2015, at 04:14, Sebastian Moeller <moeller0@gmx.de> wrote:
>> 
>> I do not think my measurements show ingress handling via IFB to be so costly (< 5% bandwidth) that avoiding it will help much.
> 
>> Also, the current diffserv implementation costs around 5% bandwidth.
> 
> That’s useful information.  I may be able to calibrate that against similar tests on other hardware.
> 
> But presumably, if you remove the ingress shaping completely, it can then handle full line rate downstream?  What’s the comparable overhead figure for that?  

Without further ado:

moeller@happy-horse:~/CODE/CeroWrtScripts> ./betterspeedtest.sh -6 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4 ; ./netperfrunner.sh -6 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4 ; ./betterspeedtest.sh -4 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4 ; ./netperfrunner.sh -4 -H netperf-eu.bufferbloat.net -t 150 -p netperf-eu.bufferbloat.net -n 4
2015-03-29 09:49:00 Testing against netperf-eu.bufferbloat.net (ipv6) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
 Download:  91.68 Mbps
  Latency: (in msec, 150 pings, 0.00% packet loss)
      Min: 39.600 
    10pct: 44.300 
   Median: 52.800 
      Avg: 53.230 
    90pct: 60.400 
      Max: 98.700
.......................................................................................................................................................
   Upload:  34.72 Mbps
  Latency: (in msec, 151 pings, 0.00% packet loss)
      Min: 39.400 
    10pct: 39.700 
   Median: 40.200 
      Avg: 44.311 
    90pct: 43.600 
      Max: 103.000
2015-03-29 09:54:01 Testing netperf-eu.bufferbloat.net (ipv6) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
 Download:  91.03 Mbps
   Upload:  8.79 Mbps
  Latency: (in msec, 151 pings, 0.00% packet loss)
      Min: 40.200 
    10pct: 45.900 
   Median: 53.100 
      Avg: 53.019 
    90pct: 59.900 
      Max: 80.100
2015-03-29 09:56:32 Testing against netperf-eu.bufferbloat.net (ipv4) with 4 simultaneous sessions while pinging netperf-eu.bufferbloat.net (150 seconds in each direction)
......................................................................................................................................................
 Download:  93.48 Mbps
  Latency: (in msec, 150 pings, 0.00% packet loss)
      Min: 51.900 
    10pct: 56.600 
   Median: 60.800 
      Avg: 62.473 
    90pct: 69.800 
      Max: 87.900
.......................................................................................................................................................
   Upload:  35.23 Mbps
  Latency: (in msec, 151 pings, 0.00% packet loss)
      Min: 51.900 
    10pct: 52.200 
   Median: 52.600 
      Avg: 65.873 
    90pct: 108.000 
      Max: 116.000
2015-03-29 10:01:33 Testing netperf-eu.bufferbloat.net (ipv4) with 4 streams down and up while pinging netperf-eu.bufferbloat.net. Takes about 150 seconds.
 Download:  93.2 Mbps
   Upload:  5.69 Mbps
  Latency: (in msec, 152 pings, 0.00% packet loss)
      Min: 51.900 
    10pct: 56.100 
   Median: 60.400 
      Avg: 61.568 
    90pct: 67.200 
      Max: 93.100


Note that I shaped the connection upstream at 95%: 35844 of 37730 Kbps, and downstream at 90%: 98407 of 109341 Kbps. The line has 16 bytes of per-packet overhead and uses PPPoE, so the MTU is 1492 and the on-wire packet size is 1500 + 14 + 16 = 1530 bytes (I am not sure whether the 4-byte Ethernet frame check sequence is transmitted and needs accounting for, so I just left it out). TCP over IPv4 adds 40 bytes of header overhead, and TCP over IPv6 adds 60 bytes.
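As a quick sanity check, the shaped rates and the on-wire packet size used above follow from simple arithmetic; a minimal sketch with the constants copied from this mail (nothing here is measured or taken from the shaper itself):

```python
# Sync rates quoted above, in Kbps.
line_up_kbps = 37730
line_down_kbps = 109341

# Shaper set to 95% of upstream and 90% of downstream sync.
shaped_up = round(0.95 * line_up_kbps)      # -> 35844
shaped_down = round(0.90 * line_down_kbps)  # -> 98407

# Full-MTU on-wire size: 1500 IP + 14 Ethernet + 16 link overhead
# (the 4-byte FCS is deliberately left out, as noted above).
wire_bytes = 1500 + 14 + 16                 # -> 1530

print(shaped_up, shaped_down, wire_bytes)
```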

So without SQM I expect:
IPv6: 
Upstream:	(((1500 - 8 - 40 - 20) * 8) * (37730 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 35313.3 Kbps; measured: 34.72 Mbps
Downstream:	(((1500 - 8 - 40 -20) * 8) * (109341 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 102337.5 Kbps; measured: 91.68 Mbps (but this is known to be too optimistic, as DTAG currently subtracts ~7% for G.INP error correction at the BRAS level)

IPv4:
Upstream:	(((1500 - 8 - 20 -20) * 8) * (37730 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 35806.5  Kbps; measured: 35.23 Mbps
Downstream:	(((1500 - 8 - 20 - 20) * 8) * (109341 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 103766.8 Kbps; measured: 93.48 Mbps (but this is known to be too optimistic, as DTAG currently subtracts ~7% for G.INP error correction at the BRAS level)

So the upstream throughput comes pretty close, but downstream is off; due to the unknown G.INP “reservation”/BRAS throttle I do not have a good expectation of what this value should be.
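The goodput expectations above (and the SQM ones further down) all apply the same per-packet payload fraction to the line rate; a minimal sketch of that arithmetic, with the constants taken from this mail (nothing here is measured):

```python
# Expected TCP goodput for full-MTU packets on this PPPoE line.
# payload: IP MTU minus PPPoE header (8) and IP+TCP headers;
# wire: IP MTU plus Ethernet header (14) and assumed link overhead (16).
def expected_goodput_kbps(line_rate_kbps, ip_hdr, tcp_hdr=20,
                          mtu=1500, pppoe=8, eth=14, overhead=16):
    payload = mtu - pppoe - ip_hdr - tcp_hdr  # TCP payload bytes per packet
    wire = mtu + eth + overhead               # 1530 bytes on the wire
    return payload * line_rate_kbps / wire

# ip_hdr is 40 for IPv6, 20 for IPv4 (TCP adds another 20 in both cases).
print(round(expected_goodput_kbps(37730, 40), 1))   # IPv6 upstream   -> 35313.3
print(round(expected_goodput_kbps(37730, 20), 1))   # IPv4 upstream   -> 35806.5
print(round(expected_goodput_kbps(109341, 40), 1))  # IPv6 downstream -> 102337.5
print(round(expected_goodput_kbps(109341, 20), 1))  # IPv4 downstream -> 103766.8
```

Substituting the shaped rates (35844 and 98407 Kbps) for the sync rates reproduces the SQM expectations in the same way.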


And with SQM I expect:
IPv6 (simplest.qos): 
Upstream:	(((1500 - 8 - 40 -20) * 8) * (35844 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 33548.1 Kbps; measured: 32.73 Mbps (dual egress); 32.86 Mbps (IFB ingress)
Downstream:	(((1500 - 8 - 40 -20) * 8) * (98407 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 92103.8 Kbps; measured: 85.35 Mbps (dual egress); 82.76 Mbps (IFB ingress)

IPv4 (simplest.qos):
Upstream:	(((1500 - 8 - 20 -20) * 8) * (35844 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 34016.7 Kbps; measured: 33.45 Mbps (dual egress); 33.45 Mbps (IFB ingress)
Downstream:	(((1500 - 8 - 20 -20) * 8) * (98407 * 1000) / ((1500 + 14 + 16) * 8)) / 1000 = 93390.2 Kbps; measured: 86.52 Mbps (dual egress); 84.06 Mbps (IFB ingress)

So with our shaper we stay a bit short of the theoretical values, but the link was not totally quiet, so some shortfall relative to those values was to be expected.

> You see, if we were to use a policer instead of ingress shaping, we’d not only be getting IFB and ingress Diffserv mangling out of the way, but HTB as well.

	But we would still run HTB for egress, I assume, and the current results with policers that Dave hinted at do not look like good candidates for replacing shaping…

Best Regards
	Sebastian

> 
> - Jonathan Morton
> 

