[Make-wifi-fast] less latency, more filling... for wifi
Bob McMahon
bob.mcmahon at broadcom.com
Fri Oct 13 21:46:23 EDT 2017
FYI, here's example output using two Brix PCs over their wired ethernet,
connected by a Cisco SG300 switch. Note: the ptpd stats (not shown)
display the clock corrections and suggest they are within 10
microseconds.
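The latency column in the server output below is a one-way number: the sender
stamps each datagram with its send time and the receiver subtracts that stamp
from its own receive time, so it is only meaningful when the two clocks are
synchronized (here via ptpd). A minimal sketch of the idea in Python
(illustrative only, not iperf's actual payload layout):

    import struct, time

    FMT = "!iII"  # hypothetical layout: signed sequence number, send secs, send usecs

    def stamp(seq):
        # sender side: embed the wall-clock send time in the datagram
        now = time.time()
        return struct.pack(FMT, seq, int(now), int((now % 1) * 1_000_000))

    def one_way_ms(payload):
        # receiver side: one-way delay = receive time - embedded send time
        seq, secs, usecs = struct.unpack(FMT, payload[:struct.calcsize(FMT)])
        return seq, (time.time() - (secs + usecs / 1e6)) * 1000.0  # needs synced clocks

With unsynchronized clocks the result can even go slightly negative, which is
why the min values below hover around zero.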
[rjmcmahon at rjm-fedora etc]$ iperf -c 192.168.100.33 -u -e -t 2 -b 2kpps
------------------------------------------------------------
Client connecting to 192.168.100.33, UDP port 5001 with pid 29952
Sending 1470 byte datagrams, IPG target: 500.00 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.100.67 port 50062 connected with 192.168.100.33 port 5001
[ ID] Interval Transfer Bandwidth PPS
[ 3] 0.00-2.00 sec 5.61 MBytes 23.5 Mbits/sec 1999 pps
[ 3] Sent 4001 datagrams
[ 3] Server Report:
[ 3] 0.00-2.00 sec 5.61 MBytes 23.5 Mbits/sec 0.012 ms 0/ 4001 (0%) -/-/-/- ms 2000 pps
[rjmcmahon at hera ~]$ iperf -s -u -e -i 0.1
------------------------------------------------------------
Server listening on UDP port 5001 with pid 5178
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.100.33 port 5001 connected with 192.168.100.67 port 57325
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Latency avg/min/max/stdev PPS
[ 3] 0.00-0.10 sec 289 KBytes 23.6 Mbits/sec 0.013 ms 0/ 201 (0%) 0.142/ 0.018/ 0.192/ 0.025 ms 2005 pps
[ 3] 0.10-0.20 sec 287 KBytes 23.5 Mbits/sec 0.020 ms 0/ 200 (0%) 0.157/ 0.101/ 0.207/ 0.015 ms 1999 pps
[ 3] 0.20-0.30 sec 287 KBytes 23.5 Mbits/sec 0.014 ms 0/ 200 (0%) 0.155/ 0.071/ 0.212/ 0.018 ms 2002 pps
[ 3] 0.30-0.40 sec 287 KBytes 23.5 Mbits/sec 0.018 ms 0/ 200 (0%) 0.146/-0.007/ 0.187/ 0.018 ms 1999 pps
[ 3] 0.40-0.50 sec 287 KBytes 23.5 Mbits/sec 0.021 ms 0/ 200 (0%) 0.151/ 0.021/ 0.208/ 0.018 ms 2000 pps
[ 3] 0.50-0.60 sec 287 KBytes 23.5 Mbits/sec 0.016 ms 0/ 200 (0%) 0.148/ 0.043/ 0.192/ 0.018 ms 2000 pps
[ 3] 0.60-0.70 sec 287 KBytes 23.5 Mbits/sec 0.019 ms 0/ 200 (0%) 0.152/ 0.041/ 0.199/ 0.018 ms 2001 pps
[ 3] 0.70-0.80 sec 287 KBytes 23.5 Mbits/sec 0.016 ms 0/ 200 (0%) 0.144/ 0.071/ 0.206/ 0.017 ms 2001 pps
[ 3] 0.80-0.90 sec 287 KBytes 23.5 Mbits/sec 0.015 ms 0/ 200 (0%) 0.140/ 0.111/ 0.186/ 0.014 ms 1999 pps
[ 3] 0.90-1.00 sec 287 KBytes 23.5 Mbits/sec 0.022 ms 0/ 200 (0%) 0.154/ 0.111/ 0.222/ 0.019 ms 2000 pps
[ 3] 1.00-1.10 sec 287 KBytes 23.5 Mbits/sec 0.014 ms 0/ 200 (0%) 0.152/ 0.036/ 0.197/ 0.017 ms 2000 pps
[ 3] 1.10-1.20 sec 287 KBytes 23.5 Mbits/sec 0.015 ms 0/ 200 (0%) 0.153/-0.007/ 0.186/ 0.020 ms 2001 pps
[ 3] 1.20-1.30 sec 287 KBytes 23.5 Mbits/sec 0.013 ms 0/ 200 (0%) 0.149/ 0.035/ 0.207/ 0.018 ms 2000 pps
[ 3] 1.30-1.40 sec 287 KBytes 23.5 Mbits/sec 0.014 ms 0/ 200 (0%) 0.160/ 0.116/ 0.233/ 0.018 ms 2000 pps
[ 3] 1.40-1.50 sec 287 KBytes 23.5 Mbits/sec 0.014 ms 0/ 200 (0%) 0.159/ 0.122/ 0.207/ 0.015 ms 2000 pps
[ 3] 1.50-1.60 sec 287 KBytes 23.5 Mbits/sec 0.016 ms 0/ 200 (0%) 0.158/ 0.066/ 0.201/ 0.015 ms 1999 pps
[ 3] 1.60-1.70 sec 287 KBytes 23.5 Mbits/sec 0.017 ms 0/ 200 (0%) 0.162/ 0.076/ 0.203/ 0.016 ms 2000 pps
[ 3] 1.70-1.80 sec 287 KBytes 23.5 Mbits/sec 0.014 ms 0/ 200 (0%) 0.154/ 0.073/ 0.195/ 0.016 ms 2002 pps
[ 3] 1.80-1.90 sec 287 KBytes 23.5 Mbits/sec 0.015 ms 0/ 200 (0%) 0.154/ 0.113/ 0.213/ 0.017 ms 1999 pps
[ 3] 1.90-2.00 sec 287 KBytes 23.5 Mbits/sec 0.019 ms 0/ 200 (0%) 0.152/ 0.124/ 0.208/ 0.016 ms 1999 pps
[ 3] 0.00-2.00 sec 5.61 MBytes 23.5 Mbits/sec 0.019 ms 0/ 4001 (0%) 0.151/-0.007/ 0.233/ 0.018 ms 2000 pps
Here's a full line-rate (1 Gb/s Ethernet) run:
[rjmcmahon at rjm-fedora etc]$ iperf -c 192.168.100.33 -u -e -t 2 -b 200kpps -i 1 -w 2M
------------------------------------------------------------
Client connecting to 192.168.100.33, UDP port 5001 with pid 30626
Sending 1470 byte datagrams, IPG target: 5.00 us (kalman adjust)
UDP buffer size: 416 KByte (WARNING: requested 2.00 MByte)
------------------------------------------------------------
[ 3] local 192.168.100.67 port 57349 connected with 192.168.100.33 port 5001
[ ID] Interval Transfer Bandwidth PPS
[ 3] 0.00-1.00 sec 114 MBytes 958 Mbits/sec 81421 pps
[ 3] 1.00-2.00 sec 114 MBytes 958 Mbits/sec 81380 pps
[ 3] 0.00-2.00 sec 228 MBytes 957 Mbits/sec 81399 pps
[ 3] Sent 162874 datagrams
[ 3] Server Report:
[ 3] 0.00-2.00 sec 228 MBytes 957 Mbits/sec 0.084 ms 0/162874 (0%) 1.641/ 0.253/ 2.466/ 0.114 ms 81383 pps
[ 3] local 192.168.100.33 port 5001 connected with 192.168.100.67 port 57349
[ 3] 0.00-0.10 sec 11.4 MBytes 960 Mbits/sec 0.016 ms 0/ 8166 (0%) 1.615/ 0.253/ 2.309/ 0.355 ms 81606 pps
[ 3] 0.10-0.20 sec 11.4 MBytes 956 Mbits/sec 0.016 ms 0/ 8133 (0%) 1.657/ 0.936/ 2.325/ 0.348 ms 81350 pps
[ 3] 0.20-0.30 sec 11.4 MBytes 957 Mbits/sec 0.021 ms 0/ 8139 (0%) 1.657/ 0.950/ 2.400/ 0.348 ms 81383 pps
[ 3] 0.30-0.40 sec 11.4 MBytes 958 Mbits/sec 0.016 ms 0/ 8144 (0%) 1.652/ 0.953/ 2.357/ 0.342 ms 81380 pps
[ 3] 0.40-0.50 sec 11.4 MBytes 956 Mbits/sec 0.069 ms 0/ 8131 (0%) 1.644/ 0.947/ 2.368/ 0.341 ms 81384 pps
[ 3] 0.50-0.60 sec 11.4 MBytes 957 Mbits/sec 0.072 ms 0/ 8138 (0%) 1.649/ 0.949/ 2.381/ 0.337 ms 81372 pps
[ 3] 0.60-0.70 sec 11.4 MBytes 957 Mbits/sec 0.025 ms 0/ 8139 (0%) 1.639/ 0.952/ 2.357/ 0.342 ms 81383 pps
[ 3] 0.70-0.80 sec 11.4 MBytes 957 Mbits/sec 0.014 ms 0/ 8135 (0%) 1.643/ 0.944/ 2.368/ 0.343 ms 81390 pps
[ 3] 0.80-0.90 sec 11.4 MBytes 957 Mbits/sec 0.016 ms 0/ 8142 (0%) 1.639/ 0.946/ 2.361/ 0.341 ms 81366 pps
[ 3] 0.90-1.00 sec 11.4 MBytes 957 Mbits/sec 0.015 ms 0/ 8142 (0%) 1.635/ 0.932/ 2.378/ 0.342 ms 81387 pps
[ 3] 1.00-1.10 sec 11.4 MBytes 957 Mbits/sec 0.015 ms 0/ 8138 (0%) 1.633/ 0.934/ 2.359/ 0.341 ms 81373 pps
[ 3] 1.10-1.20 sec 11.4 MBytes 957 Mbits/sec 0.010 ms 0/ 8135 (0%) 1.636/ 0.947/ 2.361/ 0.342 ms 81444 pps
[ 3] 1.20-1.30 sec 11.4 MBytes 957 Mbits/sec 0.091 ms 0/ 8140 (0%) 1.624/ 0.908/ 2.363/ 0.354 ms 81400 pps
[ 3] 1.30-1.40 sec 11.4 MBytes 956 Mbits/sec 0.016 ms 0/ 8133 (0%) 1.616/ 0.917/ 2.325/ 0.345 ms 81296 pps
[ 3] 1.40-1.50 sec 11.4 MBytes 957 Mbits/sec 0.012 ms 0/ 8138 (0%) 1.626/ 0.918/ 2.361/ 0.346 ms 81414 pps
[ 3] 1.50-1.60 sec 11.4 MBytes 957 Mbits/sec 0.015 ms 0/ 8136 (0%) 1.626/ 0.934/ 2.352/ 0.339 ms 81339 pps
[ 3] 1.60-1.70 sec 11.4 MBytes 957 Mbits/sec 0.015 ms 0/ 8141 (0%) 1.633/ 0.930/ 2.351/ 0.341 ms 81376 pps
[ 3] 1.70-1.80 sec 11.4 MBytes 956 Mbits/sec 0.017 ms 0/ 8133 (0%) 1.627/ 0.929/ 2.354/ 0.339 ms 81377 pps
[ 3] 1.80-1.90 sec 11.4 MBytes 957 Mbits/sec 0.088 ms 0/ 8139 (0%) 1.659/ 0.895/ 2.396/ 0.343 ms 81330 pps
[ 3] 1.90-2.00 sec 11.4 MBytes 956 Mbits/sec 0.013 ms 0/ 8128 (0%) 1.709/ 0.996/ 2.466/ 0.342 ms 81348 pps
[ 3] 0.00-2.00 sec 228 MBytes 957 Mbits/sec 0.085 ms 0/162874 (0%) 1.641/ 0.253/ 2.466/ 0.344 ms 81383 pps
I'll be testing with 802.11ax chips soon but probably can't share those
numbers. Sorry about that.
Bob
On Fri, Oct 13, 2017 at 12:41 PM, Bob McMahon <bob.mcmahon at broadcom.com>
wrote:
> PS. Thanks for writing flent and making it available. I'm a novice
> w/flent but do plan to learn it.
>
> Bob
>
> On Fri, Oct 13, 2017 at 11:47 AM, Bob McMahon <bob.mcmahon at broadcom.com>
> wrote:
>
>> Hi Toke,
>>
>> The other thing that will cause the server thread(s) and listener thread
>> to stop is -t when applied to the *server*, i.e. iperf -s -u -t 10 will
>> set a 10 second lifetime for the server/listener thread(s). Some
>> people don't want the Listener to stop, so when -D (daemon) is applied the
>> -t will only terminate server traffic threads. Many people asked for
>> this because they wanted a way to time-bound these threads, specifically
>> over the life of many tests.
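>>
>> For example (illustrative invocations of the behavior described above):
>>
>>   iperf -s -u -t 10        # Listener and traffic threads stop after 10 seconds
>>   iperf -s -u -D -t 10     # daemonized: Listener stays up, -t only bounds traffic threads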
>>
>> Yeah, summing is a bit of a mess. I have some prototype code I've been
>> playing with, but I'm still not sure what will be released.
>>
>> For UDP, the source port must be unique per the quintuple (IP proto / src
>> IP / src port / dst IP / dst port). Since the UDP server is merely waiting
>> for packets, it doesn't have any knowledge about how to group them. So it
>> groups based upon time, i.e. when a new traffic stream shows up it's put
>> into an existing active group for summing.
>>
>> I'm not sure of a good way to fix this. I think the client would have to
>> modify the payload and, per a -P, tell the server which UDP src ports
>> belong in the same group. Then the server could assign groups based upon a
>> key in the payload.
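>>
>> A rough sketch of that server-side grouping idea (illustrative only, not
>> iperf code; the payload group id is hypothetical):
>>
>>   # Today there is no key in the payload, so a new flow (src ip, src port)
>>   # simply joins whatever group is currently active. If the client embedded
>>   # a group id, flows could be binned by that id regardless of arrival time.
>>   active_group = 0
>>   groups = {}  # group id -> set of (src_ip, src_port) tuples
>>
>>   def assign_flow(flow, payload_group_id=None):
>>       gid = payload_group_id if payload_group_id is not None else active_group
>>       groups.setdefault(gid, set()).add(flow)
>>       return gid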
>>
>> Thoughts and comments welcome,
>> Bob
>>
>> On Fri, Oct 13, 2017 at 2:28 AM, Toke Høiland-Jørgensen <toke at toke.dk>
>> wrote:
>>
>>> Bob McMahon <bob.mcmahon at broadcom.com> writes:
>>>
>>> > Thanks Toke. Let me look into this. Is there packet loss during your
>>> > tests? Can you share the output of the client and server per the error
>>> > scenario?
>>>
>>> Yeah, there's definitely packet loss.
>>>
>>> > With iperf 2 there is no TCP test exchange; rather, UDP test information
>>> > is derived from packets in flight. The server determines a UDP test is
>>> > finished by detecting a negative sequence number in the payload. In
>>> > theory, this should separate UDP tests. The server detects a new UDP
>>> > stream by receiving a packet from a new source socket. If the
>>> > packet carrying the negative sequence number is lost, then summing
>>> > across "tests" would be expected (even though not desired) per the
>>> > current design and implementation. We intentionally left this as is, as
>>> > we didn't want to change the startup behavior nor require that the network
>>> > support TCP connections in order to run a UDP test.
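>>> >
>>> > As a sketch (assuming the payload simply begins with a signed 32-bit
>>> > sequence number in network order; the exact iperf 2 layout may differ),
>>> > the end-of-test check looks roughly like:
>>> >
>>> >   import struct
>>> >
>>> >   def is_final_datagram(payload):
>>> >       (seq,) = struct.unpack("!i", payload[:4])  # signed, network byte order
>>> >       return seq < 0  # if this datagram is lost, the server never sees the end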
>>>
>>> Ah, so basically, if the last packet from the client is dropped, the
>>> server is not going to notice that the test ended and just keep
>>> counting? That would definitely explain the behaviour I'm seeing.
>>>
>>> So if another test starts from a different source port, the server is
>>> still going to count the same totals? That seems kinda odd :)
>>>
>>> > Since we know UDP is unreliable, we do control both client and server over
>>> > ssh pipes, and perform summing in flight per the interval reporting.
>>> > Operating system signals are used to kill the server. The iperf sum and
>>> > final reports are ignored. Unfortunately, I can't publish this package
>>> > with iperf 2 for both technical and licensing reasons. There is some
>>> > skeleton code in Python 3.5 with asyncio
>>> > <https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/flows.py> that
>>> > may be of use. A next step here is to add support for pandas
>>> > <http://pandas.pydata.org/index.html>, and possibly some control chart
>>> > <https://en.wikipedia.org/wiki/Control_chart> techniques (both single and
>>> > multivariate
>>> > <http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc34.htm>) for both
>>> > regressions and outlier detection.
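>>> >
>>> > As a rough idea of that harness shape (this is not the linked flows.py,
>>> > just an illustrative asyncio sketch): run the iperf server as a
>>> > subprocess, consume its per-interval lines as they arrive for summing in
>>> > the harness, and stop it with a signal so its own final report is ignored:
>>> >
>>> >   import asyncio
>>> >
>>> >   async def run_server(window=10.0):
>>> >       # Illustrative only: launch an iperf UDP server with interval reports.
>>> >       proc = await asyncio.create_subprocess_exec(
>>> >           "iperf", "-s", "-u", "-e", "-i", "0.1",
>>> >           stdout=asyncio.subprocess.PIPE)
>>> >
>>> >       async def consume():
>>> >           while True:
>>> >               line = await proc.stdout.readline()
>>> >               if not line:
>>> >                   break
>>> >               print(line.decode().rstrip())  # parse and sum intervals here
>>> >
>>> >       reader = asyncio.ensure_future(consume())
>>> >       await asyncio.sleep(window)            # test window
>>> >       proc.terminate()                       # signal the server; its final sum is discarded
>>> >       await proc.wait()
>>> >       await reader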
>>>
>>> No worries, I already have the setup scripts to handle restarting the
>>> server, and I parse the output with Flent. Just wanted to point out this
>>> behaviour as it was giving me some very odd results before I started
>>> systematically restarting the server...
>>>
>>> -Toke
>>>
>>
>>
>