<div dir="ltr">FYI, here's an example output using two brix PCs over their wired ethernet connected by a Cisco SG300 switch. Note: the ptpd stats (not shown) displays the clock corrections and suggests them to be within 10 microseconds.<br><div><br></div><div>[rjmcmahon@rjm-fedora etc]$ iperf -c 192.168.100.33 -u -e -t 2 -b 2kpps </div><div>------------------------------------------------------------</div><div>Client connecting to 192.168.100.33, UDP port 5001 with pid 29952</div><div>Sending 1470 byte datagrams, IPG target: 500.00 us (kalman adjust)</div><div>UDP buffer size: 208 KByte (default)</div><div>------------------------------------------------------------</div><div>[ 3] local 192.168.100.67 port 50062 connected with 192.168.100.33 port 5001</div><div>[ ID] Interval Transfer Bandwidth PPS</div><div>[ 3] 0.00-2.00 sec 5.61 MBytes 23.5 Mbits/sec 1999 pps</div><div>[ 3] Sent 4001 datagrams</div><div>[ 3] Server Report:</div><div>[ 3] 0.00-2.00 sec 5.61 MBytes 23.5 Mbits/sec 0.012 ms 0/ 4001 (0%) -/-/-/- ms 2000 pps</div><div><br></div><div>[rjmcmahon@hera ~]$ iperf -s -u -e -i 0.1</div><div>------------------------------------------------------------</div><div>Server listening on UDP port 5001 with pid 5178</div><div>Receiving 1470 byte datagrams</div><div>UDP buffer size: 208 KByte (default)</div><div>------------------------------------------------------------</div><div>[ 3] local 192.168.100.33 port 5001 connected with 192.168.100.67 port 57325</div><div>[ ID] Interval Transfer Bandwidth Jitter Lost/Total Latency avg/min/max/stdev PPS</div><div>[ 3] 0.00-0.10 sec 289 KBytes 23.6 Mbits/sec 0.013 ms 0/ 201 (0%) 0.142/ 0.018/ 0.192/ 0.025 ms 2005 pps</div><div>[ 3] 0.10-0.20 sec 287 KBytes 23.5 Mbits/sec 0.020 ms 0/ 200 (0%) 0.157/ 0.101/ 0.207/ 0.015 ms 1999 pps</div><div>[ 3] 0.20-0.30 sec 287 KBytes 23.5 Mbits/sec 0.014 ms 0/ 200 (0%) 0.155/ 0.071/ 0.212/ 0.018 ms 2002 pps</div><div>[ 3] 0.30-0.40 sec 287 KBytes 23.5 Mbits/sec 0.018 ms 0/ 200 (0%) 0.146/-0.007/ 0.187/ 0.018 ms 1999 pps</div><div>[ 3] 0.40-0.50 sec 287 KBytes 23.5 Mbits/sec 0.021 ms 0/ 200 (0%) 0.151/ 0.021/ 0.208/ 0.018 ms 2000 pps</div><div>[ 3] 0.50-0.60 sec 287 KBytes 23.5 Mbits/sec 0.016 ms 0/ 200 (0%) 0.148/ 0.043/ 0.192/ 0.018 ms 2000 pps</div><div>[ 3] 0.60-0.70 sec 287 KBytes 23.5 Mbits/sec 0.019 ms 0/ 200 (0%) 0.152/ 0.041/ 0.199/ 0.018 ms 2001 pps</div><div>[ 3] 0.70-0.80 sec 287 KBytes 23.5 Mbits/sec 0.016 ms 0/ 200 (0%) 0.144/ 0.071/ 0.206/ 0.017 ms 2001 pps</div><div>[ 3] 0.80-0.90 sec 287 KBytes 23.5 Mbits/sec 0.015 ms 0/ 200 (0%) 0.140/ 0.111/ 0.186/ 0.014 ms 1999 pps</div><div>[ 3] 0.90-1.00 sec 287 KBytes 23.5 Mbits/sec 0.022 ms 0/ 200 (0%) 0.154/ 0.111/ 0.222/ 0.019 ms 2000 pps</div><div>[ 3] 1.00-1.10 sec 287 KBytes 23.5 Mbits/sec 0.014 ms 0/ 200 (0%) 0.152/ 0.036/ 0.197/ 0.017 ms 2000 pps</div><div>[ 3] 1.10-1.20 sec 287 KBytes 23.5 Mbits/sec 0.015 ms 0/ 200 (0%) 0.153/-0.007/ 0.186/ 0.020 ms 2001 pps</div><div>[ 3] 1.20-1.30 sec 287 KBytes 23.5 Mbits/sec 0.013 ms 0/ 200 (0%) 0.149/ 0.035/ 0.207/ 0.018 ms 2000 pps</div><div>[ 3] 1.30-1.40 sec 287 KBytes 23.5 Mbits/sec 0.014 ms 0/ 200 (0%) 0.160/ 0.116/ 0.233/ 0.018 ms 2000 pps</div><div>[ 3] 1.40-1.50 sec 287 KBytes 23.5 Mbits/sec 0.014 ms 0/ 200 (0%) 0.159/ 0.122/ 0.207/ 0.015 ms 2000 pps</div><div>[ 3] 1.50-1.60 sec 287 KBytes 23.5 Mbits/sec 0.016 ms 0/ 200 (0%) 0.158/ 0.066/ 0.201/ 0.015 ms 1999 pps</div><div>[ 3] 1.60-1.70 sec 287 KBytes 23.5 Mbits/sec 0.017 ms 0/ 200 (0%) 0.162/ 0.076/ 0.203/ 0.016 ms 2000 pps</div><div>[ 3] 1.70-1.80 sec 
287 KBytes 23.5 Mbits/sec 0.014 ms 0/ 200 (0%) 0.154/ 0.073/ 0.195/ 0.016 ms 2002 pps</div><div>[ 3] 1.80-1.90 sec 287 KBytes 23.5 Mbits/sec 0.015 ms 0/ 200 (0%) 0.154/ 0.113/ 0.213/ 0.017 ms 1999 pps</div><div>[ 3] 1.90-2.00 sec 287 KBytes 23.5 Mbits/sec 0.019 ms 0/ 200 (0%) 0.152/ 0.124/ 0.208/ 0.016 ms 1999 pps</div><div>[ 3] 0.00-2.00 sec 5.61 MBytes 23.5 Mbits/sec 0.019 ms 0/ 4001 (0%) 0.151/-0.007/ 0.233/ 0.018 ms 2000 pps</div><div><br><br>Here's a full line rate (1Gbs ethernet) run<br><br><div>[rjmcmahon@rjm-fedora etc]$ iperf -c 192.168.100.33 -u -e -t 2 -b 200kpss -i 1 -w 2M</div><div>------------------------------------------------------------</div><div>Client connecting to 192.168.100.33, UDP port 5001 with pid 30626</div><div>Sending 1470 byte datagrams, IPG target: 5.00 us (kalman adjust)</div><div>UDP buffer size: 416 KByte (WARNING: requested 2.00 MByte)</div><div>------------------------------------------------------------</div><div>[ 3] local 192.168.100.67 port 57349 connected with 192.168.100.33 port 5001</div><div>[ ID] Interval Transfer Bandwidth PPS</div><div>[ 3] 0.00-1.00 sec 114 MBytes 958 Mbits/sec 81421 pps</div><div>[ 3] 1.00-2.00 sec 114 MBytes 958 Mbits/sec 81380 pps</div><div>[ 3] 0.00-2.00 sec 228 MBytes 957 Mbits/sec 81399 pps</div><div>[ 3] Sent 162874 datagrams</div><div>[ 3] Server Report:</div><div>[ 3] 0.00-2.00 sec 228 MBytes 957 Mbits/sec 0.084 ms 0/162874 (0%) 1.641/ 0.253/ 2.466/ 0.114 ms 81383 pps</div><div><br></div><br><div>[ 3] local 192.168.100.33 port 5001 connected with 192.168.100.67 port 57349</div><div>[ 3] 0.00-0.10 sec 11.4 MBytes 960 Mbits/sec 0.016 ms 0/ 8166 (0%) 1.615/ 0.253/ 2.309/ 0.355 ms 81606 pps</div><div>[ 3] 0.10-0.20 sec 11.4 MBytes 956 Mbits/sec 0.016 ms 0/ 8133 (0%) 1.657/ 0.936/ 2.325/ 0.348 ms 81350 pps</div><div>[ 3] 0.20-0.30 sec 11.4 MBytes 957 Mbits/sec 0.021 ms 0/ 8139 (0%) 1.657/ 0.950/ 2.400/ 0.348 ms 81383 pps</div><div>[ 3] 0.30-0.40 sec 11.4 MBytes 958 Mbits/sec 0.016 ms 0/ 8144 (0%) 1.652/ 0.953/ 2.357/ 0.342 ms 81380 pps</div><div>[ 3] 0.40-0.50 sec 11.4 MBytes 956 Mbits/sec 0.069 ms 0/ 8131 (0%) 1.644/ 0.947/ 2.368/ 0.341 ms 81384 pps</div><div>[ 3] 0.50-0.60 sec 11.4 MBytes 957 Mbits/sec 0.072 ms 0/ 8138 (0%) 1.649/ 0.949/ 2.381/ 0.337 ms 81372 pps</div><div>[ 3] 0.60-0.70 sec 11.4 MBytes 957 Mbits/sec 0.025 ms 0/ 8139 (0%) 1.639/ 0.952/ 2.357/ 0.342 ms 81383 pps</div><div>[ 3] 0.70-0.80 sec 11.4 MBytes 957 Mbits/sec 0.014 ms 0/ 8135 (0%) 1.643/ 0.944/ 2.368/ 0.343 ms 81390 pps</div><div>[ 3] 0.80-0.90 sec 11.4 MBytes 957 Mbits/sec 0.016 ms 0/ 8142 (0%) 1.639/ 0.946/ 2.361/ 0.341 ms 81366 pps</div><div>[ 3] 0.90-1.00 sec 11.4 MBytes 957 Mbits/sec 0.015 ms 0/ 8142 (0%) 1.635/ 0.932/ 2.378/ 0.342 ms 81387 pps</div><div>[ 3] 1.00-1.10 sec 11.4 MBytes 957 Mbits/sec 0.015 ms 0/ 8138 (0%) 1.633/ 0.934/ 2.359/ 0.341 ms 81373 pps</div><div>[ 3] 1.10-1.20 sec 11.4 MBytes 957 Mbits/sec 0.010 ms 0/ 8135 (0%) 1.636/ 0.947/ 2.361/ 0.342 ms 81444 pps</div><div>[ 3] 1.20-1.30 sec 11.4 MBytes 957 Mbits/sec 0.091 ms 0/ 8140 (0%) 1.624/ 0.908/ 2.363/ 0.354 ms 81400 pps</div><div>[ 3] 1.30-1.40 sec 11.4 MBytes 956 Mbits/sec 0.016 ms 0/ 8133 (0%) 1.616/ 0.917/ 2.325/ 0.345 ms 81296 pps</div><div>[ 3] 1.40-1.50 sec 11.4 MBytes 957 Mbits/sec 0.012 ms 0/ 8138 (0%) 1.626/ 0.918/ 2.361/ 0.346 ms 81414 pps</div><div>[ 3] 1.50-1.60 sec 11.4 MBytes 957 Mbits/sec 0.015 ms 0/ 8136 (0%) 1.626/ 0.934/ 2.352/ 0.339 ms 81339 pps</div><div>[ 3] 1.60-1.70 sec 11.4 MBytes 957 Mbits/sec 0.015 ms 0/ 8141 (0%) 1.633/ 0.930/ 2.351/ 0.341 ms 
81376 pps</div><div>[ 3] 1.70-1.80 sec 11.4 MBytes 956 Mbits/sec 0.017 ms 0/ 8133 (0%) 1.627/ 0.929/ 2.354/ 0.339 ms 81377 pps</div><div>[ 3] 1.80-1.90 sec 11.4 MBytes 957 Mbits/sec 0.088 ms 0/ 8139 (0%) 1.659/ 0.895/ 2.396/ 0.343 ms 81330 pps</div><div>[ 3] 1.90-2.00 sec 11.4 MBytes 956 Mbits/sec 0.013 ms 0/ 8128 (0%) 1.709/ 0.996/ 2.466/ 0.342 ms 81348 pps</div><div>[ 3] 0.00-2.00 sec 228 MBytes 957 Mbits/sec 0.085 ms 0/162874 (0%) 1.641/ 0.253/ 2.466/ 0.344 ms 81383 pps</div><div><br>I'll be testing with 802.11ax chips soon but probably can't share those numbers. Sorry about that. <br><br></div><div>Bob</div><div><br></div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Oct 13, 2017 at 12:41 PM, Bob McMahon <span dir="ltr"><<a href="mailto:bob.mcmahon@broadcom.com" target="_blank">bob.mcmahon@broadcom.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">PS. Thanks for writing flent and making it available. I'm a novice w/flent but do plan to learn it.<br><br>Bob</div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Oct 13, 2017 at 11:47 AM, Bob McMahon <span dir="ltr"><<a href="mailto:bob.mcmahon@broadcom.com" target="_blank">bob.mcmahon@broadcom.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Toke,<br><br>The other thing that will cause the server thread(s) and listener thread to stop is -t when applied to the *server*, i.e. iperf -s -u -t 10 will cause a 10 second timeout for the server/listener thread(s) life. Some people don't want the Listener to stop so when -D (daemon) is applied, the -t will only terminate server trafffic threads. Many people asked for this because they wanted a way to time bound these threads, specifically over the life of many tests.<br><br>Yeah, summing is a bit of a mess. I've some proto code I've been playing with but still not sure what is going to be released. <br><br>For UDP, the source port must be unique per the quintuple (ip proto/src ip/ src port/ dst ip/ dst port). Since the UDP server is merely waiting for packets it doesn't have an knowledge about how to group. So it groups based upon time, i.e. when a new traffic shows up it's put an existing active group for summing. <br><br>I'm not sure a good way to fix this. I think the client would have to modify the payload, and per a -P tell the server the udp src ports that belong in the same group. Then the server could assign groups based upon a key in the payload.<br><br>Thoughts and comments welcome,<br>Bob</div><div class="m_2865369337461666538HOEnZb"><div class="m_2865369337461666538h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Oct 13, 2017 at 2:28 AM, Toke Høiland-Jørgensen <span dir="ltr"><<a href="mailto:toke@toke.dk" target="_blank">toke@toke.dk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span>Bob McMahon <<a href="mailto:bob.mcmahon@broadcom.com" target="_blank">bob.mcmahon@broadcom.com</a>> writes:<br>

> Thanks Toke. Let me look into this. Is there packet loss during your
> tests? Can you share the output of the client and server per the error
> scenario?

Yeah, there's definitely packet loss.

> With iperf 2 there is no TCP test exchange; rather, UDP test information
> is derived from packets in flight. The server determines a UDP test is
> finished by detecting a negative sequence number in the payload. In
> theory, this should separate UDP tests. The server detects a new UDP
> stream by receiving a packet from a new source socket. If the
> packet carrying the negative sequence number is lost, then summing
> across "tests" would be expected (even though not desired) per the
> current design and implementation. We intentionally left this as is, as
> we didn't want to change the startup behavior nor require the network
> to support TCP connections in order to run a UDP test.
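
A minimal sketch of that end-of-test rule (a toy illustration, not iperf's actual code; treating the first payload field as a signed 32-bit sequence number in network byte order is an assumption here):

import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5001))

totals = {"packets": 0, "bytes": 0}          # running sum for the "current" test
while True:
    payload, peer = sock.recvfrom(65535)
    (seqno,) = struct.unpack_from("!i", payload)   # assumed payload layout
    if seqno < 0:
        # A negative sequence number marks the end of a test: report and reset.
        print("test from {} done: {}".format(peer, totals))
        totals = {"packets": 0, "bytes": 0}
        continue
    totals["packets"] += 1
    totals["bytes"] += len(payload)

If the datagram carrying the negative sequence number is dropped, the reset never happens, so packets from the next test (arriving from a new source port) just keep feeding the same totals.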

Ah, so basically, if the last packet from the client is dropped, the
server is not going to notice that the test ended and just keep
counting? That would definitely explain the behaviour I'm seeing.

So if another test starts from a different source port, the server is
still going to count the same totals? That seems kinda odd :)

> Since we know UDP is unreliable, we do control both client and server over
> ssh pipes, and perform summing in flight per the interval reporting.
> Operating system signals are used to kill the server. The iperf sum and
> final reports are ignored. Unfortunately, I can't publish this package
> with iperf 2 for both technical and licensing reasons. There is some skeleton
> code in Python 3.5 with asyncio
> <https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/flows.py> that
> may be of use. A next step here is to add support for pandas
> <http://pandas.pydata.org/index.html>, and possibly some control chart
> <https://en.wikipedia.org/wiki/Control_chart> techniques (both single and
> multivariate
> <http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc34.htm>) for both
> regressions and outlier detection.
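
A rough sketch of that ssh-pipe idea (not the flows.py code itself; the hostnames, iperf options, and teardown step here are placeholders):

import asyncio
import signal

async def stream(proc, tag):
    # Read one side's ssh pipe and pick off the per-interval report lines
    # as they arrive; this is where the in-flight summing would happen.
    while True:
        raw = await proc.stdout.readline()
        if not raw:
            return
        line = raw.decode().rstrip()
        if "Mbits/sec" in line:
            print("[{}] {}".format(tag, line))

async def main():
    server = await asyncio.create_subprocess_exec(
        "ssh", "serverhost", "iperf -s -u -e -i 0.1",
        stdout=asyncio.subprocess.PIPE)
    server_log = asyncio.ensure_future(stream(server, "server"))
    await asyncio.sleep(1)                       # let the server come up

    client = await asyncio.create_subprocess_exec(
        "ssh", "clienthost", "iperf -c 192.168.100.33 -u -e -t 2 -b 2kpps",
        stdout=asyncio.subprocess.PIPE)
    await stream(client, "client")
    await client.wait()

    # Stand-in for the signal-based teardown; a real script would signal the
    # remote iperf process itself rather than the local ssh.
    server.send_signal(signal.SIGINT)
    await server.wait()
    await server_log

asyncio.get_event_loop().run_until_complete(main())

The interval lines collected this way are what would then go into pandas for the control-chart style regression and outlier checks mentioned above.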

No worries, I already have the setup scripts to handle restarting the
server, and I parse the output with Flent. Just wanted to point out this
behaviour as it was giving me some very odd results before I started
systematically restarting the server...
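
A sketch of the kind of restart step meant here (the remote host and the pkill pattern are assumptions, not the actual setup scripts):

import subprocess
import time

def fresh_server(host="192.168.100.33"):
    # Kill any leftover iperf server on the remote host and start a new one,
    # so a lost end-of-test datagram can never leak totals across runs.
    subprocess.run(["ssh", host, "pkill -f 'iperf -s'"], check=False)
    time.sleep(1)
    return subprocess.Popen(["ssh", host, "iperf -s -u -e -i 0.1"])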
<span class="m_2865369337461666538m_-9130809184920769058HOEnZb"><font color="#888888"><br>
-Toke<br>