[Rpm] [ippm] Preliminary measurement comparison of "Working Latency" metrics
rjmcmahon
rjmcmahon at rjmcmahon.com
Mon Oct 31 18:44:11 EDT 2022
One can download the iperf 2.1.8 tarball or use git to get the latest
code and compile from master. SourceForge supports both.
https://sourceforge.net/projects/iperf2/
A simple test is to use --bounceback and then, to add load, also pass
--bounceback-congest (see below), or maybe use one-way congestion
instead of the default full-duplex.
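
Roughly, the invocations look like the following (the server address is
a placeholder, and the remote end is assumed to be running a recent
iperf 2 as a plain server):

  iperf -s                                              # on the remote / device under test
  iperf -c <server> --bounceback                        # unloaded working latency
  iperf -c <server> --bounceback --bounceback-congest   # same test with self-generated load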
Build instructions are in the INSTALL file. It should work on Linux,
Windows, BSD, etc.
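
Building from the tarball is roughly the usual autotools flow (the file
name and install step here are just examples; INSTALL is authoritative):

  tar xzf iperf-2.1.8.tar.gz
  cd iperf-2.1.8
  ./configure
  make
  sudo make install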
I think with WiFi the latency distributions may be more than bimodal,
and one-second decision intervals may be too short. That doesn't mean
the traffic has to be active and at capacity for the full decision
interval; rather, the forwarding/PHY state machines and other logic may
need more than one second to find their local optima. Also, maybe some
sort of online mean and minimum shift detection could help? Or one can
use the inP metric too, which is calculated per Little's law, though
the one-way delay (OWD) it depends on requires clock synchronization,
which is indicated to iperf via the --trip-times option.
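
As a quick illustration of the Little's law arithmetic (the numbers
here are made up for the example, not taken from the runs below): at an
average throughput of 480 Mbit/s with a 10 ms one-way delay,
inP = rate * delay = 480e6 b/s * 0.010 s = 4.8 Mbit, or roughly 600
KBytes in flight/queued along the path.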
I added your delay docs into the iperf2 docs directory. Thanks for
mentioning them. They're helpful.
https://sourceforge.net/p/iperf2/code/ci/master/tree/doc/delay_docs/
Bob
[root at ctrl1fc35 ~]# iperf -c 192.168.1.231 --bounceback
------------------------------------------------------------
Client connecting to 192.168.1.231, TCP port 5001 with pid 717413 (1 flows)
Write buffer size: 100 Byte
Bursting: 100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.1.15%enp2s0 port 43538 connected with 192.168.1.231 port 5001 (bb w/quickack len/hold=100/0) (sock=3) (icwnd/mss/irtt=14/1448/4442) (ct=4.55 ms) on 2022-10-31 15:25:56 (PDT)
[ ID] Interval  Transfer  Bandwidth  BB cnt=avg/min/max/stdev  Rtry  Cwnd/RTT  RPS
[ 1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec  10=3.802/2.008/5.765/1.243 ms  0  14K/3958 us  263 rps
[ 1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec  10=3.836/2.179/5.112/0.884 ms  0  14K/3727 us  261 rps
[ 1] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec  10=3.941/3.576/4.221/0.159 ms  0  14K/3800 us  254 rps
[ 1] 3.00-4.00 sec  1.95 KBytes  16.0 Kbits/sec  10=4.042/3.790/4.847/0.295 ms  0  14K/3853 us  247 rps
[ 1] 4.00-5.00 sec  1.95 KBytes  16.0 Kbits/sec  10=3.930/3.699/4.162/0.123 ms  0  14K/3834 us  254 rps
[ 1] 5.00-6.00 sec  1.95 KBytes  16.0 Kbits/sec  10=4.037/3.752/4.612/0.238 ms  0  14K/3858 us  248 rps
[ 1] 6.00-7.00 sec  1.95 KBytes  16.0 Kbits/sec  10=3.941/3.827/4.045/0.078 ms  0  14K/3833 us  254 rps
[ 1] 7.00-8.00 sec  1.95 KBytes  16.0 Kbits/sec  10=4.042/3.805/4.778/0.277 ms  0  14K/3869 us  247 rps
[ 1] 8.00-9.00 sec  1.95 KBytes  16.0 Kbits/sec  10=3.935/3.749/4.099/0.127 ms  0  14K/3822 us  254 rps
[ 1] 9.00-10.00 sec  1.95 KBytes  16.0 Kbits/sec  10=4.041/3.783/4.885/0.320 ms  0  14K/3863 us  247 rps
[ 1] 0.00-10.02 sec  19.5 KBytes  16.0 Kbits/sec  100=3.955/2.008/5.765/0.503 ms  0  14K/3872 us  253 rps
[ 1] 0.00-10.02 sec BB8(f)-PDF: bin(w=100us):cnt(100)=21:1,22:1,24:1,28:1,30:1,32:1,36:2,37:2,38:6,39:20,40:24,41:25,42:4,43:1,44:1,47:1,48:1,49:3,51:1,52:2,58:1 (5.00/95.00/99.7%=30/49/58,Outliers=0,obl/obu=0/0)
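
To read the BB8(f)-PDF line above (as I interpret the output): the bins
are 100 usec wide, so 40:24 means 24 of the 100 bounceback samples fell
in the bin around 4.0 ms, and the 5.00/95.00/99.7% values of 30/49/58
correspond to roughly 3.0/4.9/5.8 ms, consistent with the 2.008/5.765
ms min/max on the summary line.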
[root at ctrl1fc35 ~]# iperf -c 192.168.1.231 --bounceback --bounceback-congest
------------------------------------------------------------
Client connecting to 192.168.1.231, TCP port 5001 with pid 717416 (1 flows)
Write buffer size: 100 Byte
Bursting: 100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & tcp_quickack)
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 2] local 192.168.1.15%enp2s0 port 43540 connected with 192.168.1.231 port 5001 (full-duplex) (sock=4) (qack) (icwnd/mss/irtt=14/1448/4000) (ct=4.10 ms) on 2022-10-31 15:26:15 (PDT)
[ 1] local 192.168.1.15%enp2s0 port 43542 connected with 192.168.1.231 port 5001 (bb w/quickack len/hold=100/0) (sock=3) (icwnd/mss/irtt=14/1448/4003) (ct=4.10 ms) on 2022-10-31 15:26:15 (PDT)
[ ID] Interval  Transfer  Bandwidth  BB cnt=avg/min/max/stdev  Rtry  Cwnd/RTT  RPS
[ 1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec  10=13.770/1.855/50.097/15.368 ms  0  14K/14387 us  73 rps
[ ID] Interval  Transfer  Bandwidth  Write/Err  Rtry  Cwnd/RTT(var)  NetPwr
[ 2] 0.00-1.00 sec  45.2 MBytes  379 Mbits/sec  474213/0  0  7611K/23840(768) us  1989
[ ID] Interval  Transfer  Bandwidth  Reads=Dist
[ *2] 0.00-1.00 sec  29.8 MBytes  250 Mbits/sec  312674=78:10:29:14:57:17:87:14
[ ID] Interval  Transfer  Bandwidth
[FD2] 0.00-1.00 sec  75.0 MBytes  629 Mbits/sec
[ 1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec  10=25.213/15.954/48.097/9.889 ms  0  14K/23702 us  40 rps
[ *2] 1.00-2.00 sec  45.6 MBytes  383 Mbits/sec  478326=4:19:3:10:16:12:12:13
[ 2] 1.00-2.00 sec  56.4 MBytes  473 Mbits/sec  591760/0  0  7611K/14112(1264) us  4193
[FD2] 1.00-2.00 sec  102 MBytes  856 Mbits/sec
[ 1] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec  10=25.220/11.400/51.321/14.320 ms  0  14K/25023 us  40 rps
[ *2] 2.00-3.00 sec  39.1 MBytes  328 Mbits/sec  410533=18:32:13:15:25:27:7:23
[ 2] 2.00-3.00 sec  57.2 MBytes  480 Mbits/sec  599942/0  0  7611K/14356(920) us  4179
[FD2] 2.00-3.00 sec  96.4 MBytes  808 Mbits/sec
[ 1] 3.00-4.00 sec  1.95 KBytes  16.0 Kbits/sec  10=16.023/5.588/50.304/12.940 ms  0  14K/16648 us  62 rps
[ *2] 3.00-4.00 sec  35.9 MBytes  301 Mbits/sec  376791=17:34:11:15:33:37:22:26
[ 2] 3.00-4.00 sec  56.6 MBytes  475 Mbits/sec  593663/0  0  7611K/45828(2395) us  1295
[FD2] 3.00-4.00 sec  92.5 MBytes  776 Mbits/sec
[ 1] 4.00-5.00 sec  1.95 KBytes  16.0 Kbits/sec  10=12.445/7.353/19.663/3.601 ms  0  14K/13359 us  80 rps
[ *2] 4.00-5.00 sec  42.4 MBytes  356 Mbits/sec  444818=29:34:8:15:21:26:15:20
[ 2] 4.00-5.00 sec  54.9 MBytes  461 Mbits/sec  575659/0  0  7611K/13205(2359) us  4359
[FD2] 4.00-5.00 sec  97.3 MBytes  816 Mbits/sec
[ 1] 5.00-6.00 sec  1.95 KBytes  16.0 Kbits/sec  10=12.092/6.493/16.031/3.279 ms  0  14K/12345 us  83 rps
[ *2] 5.00-6.00 sec  37.9 MBytes  318 Mbits/sec  397808=10:22:20:28:24:20:11:10
[ 2] 5.00-6.00 sec  56.9 MBytes  477 Mbits/sec  596477/0  0  7611K/15607(167) us  3822
[FD2] 5.00-6.00 sec  94.8 MBytes  795 Mbits/sec
[ 1] 6.00-7.00 sec  1.95 KBytes  16.0 Kbits/sec  10=13.916/9.555/19.109/2.876 ms  0  14K/13690 us  72 rps
[ *2] 6.00-7.00 sec  46.0 MBytes  386 Mbits/sec  482572=25:46:13:18:34:35:27:32
[ 2] 6.00-7.00 sec  57.2 MBytes  480 Mbits/sec  600055/0  0  7611K/14175(669) us  4233
[FD2] 6.00-7.00 sec  103 MBytes  866 Mbits/sec
[ 1] 7.00-8.00 sec  1.95 KBytes  16.0 Kbits/sec  10=13.125/4.997/19.527/4.239 ms  0  14K/13695 us  76 rps
[ *2] 7.00-8.00 sec  40.1 MBytes  336 Mbits/sec  420304=21:30:11:6:5:17:14:18
[ 2] 7.00-8.00 sec  56.0 MBytes  470 Mbits/sec  587082/0  0  7611K/24914(3187) us  2356
[FD2] 7.00-8.00 sec  96.1 MBytes  806 Mbits/sec
[ 1] 8.00-9.00 sec  1.95 KBytes  16.0 Kbits/sec  10=8.221/2.939/17.274/4.419 ms  0  14K/9861 us  122 rps
[ *2] 8.00-9.00 sec  33.6 MBytes  282 Mbits/sec  352847=17:35:16:19:15:27:17:17
[ 2] 8.00-9.00 sec  56.2 MBytes  471 Mbits/sec  589281/0  0  7611K/13206(403) us  4462
[FD2] 8.00-9.00 sec  89.8 MBytes  754 Mbits/sec
[ 1] 9.00-10.00 sec  1.95 KBytes  16.0 Kbits/sec  10=10.896/6.822/14.941/2.788 ms  0  14K/10419 us  92 rps
[ *2] 9.00-10.00 sec  36.4 MBytes  306 Mbits/sec  382097=18:48:22:25:29:32:28:24
[ 2] 9.00-10.00 sec  57.2 MBytes  480 Mbits/sec  599784/0  0  7611K/15965(850) us  3757
[FD2] 9.00-10.00 sec  93.6 MBytes  785 Mbits/sec
[ 2] 0.00-10.00 sec  554 MBytes  465 Mbits/sec  5807917/0  0  7611K/15965(850) us  3637
[ *2] 0.00-10.07 sec  391 MBytes  326 Mbits/sec  4099553=239:319:147:168:261:251:241:200
[FD2] 0.00-10.07 sec  945 MBytes  787 Mbits/sec
[ 1] 0.00-10.07 sec  21.5 KBytes  17.5 Kbits/sec  110=14.330/1.855/51.321/9.904 ms  0  14K/6896 us  70 rps
[ 1] 0.00-10.07 sec BB8(f)-PDF: bin(w=100us):cnt(110)=19:1,30:1,33:2,34:1,40:1,45:2,47:1,50:1,51:1,55:1,56:1,58:1,64:1,65:1,66:1,67:1,69:1,70:2,71:1,74:1,77:1,79:2,83:2,84:1,90:1,93:2,96:3,97:2,99:1,100:1,103:2,105:1,111:1,112:1,113:2,114:2,115:1,116:1,118:2,119:1,122:2,123:1,124:1,127:1,129:2,130:3,133:1,143:1,144:1,145:1,147:3,149:4,150:2,156:1,157:2,158:1,160:1,161:1,162:1,163:1,164:1,167:1,173:1,181:1,182:2,186:1,192:1,196:1,197:1,199:1,218:1,221:1,235:1,242:1,246:1,278:1,281:1,282:1,286:1,342:1,481:1,496:1,501:1,504:1,514:1 (5.00/95.00/99.7%=40/342/514,Outliers=0,obl/obu=0/0)
[ CT] final connect times (min/avg/max/stdev) = 4.104/4.104/4.104/0.000 ms (tot/err) = 2/0

>> -----Original Message-----
>> From: rjmcmahon <rjmcmahon at rjmcmahon.com>
>> Sent: Monday, October 31, 2022 2:53 PM
>> To: Dave Taht <dave.taht at gmail.com>
>> Cc: MORTON JR., AL <acmorton at att.com>; Rpm
>> <rpm at lists.bufferbloat.net>;
>> ippm at ietf.org
>> Subject: Re: [Rpm] [ippm] Preliminary measurement comparison of
>> "Working
>> Latency" metrics
>>
>> Would it be possible to get some iperf 2 bounceback test results too?
>>
>> https://urldefense.com/v3/__https://sourceforge.net/projects/iperf2/__;!!BhdT!
>> gmYpYN3pBO-aWMfjDRdVRFQ20aHQ5nDHOhEVY1y-
>> MkFFyH8YmM4wf8cEtaxzvcwTMaCaJOCNRBtj0tnz9A$
>>
>> Also, for the hunt algo, maybe use TCP first to get a starting point
>> and
>> then hunt? Just a thought.
>>
>> Thanks,
>> Bob
>> >
> <snip>
>
> Thanks for your suggestion, Bob, and it's nice to meet you!
> I was only familiar with the "old" iperf2 at:
> https://iperf.fr/iperf-doc.php
> before your message and URL arrived (yet another segment of the
> elephant!).
>
> I didn't quickly find whether your code will run on linux (ubuntu for
> me). I suppose I use the .tar.gz and compile locally. Let me know,
> I've got some other tools to check-out first.
>
> You suggest the bounceback test option (I found it on the man page),
> but there are lots of options and six versions of bounceback! Based
> on the results I already reported, can you suggest a specific version
> of bounceback and a set of options that would be a good first try?
> (see the man page at https://iperf2.sourceforge.io/iperf-manpage.html )
>
> Regarding our hunt algorithms (search for max), the new Type C algo
> (in release 7.5.0) locates the Max very quickly. I showed a comparison
> of our Type B and Type C search algorithms on a slide at the IPPM
> meeting in July. See Slide 10:
> https://datatracker.ietf.org/meeting/114/materials/slides-114-ippm-sec-dir-review-discussion-test-protocol-for-one-way-ip-capacity-measurement-00
>
> The measured capacity reaches 1 Gbps in about 1 second with the Type C
> algorithm, without choosing or testing for a starting point (our
> default starting point rate is ~500kbps, very low to accommodate "any"
> subscribed rate).
>
> regards, and thanks again,
> Al