[Bloat] iperf 2 bounceback - independent request/reply sizes

rjmcmahon rjmcmahon at rjmcmahon.com
Fri May 12 12:00:44 EDT 2023


For completeness, here are the bounceback cli options.

--bounceback[=n]
run a TCP bounceback or RPS test, with an optional number of writes per 
burst given by n. The default is ten writes every period, and the 
default period is one second (note: set the write size with 
--bounceback-request). See NOTES on unsynchronized clock detection.
--bounceback-hold n
request the server to insert a delay of n milliseconds between its read 
and write (default is no delay)
--bounceback-no-quickack
request that the server not set the TCP_QUICKACK socket option (which 
disables TCP ACK delays) during a bounceback test (see NOTES)
--bounceback-period[=n]
request that the client schedule its send(s) every n seconds (the 
default is one second; use a value of zero for immediate, continuous 
back-to-back sends)
--bounceback-request n
set the bounceback request size in units of bytes. The default is 100 
bytes.
--bounceback-reply n
set the bounceback reply size in units of bytes. This supports 
asymmetric message sizes between the request and the reply. The default 
is zero, which uses the value of --bounceback-request.
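
For example, here is a minimal client invocation combining asymmetric 
sizes with a server-side hold. The address and values are illustrative, 
and a matching iperf server (iperf -s) is assumed to be listening:

  iperf -c 192.168.1.231 --bounceback=5 --bounceback-request 200 \
        --bounceback-reply 4096 --bounceback-hold 10 --bounceback-period 2

This bursts five 200-byte requests every two seconds, and the server 
waits 10 ms between reading each request and writing back its 4096-byte 
reply. As a sanity check on the quoted run below: ten 512 KByte replies 
per second is 5.00 MBytes/sec, i.e. the reported 42.0 Mbits/sec.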

Note: Coming up with a weighted graph (delays & sizes) for a working 
load is on the todo list. Thoughts about this are appreciated; defining 
such a graph on the cli seems to be a requirement. A strawman sketch 
follows below.
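
As a purely hypothetical strawman (no such option exists today), the 
graph might be given as a weighted list of hold/size entries, e.g.:

  --bounceback-graph "0:100/512K*0.8,10:200/1M*0.2"

where each entry is hold-ms:request/reply*weight and the client samples 
entries according to the weights. This is only meant to seed discussion 
about what a cli graph definition could look like.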

Bob

> Hi All,
> 
> I received a recent diff for iperf 2 to support independent request
> and reply sizes for the bounceback test. It's nice to get diffs that
> can be patched in!
> 
> [root at ctrl1fc35 ~]# iperf -c 192.168.1.231 --bounceback 
> --bounceback-reply 512K
> ------------------------------------------------------------
> Client connecting to 192.168.1.231, TCP port 5001 with pid 305401 (1 
> flows)
> Bounceback test (req/reply size = 100 Byte/ 512 KByte) (server hold
> req=0 usecs & tcp_quickack)
> Bursting request 10 times every 1.00 second(s)
> TCP congestion control using reno
> TOS set to 0x0 and nodelay (Nagle off)
> TCP window size: 85.0 KByte (default)
> ------------------------------------------------------------
> [  1] local 192.168.1.15%enp2s0 port 42800 connected with
> 192.168.1.231 port 5001 (bb w/quickack len/hold=100/0) (sock=3)
> (icwnd/mss/irtt=14/1448/3302) (ct=3.36 ms) on 2023-05-12 08:36:57.163
> (PDT)
> [ ID] Interval        Transfer    Bandwidth         BB
> cnt=avg/min/max/stdev         Rtry  Cwnd/RTT    RPS(avg)
> [  1] 0.00-1.00 sec  5.00 MBytes  42.0 Mbits/sec
> 10=10.924/7.497/27.463/5.971 ms    0   14K/3992 us    92 rps
> [  1] 1.00-2.00 sec  5.00 MBytes  42.0 Mbits/sec
> 10=10.068/7.274/21.120/3.963 ms    0   14K/4307 us    99 rps
> [  1] 2.00-3.00 sec  5.00 MBytes  42.0 Mbits/sec
> 10=9.674/8.148/17.413/2.798 ms    0   14K/4243 us    103 rps
> [  1] 3.00-4.00 sec  5.00 MBytes  42.0 Mbits/sec
> 10=9.858/7.587/20.889/3.961 ms    0   14K/4474 us    101 rps
> [  1] 4.00-5.00 sec  5.00 MBytes  42.0 Mbits/sec
> 10=9.872/7.558/17.720/2.842 ms    0   14K/4692 us    101 rps
> [  1] 5.00-6.00 sec  5.00 MBytes  42.0 Mbits/sec
> 10=9.649/6.844/18.537/3.205 ms    0   14K/4301 us    104 rps
> [  1] 6.00-7.00 sec  5.00 MBytes  42.0 Mbits/sec
> 10=9.502/7.083/19.839/3.697 ms    0   14K/4153 us    105 rps
> [  1] 7.00-8.00 sec  5.00 MBytes  42.0 Mbits/sec
> 10=9.965/7.747/22.194/4.350 ms    0   14K/4357 us    100 rps
> [  1] 8.00-9.00 sec  5.00 MBytes  42.0 Mbits/sec
> 10=10.072/7.936/20.307/3.730 ms    0   14K/4442 us    99 rps
> [  1] 9.00-10.00 sec  5.00 MBytes  42.0 Mbits/sec
> 10=10.031/8.109/19.907/3.551 ms    0   14K/4086 us    100 rps
> [  1] 0.00-10.02 sec  50.0 MBytes  41.9 Mbits/sec
> 100=9.962/6.844/27.463/3.740 ms    0   14K/4152 us    100 rps
> [  1] 0.00-10.02 sec  BB8(f)-PDF:
> bin(w=100us):cnt(100)=69:1,71:1,73:1,75:1,76:3,77:1,78:2,79:3,80:3,81:1,82:6,83:7,84:1,85:3,86:4,87:4,88:4,89:5,90:7,91:3,92:4,93:2,95:8,96:3,97:1,98:1,99:1,101:3,102:1,103:1,104:1,106:2,123:1,175:1,178:1,186:1,199:1,200:1,204:1,209:1,212:1,222:1,275:1
> (5.00/95.00/99.7%=76/204/275,Outliers=1,obl/obu=0/0)
> 
> Bob

