On the tools front, iperf 2.0.14 is going through a lot of development. My hope is to have the code done soon so it can be tested internally at Broadcom. We're testing across a range from WiFi to 100G NICs, with thousands of parallel threads. The COVID-19 stay-at-home period has given me time for this refactoring.
What I think the industry should move to is measuring both throughput and latency in a direct manner. 2.0.14 also supports full-duplex traffic (as well as --reverse).
NOTES
Numeric options: Some numeric options accept format characters per '<value>c' (e.g. 10M), where the format character c is one of k, m, g, K, M, G. Lowercase format characters are 10^3 based and uppercase are 2^10 based, e.g. 1k = 1,000, 1K = 1,024, 1m = 1,000,000 and 1M = 1,048,576.
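For example, the following two offered UDP loads differ by roughly 4.9% (the host address is a placeholder):

   iperf -c 192.168.1.10 -u -b 10m   # offered load of 10,000,000 bits/sec
   iperf -c 192.168.1.10 -u -b 10M   # offered load of 10,485,760 bits/sec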
Rate limiting: The -b option supports read and write rate limiting at the application level. On the client, -b also supports a variable offered load through the <mean>,<standard deviation> format, e.g. -b 100m,10m; the distribution used is log-normal. The same applies to the isochronous option. On the server, -b rate limits the reads. Socket-based pacing is also supported using the --fq-rate long option, and works with the --reverse and --bidir options as well.
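A couple of illustrative invocations (the address is a placeholder):

   iperf -c 192.168.1.10 -b 100m,10m   # log-normal offered load: mean 100 Mbit/s, stddev 10 Mbit/s
   iperf -c 192.168.1.10 --fq-rate 100m   # socket-based pacing at 100 Mbit/s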
Synchronized clocks: The --trip-times option indicates that the client's and server's clocks are synchronized to a common reference; Network Time Protocol (NTP) or Precision Time Protocol (PTP) are commonly used for this. Errors in the reference clock(s) and in the synchronization protocol will limit the accuracy of any end-to-end latency measurements.
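For example (only meaningful when both hosts are synchronized, e.g. via PTP; the address is a placeholder):

   iperf -c 192.168.1.10 -e --trip-times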
Binding: Binding is done at the logical (IP address, or layer 3) level using the -B option, and at the device (layer 2) level using the percent (%) separator, on both the client and the server. On the client, the -B option affects the bind(2) system call and sets the source IP address and source port, e.g. iperf -c <host> -B 192.168.100.2:6002. This controls the packet's source values but not its routing, which can be confusing: the route or device lookup may not select the device that carries the configured source IP. For example, if the IP address of eth0 is given to -B but the routing table resolves the output interface for the destination IP to eth1, the host will send the packet out device eth1 while using eth0's source IP address in the packet. To control the physical output interface (e.g. on dual-homed systems), either use -c <host>%<dev> (requires root), which bypasses the host route table lookup, or configure policy routing per -B source address with the output interface set appropriately in the policy routes. On the server (receive) side, only packets destined to the -B IP address will be received. This is also useful for multicast: iperf -s -B 224.0.0.1%eth0 will only accept IP multicast packets with destination IP 224.0.0.1 that are received on the eth0 interface, while iperf -s -B 224.0.0.1 will receive those packets on any interface. Finally, the device specifier is required for v6 link-local addresses, e.g. -c [v6addr]%<dev> -V, to select the output interface.
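A summary of the bind forms (addresses and devices are placeholders):

   iperf -c 192.168.100.1 -B 192.168.100.2:6002   # client: set source IP and port; routing unchanged
   iperf -c 192.168.100.1%eth1   # client: force output device eth1 (requires root)
   iperf -s -B 224.0.0.1%eth0    # server: accept the multicast group only on eth0
   iperf -c [fe80::1]%eth0 -V    # client: v6 link-local requires the device specifier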
Reverse and bidirectional traffic: The --reverse (-R on non-Windows systems) and --bidir options can be confusing when compared to the legacy -r and -d options. Use --reverse if you want to test through a NAT firewall: it applies role reversal of the test after opening the full-duplex socket. The -d and -r options remain supported for legacy and compatibility reasons; they open new sockets in the opposite direction rather than treating the originating socket as full duplex, so firewall piercing is typically required to use them when a NAT gateway is in the path. That's part of the reason it's highly encouraged to use the newer --reverse and --bidir options and to retire the legacy -r and -d.
Also, the --reverse -b <rate> setting behaves differently for TCP and UDP. For TCP, it rate limits the read side, i.e. the iperf client (role-reversed to act as a server) reading from the full-duplex socket; this in turn flow controls the reverse traffic per standard TCP congestion control. For UDP, --reverse -b <rate> is applied on transmit (i.e. by the server, role-reversed to act as a client), since UDP has no flow control. There is no option to directly rate limit the writes in a TCP test when using --reverse.
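Illustrative client invocations (the address is a placeholder):

   iperf -c 192.168.1.10 --reverse -b 40m      # TCP: limits the client's read rate; TCP flow control paces the server
   iperf -c 192.168.1.10 -u --reverse -b 40m   # UDP: sets the server's transmit rate (no UDP flow control)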
TCP connect times: The TCP connect time (or three-way handshake time) can be seen on the iperf client when the -e (--enhancedreports) option is set. Look for ct=<value> in the connected message, e.g. '[ 3] local 192.168.1.4 port 48736 connected with 192.168.1.1 port 5001 (ct=1.84 ms)' shows the 3WHS took 1.84 milliseconds.
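For example (the address is a placeholder):

   iperf -c 192.168.1.1 -e   # the connected message will include ct=<value>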
Little's Law: In queueing theory, Little's Law is a theorem stating that the average number of items (L) in a stationary queueing system equals the average arrival rate of items (lambda) multiplied by the average waiting time of an item within the system (W). Mathematically, it's L = lambda * W. As used here, the units are bytes, and the arrival rate is taken from the writes.
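A worked example: with an average write rate of 1 Gbit/s (lambda = 125,000,000 bytes/sec) and an average latency of W = 10 ms, Little's Law gives L = 125,000,000 * 0.010 = 1,250,000 bytes in flight.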
Network power: The network power (NetPwr) metric is experimental. It's a convenience function defined as throughput/delay. For TCP transmits, the delay is the sampled RTT. For TCP receives, the delay is the write-to-read latency. For UDP, the delay is the end-to-end latency. Don't confuse this with the physics definition of power (delta energy/delta time); it's better thought of as a measure of a desirable property divided by an undesirable one. Also note that with TCP one must use -i <interval> to get this metric, since the interval sets the RTT sampling rate. The metric is scaled to assist with human readability.
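For example (the address is a placeholder):

   iperf -c 192.168.1.10 -e -i 1   # -e enables enhanced reports; -i sets the RTT sampling rate NetPwr needs for TCP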
Fast sampling: Use ./configure --enable-fastsampling and then compile from source to enable four-decimal (e.g. 1.0000) precision in report timestamps. This is useful for sub-millisecond sampling.
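Build steps, from the source tree:

   ./configure --enable-fastsampling
   make
   # interval report timestamps now carry four decimal places, e.g. 1.0000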