[Bloat] Progress with latency-under-load tool
Rick Jones
rick.jones2 at hp.com
Wed Mar 16 13:12:25 EDT 2011
On Wed, 2011-03-16 at 04:31 +0200, Jonathan Morton wrote:
> I'm finally getting a handle on sockets programming again - the API has
> actually *changed* since I last used it - so the testing tool I was
> talking about is starting to actually do semi-useful things.
>
> For example, just over the localhost "connection", maxRTT is already
> almost 2ms with no load. That's CPU scheduling latency. The tool
> prints this out as "Link Responsiveness: 556", since it displays Hz in
> order to give marketing-friendly "bigger is better" numbers.
>
> Now to write a couple of lovely functions called "spew" and "chug".
> I'll let you guess what they do...
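(As an aside, that "Link Responsiveness" figure looks like nothing more than the reciprocal of the worst-case RTT - 1/1.8 ms is roughly 556 Hz. A minimal sketch of that conversion, with hypothetical names rather than anything from Jonathan's actual tool:

/* Sketch only: a "bigger is better" responsiveness figure is just
 * the reciprocal of the worst-case RTT.  Names are hypothetical. */
#include <stdio.h>

int main(void)
{
        double max_rtt = 0.0018;                /* ~1.8 ms worst-case RTT */
        double responsiveness_hz = 1.0 / max_rtt;
        printf("Link Responsiveness: %.0f\n", responsiveness_hz);  /* -> 556 */
        return 0;
}
)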
FWIW, here is the output of two netperfs - one running RR, the other
"spewing" (running a STREAM test). Folks can probably guess when the
STREAM test started and ended:
raj at tardy:~/netperf2_trunk$ src/netperf -t omni -D 1 -H s7 -l 30 -- -d rr
OMNI TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to s7.cup.hp.com (16.89.132.27) port 0 AF_INET : demo
Interim result: 3757.61 Trans/s over 1.00 seconds
Interim result: 3769.75 Trans/s over 1.00 seconds
Interim result: 3724.28 Trans/s over 1.01 seconds
Interim result: 3770.74 Trans/s over 1.00 seconds
Interim result: 3761.83 Trans/s over 1.00 seconds
Interim result: 3757.62 Trans/s over 1.00 seconds
Interim result: 3729.01 Trans/s over 1.01 seconds
Interim result: 3752.77 Trans/s over 1.00 seconds
Interim result: 3751.26 Trans/s over 1.00 seconds
Interim result: 2806.23 Trans/s over 1.34 seconds
Interim result: 903.67 Trans/s over 3.11 seconds
Interim result: 965.98 Trans/s over 1.03 seconds
Interim result: 729.16 Trans/s over 1.32 seconds
Interim result: 948.56 Trans/s over 1.00 seconds
Interim result: 678.77 Trans/s over 1.40 seconds
Interim result: 913.86 Trans/s over 1.00 seconds
Interim result: 1622.61 Trans/s over 1.00 seconds
Interim result: 3820.19 Trans/s over 1.00 seconds
Interim result: 3794.34 Trans/s over 1.01 seconds
Interim result: 3803.88 Trans/s over 1.00 seconds
Interim result: 3807.65 Trans/s over 1.00 seconds
Interim result: 3780.38 Trans/s over 1.01 seconds
Interim result: 3785.74 Trans/s over 1.00 seconds
Interim result: 3773.94 Trans/s over 1.00 seconds
Interim result: 3727.44 Trans/s over 1.01 seconds
Interim result: 3743.35 Trans/s over 1.00 seconds
Local       Local       Remote      Remote      Request  Response  Elapsed  Throughput  Throughput
Send Socket Recv Socket Recv Socket Send Socket Size     Size      Time                 Units
Size        Size        Size        Size        Bytes    Bytes     (sec)
Final       Final       Final       Final

16384       87380       87380       16384       1        1         30.00    2792.25     Trans/s
raj at tardy:~/netperf2_trunk$ src/netperf -t omni -H s7 -D 1
OMNI TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to s7.cup.hp.com (16.89.132.27) port 0 AF_INET : demo
Interim result: 779.29 10^6bits/s over 1.01 seconds
Interim result: 739.22 10^6bits/s over 1.05 seconds
Interim result: 773.96 10^6bits/s over 1.00 seconds
Interim result: 771.44 10^6bits/s over 1.00 seconds
Interim result: 803.50 10^6bits/s over 1.00 seconds
Interim result: 807.18 10^6bits/s over 1.00 seconds
Interim result: 820.04 10^6bits/s over 1.00 seconds
Interim result: 831.19 10^6bits/s over 1.00 seconds
Interim result: 800.17 10^6bits/s over 1.04 seconds
Local       Local       Local    Elapsed  Throughput  Throughput
Send Socket Send Socket Send     Time                 Units
Size        Size        Size     (sec)
Final       Final

646400      646400      16384    10.00    793.36      10^6bits/s
happy benchmarking,
rick jones
I'd never flog a dead horse :)
As for why the interim results did not always come out every second:
that stems from the algorithm used to decide when to emit interim
results when netperf is ./configured with --enable-demo and -D is used.
To avoid making a gettimeofday() call in front of every socket call, it
guesses how many "operations" will complete in the demo interval and
only checks the clock that often. A big drop in the number of
operations per interval means the old guess was too large, which
produces those spikes of longer intervals.
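Roughly, the idea looks like the sketch below - my paraphrase of the
approach, not the actual netperf source, and the variable names are
illustrative only:

/* Rough sketch of --enable-demo style interim reporting: instead of
 * calling gettimeofday() around every send/recv, guess how many
 * operations fit in one demo interval and only check the clock after
 * that many have completed.  Illustrative names, not netperf code. */
#include <stdio.h>
#include <sys/time.h>

static double demo_interval   = 1.0;  /* seconds between interim reports */
static double units_per_check = 1.0;  /* current guess of ops per interval */
static double units_done;             /* ops completed since last report */
static struct timeval last_report;    /* assume set when the test starts */

void demo_tick(void)
{
        units_done += 1.0;
        if (units_done < units_per_check)
                return;                /* no clock check yet */

        struct timeval now;
        gettimeofday(&now, NULL);
        double elapsed = (now.tv_sec  - last_report.tv_sec) +
                         (now.tv_usec - last_report.tv_usec) / 1e6;

        if (elapsed >= demo_interval) {
                printf("Interim result: %g ops/s over %.2f seconds\n",
                       units_done / elapsed, elapsed);
                /* Re-guess how many ops fit in one interval.  If the
                 * rate just dropped sharply, the old, too-large guess
                 * is what produced the longer-than-a-second intervals. */
                units_per_check = units_done * (demo_interval / elapsed);
                units_done  = 0.0;
                last_report = now;
        } else {
                /* Interval not over yet: scale the guess up so the next
                 * clock check lands nearer the interval boundary. */
                units_per_check *= demo_interval / elapsed;
        }
}

In other words, when the STREAM test suddenly takes the bandwidth, the
RR side is still waiting to complete the old, too-optimistic operation
count before it looks at the clock again - hence the 1.3 to 3.1 second
intervals in the output above.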
> - Jonathan