[Bloat] Measuring latency-under-load consistently

Jonathan Morton chromatix99 at gmail.com
Fri Mar 11 17:44:05 PST 2011


On 12 Mar, 2011, at 3:09 am, Rick Jones wrote:

>>> You may be able to get most of what you want with a top-of-trunk netperf
>>> "burst mode" TCP_RR test. It isn't quite an exact match though.
>> 
>> I don't really see how that would get the measurement I want.
> 
> Then one TCP_STREAM (or TCP_MAERTS) and one TCP_RR with all the RTT
> stats and histogram enabled :)

Closer, but that doesn't seem to be self-tuning - and often one bulk flow doesn't fill the pipe any more.

I want to scale up to FTTH and GigE, and down to V.34bis and GPRS - all of which are (still) relevant network technologies today - without any error-prone parameters to enter except the identity of the server.

Ideally it should be robust enough for use by random ISP technicians, regulatory officials and end-users, and the output should be accordingly simple to interpret.

So yes, code reuse is great, but only if it does what I need it to.  I need the practice anyway.  :-)

 - Jonathan



More information about the Bloat mailing list