[Bloat] Measuring latency-under-load consistently

Jonathan Morton chromatix99 at gmail.com
Fri Mar 11 19:00:25 EST 2011


I'm currently resurrecting my socket-programming skills (last used almost 10 years ago, when IPv6 really *was* experimental) in the hope of making a usable latency-under-load tester.  It would run in server mode on one host; a client on another host would then be pointed at the server, and after several minutes of churning it would produce some nice round numbers.
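
To make that concrete, here's a rough, untested sketch of the server half; the port number is made up, and a real version would also have to accept and service the bulk-load connections rather than just echoing probes:

/* Untested sketch of the server-mode half: it only echoes the latency
 * probes back to the sender.  The port number is a placeholder. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define PROBE_PORT 8867                /* arbitrary example port */

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    struct sockaddr_in sa;

    if (lfd < 0) {
        perror("socket");
        return 1;
    }
    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    sa.sin_port = htons(PROBE_PORT);

    if (bind(lfd, (struct sockaddr *)&sa, sizeof(sa)) < 0 || listen(lfd, 4) < 0) {
        perror("bind/listen");
        return 1;
    }

    for (;;) {
        /* One client at a time is enough for a sketch; the real tester
         * would also handle the bulk-load connections here. */
        int cfd = accept(lfd, NULL, NULL);
        char buf[4096];
        ssize_t n;

        if (cfd < 0)
            continue;
        while ((n = read(cfd, buf, sizeof(buf))) > 0)
            write(cfd, buf, n);        /* echo probes straight back */
        close(cfd);
    }
}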

It would need to make multiple TCP connections simultaneously: one would be used to measure latency (on TCP_NODELAY-marked sockets), and one or more others would be used to load the network and measure goodput.  It would automatically determine how long to run in order to get a reliable result that can't easily be challenged by (e.g.) an ISP.
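
For the latency connection, I'm thinking of something along these lines; again an untested sketch, with made-up names and the same placeholder port, just to show the TCP_NODELAY ping-pong:

/* Untested sketch of the client-side latency probe: one TCP connection
 * with Nagle disabled, timing a one-byte round trip against the echo
 * server above.  Names and the port number are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

#define PROBE_PORT 8867                /* arbitrary example port */

/* Connect to the server and return a socket with Nagle disabled. */
static int open_probe(const char *server_ip)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    struct sockaddr_in sa;

    if (fd < 0)
        return -1;
    /* Disable Nagle so the tiny probes go out immediately. */
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(PROBE_PORT);
    inet_pton(AF_INET, server_ip, &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* One round trip: send a byte, wait for the echo, return the RTT in
 * seconds (or a negative value on error). */
static double probe_once(int fd)
{
    char c = 'p';
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    if (write(fd, &c, 1) != 1 || read(fd, &c, 1) != 1)
        return -1.0;
    gettimeofday(&t1, NULL);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}

int main(int argc, char **argv)
{
    int fd = open_probe(argc > 1 ? argv[1] : "127.0.0.1");
    double rtt;

    if (fd < 0) {
        fprintf(stderr, "failed to connect probe socket\n");
        return 1;
    }
    rtt = probe_once(fd);
    if (rtt > 0)
        printf("probe RTT: %.3f ms (%.1f Hz)\n", rtt * 1e3, 1.0 / rtt);
    close(fd);
    return 0;
}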

The output metrics would be:

1) Average goodput for uplink and downlink, for single flows and multiple flows, in binary megabytes per second.  Just for laughs, I might also add the equivalent gigabytes-per-month figures.  (There's a rough sketch of the arithmetic for all three numbers after this list.)

2) Maximum latency (in the parallel "interactive" flow) under load, expressed in Hz rather than milliseconds, i.e. as the reciprocal of the worst-case delay.  This gives a number that gets bigger for better performance, which is much easier for laymen to understand.

3) Flow smoothness, measured as the longest gap between successive data arrivals on any continuous flow, also expressed in Hz.  This is an important metric for video and radio streaming, and one on which CUBIC will probably do extremely badly if there are large buffers in the path (without AQM or ECN).
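
Here's a rough sketch of the arithmetic I have in mind for all three numbers; the sample values are invented purely to show the conversions:

/* Illustrative arithmetic only: the raw measurements below are made up. */
#include <stdio.h>

int main(void)
{
    /* Example raw measurements from a hypothetical run. */
    double bytes_received  = 150.0e6;  /* payload bytes on the bulk flows */
    double elapsed_seconds = 60.0;     /* duration of the loaded period   */
    double worst_latency_s = 0.850;    /* worst probe RTT while loaded    */
    double worst_gap_s     = 2.400;    /* longest stall on any bulk flow  */

    /* 1) Goodput in binary megabytes per second, plus the equivalent
     *    (decimal) gigabytes-per-month figure. */
    double goodput_MiBps = bytes_received / elapsed_seconds / (1024.0 * 1024.0);
    double gb_per_month  = bytes_received / elapsed_seconds * 86400.0 * 30.0 / 1e9;

    /* 2) and 3) Latency and smoothness reported as frequencies: the
     *    reciprocal of the worst-case time, so bigger is better. */
    double latency_hz    = 1.0 / worst_latency_s;
    double smoothness_hz = 1.0 / worst_gap_s;

    printf("goodput:    %.2f MiB/s (~%.0f GB/month)\n", goodput_MiBps, gb_per_month);
    printf("latency:    %.2f Hz (worst RTT %.0f ms)\n", latency_hz, worst_latency_s * 1e3);
    printf("smoothness: %.2f Hz (worst gap %.0f ms)\n", smoothness_hz, worst_gap_s * 1e3);
    return 0;
}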

Any thoughts on this idea?

 - Jonathan



