Some results of the latency under load tool

Dave Täht d at taht.net
Sun Mar 20 17:50:41 EDT 2011


Jonathan Morton <chromatix99 at gmail.com> writes:

> Here are some numbers I got from a test over a switched 100base-TX LAN:
>
>     Upload Capacity: 1018 KiB/s
>   Download Capacity: 2181 KiB/s
> Link Responsiveness: 2 Hz
>     Flow Smoothness: 1 Hz
>
> Horrible, isn't it?  I deliberately left these machines with standard
> configurations in order to show that.

I successfully compiled and ran this on both a Debian squeeze/ARM-based
platform (an OpenRD) and on 64-bit Ubuntu 10.10. It generated some
warnings on 64-bit.

Run on my partially debloated wireless/wired network[1][2] via the
openrd->nano-m->nano-m->davepc path (180Mbit wireless), over IPv6 (wow,
transparent IPv6! Thanks!), server=davepc:

Scenario 1: 0 uploads, 1 downloads... 8024 KiB/s down, 12.71 Hz smoothness
Scenario 2: 1 uploads, 0 downloads... 6526 KiB/s up, 5.52 Hz smoothness
Scenario 3: 0 uploads, 2 downloads... 6703 KiB/s down, 19.07 Hz smoothness
Scenario 4: 1 uploads, 1 downloads... 3794 KiB/s up, 4084 KiB/s down, 1.82 Hz smoothness
Scenario 5: 2 uploads, 0 downloads... 7503 KiB/s up, 4.11 Hz smoothness
Scenario 6: 0 uploads, 3 downloads... 7298 KiB/s down, 2.08 Hz smoothness
Scenario 7: 1 uploads, 2 downloads... 4323 KiB/s up, 4690 KiB/s down, 1.38 Hz smoothness

My pure-wireless, more-debloated network is down right now.[2]

I'm still watching it run other scenarios.

Comments/Suggestions:

A) From a "quicker results" perspective, how about running the heaviest
network tests first, then the lightest, then the rest? :)

B) Due to the dependence on the GNU Scientific Library, it seems
unlikely this will build on OpenWrt. Also, if there is heavy math
anywhere in this app, it needs to be done outside the testing loop: on a
processor without an FPU (such as most ARMs), floating-point emulation
will skew the test.

It ate 17% of CPU on the ARM, and ~8% of CPU on davepc.

Similarly, there are RNGs available in the Linux kernel and from
hardware sources; I'm not sure to what extent you need randomness (I
only looked at the code briefly).

C) Do you have this in a repository somewhere? (I use github.) I would
certainly love to have more tools like this living together in
harmony. It needn't be github.

D) It's never been clear to me what resolution gettimeofday() has on
most platforms; the POSIX clock_gettime() and the corresponding

clock_getres(clockid_t clk_id, struct timespec *res);

can make that explicit. CLOCK_MONOTONIC also discards any clock
adjustments in the interval, for even higher potential accuracy (over
the interval, anyway, though not against wall-clock time). Given the
period of this test, do you think the precision can only be 1ms?

It's certainly my hope that most systems today implement POSIX timers.

I can produce a patch if you want.

Thanks for such an interesting tool!

-- 
Dave Taht
http://nex-6.taht.net

[1] The openrd has a very small txqueuelen (4) and a dma tx queue of 20
on the GigE card, which is connected at 100Mbit to the nano-m5s, which
DO NOT have my ath9k patch or eBDP enabled but do have a small
txqueuelen. The laptop is running debloat-testing (txqueuelen 4, dma tx
queue 64), connected via 100Mbit to the other nano-m5 radio.

TCP cubic throughout. ECN and SACK/DSACK are enabled, but there is no
shaping in place.

The wireless radios in between DO tend to show bloat under worse
conditions than this test.

(I note that debloat-testing's patches to the e1000-series ethernet
driver do not yet allow dma tx queues of less than 64.)

[2] for my network diagram see:
 http://www.bufferbloat.net/projects/bloat/wiki/Experiment_-_Bloated_LAGN_vs_debloated_WNDR5700

This is an old test; I need to repeat it with debloat-testing.



More information about the Bloat-devel mailing list