[Bloat] Reasons to prefer netperf vs iperf?

Dave Taht dave.taht at gmail.com
Sun Dec 4 12:13:25 EST 2016


On Sun, Dec 4, 2016 at 5:40 AM, Rich Brown <richb.hanover at gmail.com> wrote:
> As I browse the web, I see several sets of performance measurement using either netperf or iperf, and never know if either offers an advantage.
>
> I know Flent uses netperf by default: what are the reason(s) for selecting it? Thanks.

* Netperf

+ netperf is the preferred network stress tool of the Linux kernel
devs (typical invocations below)
+ the maintainer is responsive and capable
+ the code is very fast, with essentially no compromises on speed or
accuracy; we've successfully used it at 40GigE
+ the code is also very portable
+ there is one canonical, explicitly versioned codebase. When you use
netperf, you know you are using netperf.

- netperf has a pre-OSI (1993) license, which makes default inclusion
in Debian impossible and is sometimes dicey elsewhere
- netperf has no way to send timestamps within flows
- it is very hard to add new tests to netperf
- its test negotiation protocol is less than fully documented and can
break between releases (and I'm being kind here)
- it could use better real-time support
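
(For reference, a typical netperf run looks something like this -
netserver on the far machine, netperf on the near one; the hostname
here is made up:

  netserver                                # on the far machine
  netperf -H server.example.com -l 30      # 30s TCP_STREAM, the default test
  netperf -H server.example.com -t TCP_RR  # latency-oriented request/response

These are the same tests flent drives, several at a time, to build
its plots.)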

* iperf
+ More widely available
- "Academic" code, often with papers not citing the specific version used
- I have generally not trusted published iperf results either - but
Aaron finding that major bug in iperf's UDP measurements explains a
lot of that, I think
- Has something like 3-8 non-interoperable versions (see below)
- one of those is written in Java, for example
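
To illustrate the interoperability mess: iperf2 and iperf3, for
example, speak different control protocols, so a client of one cannot
talk to a server of the other:

  iperf -s            # iperf2 server
  iperf -c somehost   # iperf2 client; only works against an iperf2 server
  iperf3 -s           # iperf3 server
  iperf3 -c somehost  # iperf3 client; only works against an iperf3 server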

there *might* be an iperf version worth adopting, but I have no idea
which one it would be.

I started speccing out a flent-specific netperf/iperf replacement
*years* ago (twd), but the enormous amount of money/effort required
to do it right caused me to dump the project. Also, at the time
(because of the need for reliable high-speed measurements AND for
measurements on obscure, weak cpus) my preferred language was going
to be C, and that too pushed the time/money estimate into the
stratosphere.

I had some hope of leveraging owamp one day, but I have more hope now
of leveraging the rapidly maturing infrastructure around go, http2,
and quic.
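
As a sketch of the appeal: a toy go bulk-transfer probe - emphatically
not twd, and assuming a made-up discard-style server address - fits in
well under a page:

// toystream.go: toy throughput probe, a sketch only. Assumes
// something discard-like is listening at the address in os.Args[1]
// (e.g. "server.example.com:9000" - a hypothetical endpoint).
package main

import (
    "fmt"
    "net"
    "os"
    "time"
)

func main() {
    conn, err := net.Dial("tcp", os.Args[1])
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer conn.Close()

    buf := make([]byte, 64*1024) // 64KB writes
    start := time.Now()
    var sent int64
    for time.Since(start) < 10*time.Second {
        n, err := conn.Write(buf)
        if err != nil {
            break
        }
        sent += int64(n)
    }
    secs := time.Since(start).Seconds()
    fmt.Printf("%.2f Mbit/s over %.1fs\n", float64(sent)*8/1e6/secs, secs)
}

Everything that makes netperf trustworthy - precise timing on weak
cpus, a negotiated test protocol, real statistics - is exactly what a
page like that leaves out.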

There are other tools (ndt, for example), but getting past those
high-speed and weird low-end cpu requirements was a showstopper for
them too, and remains so.

I've been fiddling with esr's new "loccount" tool - both to teach
myself go and to more deeply understand the (deeply flawed) COCOMO
software development model - and according to it, replacing netperf
would cost:

dave at nemesis:~/git/netperf$ loccount -c .
all            50968 (100.00%) in 112 files
c              41739 (81.89%) in 48 files
shell           5125 (10.06%) in 22 files
python          2376 (4.66%) in 5 files
m4               895 (1.76%) in 11 files
autotools        767 (1.50%) in 9 files
awk               66 (0.13%) in 2 files
Total Physical Source Lines of Code (SLOC)                = 50968
Development Effort Estimate, Person-Years (Person-Months) = 12.41 (148.89)
 (Basic COCOMO model, Person-Months = 2.40 * (KSLOC**1.05))
Schedule Estimate, Years (Months)                         = 1.39 (16.74)
 (Basic COCOMO model, Months = 2.50 * (person-months**0.38))
Estimated Average Number of Developers (Effort/Schedule)  = 8.90
Total Estimated Cost to Develop                           = $1798148
 (average salary = $60384/year, overhead = 2.40).
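
If you want to poke at those numbers, the Basic COCOMO arithmetic in
that output is simple enough to redo in a few lines of go, with the
constants taken straight from the output above:

// Basic COCOMO (organic mode), constants as printed by loccount.
package main

import (
    "fmt"
    "math"
)

func main() {
    ksloc := 50.968                     // netperf: 50968 SLOC
    pm := 2.40 * math.Pow(ksloc, 1.05)  // person-months, ~148.89
    sched := 2.50 * math.Pow(pm, 0.38)  // schedule, ~16.74 months
    devs := pm / sched                  // ~8.90 developers
    cost := pm / 12 * 60384 * 2.40      // avg salary * overhead, ~$1.8M
    fmt.Printf("pm=%.2f sched=%.2f devs=%.2f cost=$%.0f\n",
        pm, sched, devs, cost)
}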

...

loccount is a remarkable improvement in speed over "sloccount" (aside
from I/O, the code is "embarrassingly parallel" and scales beautifully
with the number of cores), and it has thus far been quite useful as I
finally begin to grok go.
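
The shape of that parallelism is the classic go worker-pool pattern.
This is not loccount's actual code, just a toy line counter with one
worker goroutine per core to show the general idea:

// toy parallel line counter - the pattern, not loccount's code.
package main

import (
    "bufio"
    "fmt"
    "os"
    "runtime"
    "sync"
)

func countLines(path string) int {
    f, err := os.Open(path)
    if err != nil {
        return 0
    }
    defer f.Close()
    n := 0
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        n++
    }
    return n
}

func main() {
    paths := make(chan string)
    counts := make(chan int)
    var wg sync.WaitGroup
    for i := 0; i < runtime.NumCPU(); i++ { // one worker per core
        wg.Add(1)
        go func() {
            defer wg.Done()
            for p := range paths {
                counts <- countLines(p)
            }
        }()
    }
    go func() {
        for _, p := range os.Args[1:] {
            paths <- p
        }
        close(paths)
        wg.Wait()
        close(counts)
    }()
    total := 0
    for c := range counts {
        total += c
    }
    fmt.Println(total, "lines")
}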

Get it at:

git clone https://gitlab.com/esr/loccount

I hope the effort of actually understanding the COCOMO model will one
day pay off in a model that more accurately captures theory costs,
development time, and maintenance and refactoring costs.

(That said, if anyone out there is aware of the state of the art in
this area - and has code - I'd appreciate a pointer.) What I'd wanted
to do was leverage the oft-published lwn stats on kernel development
(and churn) and try to see what that costs.

(I'm mostly amusing myself by applying age-old techniques to go to
speed up loccount, rather than worrying about the model. Everybody
needs to just relax and do something like this once in a while.)




> Rich



-- 
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org

