From: Aaron Wood <woody77@gmail.com>
To: Dave Taht <dave.taht@gmail.com>
Cc: Rich Brown <richb.hanover@gmail.com>,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Reasons to prefer netperf vs iperf?
Date: Sun, 4 Dec 2016 10:12:28 -0800 [thread overview]
Message-ID: <CALQXh-PFjoHjc-2+JnNR6TwR30yNt01454XA3s7ptCEupEY_VQ@mail.gmail.com> (raw)
In-Reply-To: <CAA93jw4bnTWMjkVGwHjxJkK6w3aHKKNZsDuO0Wc8TGwwX0CEJQ@mail.gmail.com>
On Sun, Dec 4, 2016 at 9:13 AM, Dave Taht <dave.taht@gmail.com> wrote:
> On Sun, Dec 4, 2016 at 5:40 AM, Rich Brown <richb.hanover@gmail.com>
> wrote:
> > As I browse the web, I see several sets of performance measurement using
> either netperf or iperf, and never know if either offers an advantage.
> >
> > I know Flent uses netperf by default: what are the reason(s) for
> selecting it? Thanks
> * netperf
> + supports multiple tests in parallel on the same server
> * iperf
> + More widely available
>
Sort of... given the variants, less so. But iperf3 is coded to be quite
portable, and so it's fairly widely available.
It has a pretty good JSON format for the client results, but the server
results are returned only as plain text. And it doesn't report anything
finer-grained than 100ms intervals.
> - I have generally not trusted the results published either - but
> aaron finding that major bug in iperf's udp measurements explains a
> LOT of that. I think.
>
I've found something else with it that I need to write up: with UDP, the
application-layer pacing and the fq socket pacing interact to report a lot
of invalid packet loss. The application pacing is focused on layer-6
goodput, but the fq pacing appears to be enforcing wire rates (or
calculated ethernet rates), so with small packets (64-byte payloads)
it's typical to see ~40% packet loss, as the fq layer discards UDP
frames to cut, say, 100Mbps of application-layer offered load down to
100Mbps of wire rate. I still need to do the tcpdump analysis of that and
get it written up.
> - Has, like, 3-8 non-interoperable versions.
> - is available in java, for example
>
> there *might* be an iperf version worth adopting but I have no idea
> which one it would be.
>
Part of the issue with iperf is that there are two main variants: iperf
and iperf3. iperf3 is currently maintained by the ESnet folks, and their
use-case is wildly different from ours:
- Very high bandwidth (>=100Gbps)
- Latency insensitive (long-running bulk data transfers)
- private networks (jumbo frames are an assumed use)
I'm also happy to take my fork of it (
https://github.com/woody77/iperf) and tune it for our uses. There
are certain aspects that I wouldn't want to dive into changing at the
moment (like the single-threaded nature of the server), but I can easily
bang on the corners, get its defaults better suited to our uses, and
make it behave better when running without offloads. On my test
boxes, it starts to get I/O-limited around 4-5 Gbps when using 1400-byte
UDP payloads. With TCP and TSO, it merrily runs at 9.4Gbps of goodput
over a 10Gbps NIC.
But the application-level and kernel-level "packet rates" are really quite
different at that point. By default it drops 128KB blocks into the
TCP send buffer and lets the TCP stack and offloads do their thing.
At the higher-performance end of things, I think it would benefit from
using sendmmsg()/recvmmsg() on platforms that support them; that would
let it work better with fq pacing at rates of 5Gbps and up.
> I started speccing out a flent-specific netperf/iperf replacement
> *years* ago, (twd), but the enormous amount of money/effort required
> to do it right caused me to dump the project. Also, at the time
> (because of the need for reliable high speed measurements AND for
> measurements on obscure, weak, cpus) my preferred language was going
> to be C, and that too raised the time/money metric into the
> stratosphere.
iperf3's internal structure might be useful for bootstrapping a project
like that. It already has all the application logic infrastructure, and
has a central timer heap (which can be made more efficient/exact), and it
already has a notion of different kinds of workloads. It wouldn't be too
much work to make its tests more modular.
-Aaron