General list for discussing Bufferbloat
From: "Eggert, Lars" <lars@netapp.com>
To: "Toke Høiland-Jørgensen" <toke@toke.dk>
Cc: "bloat-JXvr2/1DY2fm6VMwtOF2vx4hnT+Y9+D1@public.gmane.org"
	<public-bloat-JXvr2/1DY2fm6VMwtOF2vx4hnT+Y9+D1@plane.gmane.org>,
	"Eggert,
	Lars" <public-lars-HgOvQuBEEgTQT0dZR+AlfA@plane.gmane.org>
Subject: Re: [Bloat] latency measurement tools?
Date: Fri, 15 Feb 2013 17:11:33 +0000
Message-ID: <D4D47BCFFE5A004F95D707546AC0D7E91F71A068@SACEXCMBX01-PRD.hq.netapp.com>
In-Reply-To: <87half8e9u.fsf@toke.dk>

Thanks, I'll check it out!

Lars

On Feb 14, 2013, at 0:56, Toke Høiland-Jørgensen <toke@toke.dk> wrote:

> "Eggert, Lars" <lars-HgOvQuBEEgTQT0dZR+AlfA@public.gmane.org> writes:
> 
>> what are the preferred measurement tools for latency under load? Are
>> any publicly available or is everyone basically rolling their own?
> 
> Hi Lars
> 
> Usually, people employ netperf and/or iperf to load up a link, and then
> do some sort of latency measurement while the link is loaded (through
> netperf or just straight ping). I'm the author of a Python wrapper that
> handles this in a reproducible way and also incorporates a prototype of
> the Realtime Response Under Load (RRUL) test. The wrapper is available
> here:
> 
> https://github.com/tohojo/netperf-wrapper
> 
> (Or just install it through pip with `pip install netperf-wrapper`).
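
As a rough illustration of the manual approach described above (netperf
loading the link while plain ping records latency), here is a minimal
shell sketch; the host name netperf.example.com and the 60-second
duration are placeholders, and netserver must already be running on the
remote host:

  # Start a 60-second bulk TCP transfer to saturate the link towards
  # the remote host; -t TCP_MAERTS would pull data the other way.
  netperf -H netperf.example.com -t TCP_STREAM -l 60 &

  # While the transfer runs, sample round-trip latency once per second
  # and compare it with the idle-link baseline.
  ping -c 60 netperf.example.com

  wait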
> 
> Also see this page for some example test plots:
> 
> http://www.bufferbloat.net/projects/codel/wiki/RRUL_Rogues_Gallery
> 
> And this one for some general advice on benchmarking (though that is
> mostly relevant at lower bandwidths):
> 
> http://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel
> 
>> (I'm esp. interested in tools that can deal with 10 GbE and faster LAN
>> fabrics.)
> 
> The RRUL test prototype runs four simultaneous TCP streams in each
> direction, which can generally load up pretty much any link on a modern
> Linux kernel (though I haven't tested it personally on high-speed
> networks due to lack of hardware).
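
To make that load pattern concrete, here is a rough shell approximation
of the bulk-transfer part of RRUL using plain netperf: four TCP streams
in each direction for 60 seconds, with ping sampling latency alongside.
It is not the RRUL test itself and reproduces none of its measurement or
plotting machinery; the host name is a placeholder.

  HOST=netperf.example.com

  # Four bulk streams each way, run in the background: TCP_STREAM
  # pushes data to $HOST, TCP_MAERTS pulls data from it.
  for i in 1 2 3 4; do
    netperf -H "$HOST" -t TCP_STREAM -l 60 > /dev/null &
    netperf -H "$HOST" -t TCP_MAERTS -l 60 > /dev/null &
  done

  # Latency under load, sampled once per second for the duration.
  ping -c 60 "$HOST"

  wait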
> 
> Good luck with the benchmarking. :)
> 
> -Toke
> 
> -- 
> Toke Høiland-Jørgensen
> toke@toke.dk


Thread overview: 2+ messages
2013-02-13 18:23 Eggert, Lars
     [not found] ` <87half8e9u.fsf@toke.dk>
2013-02-15 17:11   ` Eggert, Lars [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

  List information: https://lists.bufferbloat.net/postorius/lists/bloat.lists.bufferbloat.net/

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=D4D47BCFFE5A004F95D707546AC0D7E91F71A068@SACEXCMBX01-PRD.hq.netapp.com \
    --to=lars@netapp.com \
    --cc=public-bloat-JXvr2/1DY2fm6VMwtOF2vx4hnT+Y9+D1@plane.gmane.org \
    --cc=public-lars-HgOvQuBEEgTQT0dZR+AlfA@plane.gmane.org \
    --cc=toke@toke.dk \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

Be sure your reply has a Subject: header at the top and a blank line before the message body.