From: Rick Jones <rick.jones2@hp.com>
To: Jonathan Morton <chromatix99@gmail.com>
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Measuring latency-under-load consistently
Date: Fri, 11 Mar 2011 16:13:01 -0800
Message-ID: <1299888781.2087.2055.camel@tardy>
In-Reply-To: <16808EAB-2F52-4D32-8A8C-2AE09CD4D103@gmail.com>
On Sat, 2011-03-12 at 02:00 +0200, Jonathan Morton wrote:
> I'm currently resurrecting my socket-programming skills (last used
> almost 10 years ago when IPv6 really *was* experimental) in the hope
> of making a usable latency-under-load tester. This could be run in
> server-mode on one host, and then as a client on another host could be
> pointed at the server, followed by several minutes of churning and
> some nice round numbers.
>
> It would need to make multiple TCP connections simultaneously, one of
> which would be used to measure latency (using NODELAY marked sockets),
> and one or more others used to load the network and measure goodput.
> It would automatically determine how long to run in order to get a
> reliable result that can't easily be challenged by (eg.) an ISP.
Why would it require multiple TCP connections? You would only need more
than one if none of the connections had data flowing in the other
direction, and your latency-measuring connection would need that
bidirectional traffic anyway.
Also, while NODELAY (TCP_NODELAY I presume) might be interesting with
something that tried to have multiple sub-MSS transactions in flight at
one time, it won't change anything about how the packets flow on the
network - TCP_NODELAY has no effect beyond the TCP code running the
connection associated with that socket.
If you only ever have one transaction outstanding at a time, regardless
of size, and TCP_NODELAY improves the latency, it means the TCP stack is
broken with respect to how it interprets the Nagle algorithm - likely as
not it is applying it segment by segment rather than user send by user
send.
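To make that concrete, the latency probe you describe is really just a
strict request/response loop over one connection - something along the
lines of the sketch below (the host name, port and message size are
placeholders, and error handling is pared down to almost nothing):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <netinet/tcp.h>        /* TCP_NODELAY */
#include <netdb.h>

int main(void)
{
        struct addrinfo hints, *res;
        struct timeval start, end;
        char buf[64];           /* one small, sub-MSS "transaction" */
        int one = 1, s, i;

        memset(buf, 0, sizeof(buf));
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("server.example.com", "9000", &hints, &res) != 0)
                exit(1);
        s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0)
                exit(1);

        /* only matters if more than one sub-MSS send can be in flight;
           with a strict one-at-a-time pattern Nagle should not delay us */
        setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

        for (i = 0; i < 100; i++) {
                gettimeofday(&start, NULL);
                if (send(s, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf))
                        break;
                if (recv(s, buf, sizeof(buf), MSG_WAITALL) != (ssize_t)sizeof(buf))
                        break;
                gettimeofday(&end, NULL);
                printf("rtt %ld usec\n",
                       (end.tv_sec - start.tv_sec) * 1000000L +
                       (end.tv_usec - start.tv_usec));
        }
        close(s);
        freeaddrinfo(res);
        return 0;
}

The server side just echoes the 64 bytes back. The point being that with
exactly one of those transactions outstanding, the setsockopt() line
should make no measurable difference on a sane stack.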
> The output metrics would be:
>
> 1) Average goodput for uplink and downlink, for single flows and
> multiple flows, in binary megabytes per second. Just for laughs, I
> might also add the equivalent gigabytes-per-month figures.
>
> 2) Maximum latency (in the parallel "interactive" flow) under load,
> expressed in Hz rather than milliseconds. This gives a number that
> gets bigger for better performance, which is much easier for laymen to
> understand.
>
> 3) Flow smoothness, measured as the maximum time between sequential
> received data for any continuous flow, also expressed in Hz. This is
> an important metric for video and radio streaming, and one which CUBIC
> will probably do extremely badly at if there are large buffers in the
> path (without AQM or Blackpool).
>
> Any thoughts on this idea?
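(An aside on the Hz figures, just to be sure I follow: they are simply
the reciprocals of the worst-case times, so a 50 ms maximum latency
under load would be reported as 20 Hz, and a 200 ms worst-case gap
between received data as 5 Hz - bigger being better.)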
You may be able to get most of what you want with a top-of-trunk netperf
"burst mode" TCP_RR test. It isn't quite an exact match though.
The idea is to ./configure netperf for burst mode and histogram, and
probably the "omni" tests to get more complete statistics on RTT
latency, then run a burst-mode TCP_RR test - if doing "upload", with a
large request size and a single-byte response size; if doing
"download", with a single-byte request size and a large response size.
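For illustration, that would look something like the commands below -
the host name, burst count, sizes and run length are just placeholders,
and the exact option spellings depend on which version of the bits you
grab, so check the manual for yours:

./configure --enable-burst --enable-histogram
make

# "upload": large requests, single-byte responses, several in flight
netperf -H server.example.com -t TCP_RR -l 60 -- -b 12 -r 65536,1 -D

# "download": single-byte requests, large responses
netperf -H server.example.com -t TCP_RR -l 60 -- -b 12 -r 1,65536 -D

The test-specific -b sets how many additional transactions are kept in
flight (it is only compiled in when netperf was configured with
--enable-burst), -r sets the request and response sizes, and -D sets
TCP_NODELAY so the small messages aren't coalesced when several are
outstanding. With the omni test you can ask for the latency statistics
directly via output selectors, along the lines of:

netperf -H server.example.com -t omni -l 60 -- -d rr -b 12 -r 1,65536 \
        -o THROUGHPUT,MEAN_LATENCY,MAX_LATENCY,P99_LATENCY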
If you want to hear more, contact me off-list and I can wax philosophic
on it.
rick jones
> - Jonathan
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat