From: Jonathan Morton <chromatix99@gmail.com>
To: rick.jones2@hp.com
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Measuring latency-under-load consistently
Date: Sat, 12 Mar 2011 02:45:45 +0200
Message-ID: <50E37E64-A21F-4503-83D8-DE81AA65F0C2@gmail.com>
In-Reply-To: <1299888781.2087.2055.camel@tardy>
On 12 Mar, 2011, at 2:13 am, Rick Jones wrote:
> On Sat, 2011-03-12 at 02:00 +0200, Jonathan Morton wrote:
>> I'm currently resurrecting my socket-programming skills (last used
>> almost 10 years ago when IPv6 really *was* experimental) in the hope
>> of making a usable latency-under-load tester. This could be run in
>> server-mode on one host, and a client on another host could then be
>> pointed at the server, followed by several minutes of churning and
>> some nice round numbers.
>>
>> It would need to make multiple TCP connections simultaneously, one of
>> which would be used to measure latency (using NODELAY marked sockets),
>> and one or more others used to load the network and measure goodput.
>> It would automatically determine how long to run in order to get a
>> reliable result that can't easily be challenged by (eg.) an ISP.
>
> Why would it require multiple TCP connections? Only if none of the
> connections have data flowing in the other direction, and your
> latency-measuring one would need that anyway.
Because the latency within a bulk flow is not as interesting as the latency experienced by interactive or real-time flows sharing the same link as a bulk flow (or three). In the presence of a re-ordering AQM scheme (trivially, SFQ), the two are not the same.
Suppose, for example, that you're downloading the latest Ubuntu DVD and you suddenly think of something to look up on Wikipedia. With the 30-second latencies I have personally experienced on some non-AQM links under load, that is intolerably slow. With something as simple as SFQ on that same queue, it would be considerably better, because the new packets could bypass the queue associated with the old flow - but measuring only the old flow wouldn't show that.
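To make that concrete, here's a toy sketch of SFQ's core idea - purely illustrative, with hypothetical types and names, nothing like the kernel's actual implementation: packets are hashed by flow into separate queues, and the queues are served round-robin, so a new sparse flow gets served on the next rotation instead of waiting behind the bulk flow's backlog.

    /* Toy model of SFQ: one queue per flow hash, served round-robin.
     * Hypothetical sketch, for illustration only. */
    #include <stddef.h>
    #include <stdint.h>

    #define NQUEUES 128

    struct pkt   { uint32_t flow_hash; struct pkt *next; };
    struct queue { struct pkt *head, *tail; };

    static struct queue queues[NQUEUES];
    static unsigned rr;  /* round-robin position */

    static void enqueue(struct pkt *p)
    {
        struct queue *q = &queues[p->flow_hash % NQUEUES];
        p->next = NULL;
        if (q->tail) q->tail->next = p; else q->head = p;
        q->tail = p;
    }

    static struct pkt *dequeue(void)
    {
        for (unsigned i = 0; i < NQUEUES; i++) {
            struct queue *q = &queues[rr];
            rr = (rr + 1) % NQUEUES;       /* advance regardless */
            if (q->head) {                 /* serve first non-empty queue */
                struct pkt *p = q->head;
                q->head = p->next;
                if (!p->next) q->tail = NULL;
                return p;
            }
        }
        return NULL;  /* link idle: every queue empty */
    }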
Note that the occasional packet losses on a plain SFQ drop-tail queue would still show up as extremely long maximum inter-arrival delays on the bulk flow, and this is captured by the third metric (flow smoothness).
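A sketch of how that smoothness metric might be collected - a hypothetical helper, not the tool itself: timestamp every successful read on the bulk socket and keep the largest gap.

    /* Track the maximum inter-arrival gap on a bulk TCP socket.
     * Hypothetical sketch; POSIX, may need -lrt on older libcs. */
    #include <time.h>
    #include <unistd.h>

    static double now_s(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    double max_interarrival_gap(int fd)
    {
        char buf[65536];
        double last = now_s(), max_gap = 0.0;
        while (read(fd, buf, sizeof buf) > 0) {
            double t = now_s();
            if (t - last > max_gap)
                max_gap = t - last;
            last = t;
        }
        return max_gap;  /* worst stall seen over the flow's life */
    }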
> Also, while NODELAY (TCP_NODELAY I presume) might be interesting with
> something that tried to have multiple, sub-MSS transactions in flight at
> one time, it won't change anything about how the packets flow on the
> network - TCP_NODELAY has no effect beyond the TCP code running the
> connection associated with the socket for that connection.
I'm essentially going to be running a back-and-forth ping inside a TCP session. Nagle's algorithm can, if it glitches, add hundreds of milliseconds to that, which can be very material - e.g. when measuring a LAN or Wi-Fi link. I wouldn't set TCP_NODELAY on the bulk flows.
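Concretely, the probe connection would get Nagle switched off with the standard socket option (error handling omitted):

    /* Disable Nagle on the latency-probe socket only; the bulk
     * flows keep the default so they coalesce writes normally. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static int disable_nagle(int fd)
    {
        int one = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
    }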
Why ping inside a TCP session, rather than using ICMP? Because I want to bypass any ICMP-specific optimisations and measure what applications can actually use.
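A minimal sketch of that in-band ping, assuming a server that simply echoes each probe byte back on the same connection:

    /* One round trip over an established TCP connection that has
     * TCP_NODELAY set; the peer is assumed to echo the byte back.
     * Returns the RTT in seconds (error handling omitted). */
    #include <time.h>
    #include <unistd.h>

    double tcp_ping(int fd)
    {
        struct timespec t0, t1;
        char probe = 'x', echo;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        write(fd, &probe, 1);  /* goes out immediately: Nagle is off */
        read(fd, &echo, 1);    /* block until the server echoes it */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }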
> You may be able to get most of what you want with a top-of-trunk netperf
> "burst mode" TCP_RR test. It isn't quite an exact match though.
I don't really see how that would get the measurement I want.
- Jonathan