From: Rick Jones <rick.jones2@hp.com>
To: Jonathan Morton
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Progress with latency-under-load tool
Date: Wed, 16 Mar 2011 10:12:25 -0700
Message-ID: <1300295545.2087.2254.camel@tardy>
In-Reply-To: <0D59AD34-AA64-4376-BB8E-58C5D378F488@gmail.com>

On Wed, 2011-03-16 at 04:31 +0200, Jonathan Morton wrote:
> I'm finally getting a handle on sockets programming again - the API
> has actually *changed* since I last used it - so the testing tool I
> was talking about is starting to actually do semi-useful things.
>
> For example, just over the localhost "connection", maxRTT is already
> almost 2ms with no load. That's CPU scheduling latency. The tool
> prints this out as "Link Responsiveness: 556", since it displays Hz
> in order to give marketing-friendly "bigger is better" numbers.
>
> Now to write a couple of lovely functions called "spew" and "chug".
> I'll let you guess what they do...
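In case the arithmetic behind that figure isn't obvious: 556 is
presumably just the reciprocal of the roughly 1.8 ms maxRTT. A minimal
sketch of that conversion, assuming seconds in and Hz out (the names
and the sample value are invented here, not taken from Jonathan's
tool):

#include <stdio.h>

/* Hedged sketch of the Hz display described in the quote above --
 * variable names and the sample maxRTT are assumptions. */
int main(void)
{
    double max_rtt_sec = 0.0018;  /* "almost 2ms" over localhost */
    /* The reciprocal of the worst-case RTT gives the bigger-is-better
     * number: 1 / 0.0018 s is about 556 Hz. */
    printf("Link Responsiveness: %.0f\n", 1.0 / max_rtt_sec);
    return 0;
}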
FWIW, here is the output of two netperfs - one running RR, the other
"spewing" (running STREAM). Folks can probably guess when the STREAM
test started and ended:

raj@tardy:~/netperf2_trunk$ src/netperf -t omni -D 1 -H s7 -l 30 -- -d rr
OMNI TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to s7.cup.hp.com (16.89.132.27) port 0 AF_INET : demo
Interim result: 3757.61 Trans/s over 1.00 seconds
Interim result: 3769.75 Trans/s over 1.00 seconds
Interim result: 3724.28 Trans/s over 1.01 seconds
Interim result: 3770.74 Trans/s over 1.00 seconds
Interim result: 3761.83 Trans/s over 1.00 seconds
Interim result: 3757.62 Trans/s over 1.00 seconds
Interim result: 3729.01 Trans/s over 1.01 seconds
Interim result: 3752.77 Trans/s over 1.00 seconds
Interim result: 3751.26 Trans/s over 1.00 seconds
Interim result: 2806.23 Trans/s over 1.34 seconds
Interim result: 903.67 Trans/s over 3.11 seconds
Interim result: 965.98 Trans/s over 1.03 seconds
Interim result: 729.16 Trans/s over 1.32 seconds
Interim result: 948.56 Trans/s over 1.00 seconds
Interim result: 678.77 Trans/s over 1.40 seconds
Interim result: 913.86 Trans/s over 1.00 seconds
Interim result: 1622.61 Trans/s over 1.00 seconds
Interim result: 3820.19 Trans/s over 1.00 seconds
Interim result: 3794.34 Trans/s over 1.01 seconds
Interim result: 3803.88 Trans/s over 1.00 seconds
Interim result: 3807.65 Trans/s over 1.00 seconds
Interim result: 3780.38 Trans/s over 1.01 seconds
Interim result: 3785.74 Trans/s over 1.00 seconds
Interim result: 3773.94 Trans/s over 1.00 seconds
Interim result: 3727.44 Trans/s over 1.01 seconds
Interim result: 3743.35 Trans/s over 1.00 seconds

Local       Local       Remote      Remote      Request Response Elapsed Throughput Throughput
Send Socket Recv Socket Recv Socket Send Socket Size    Size     Time               Units
Size        Size        Size        Size        Bytes   Bytes    (sec)
Final       Final       Final       Final

16384       87380       87380       16384       1       1        30.00   2792.25    Trans/s

raj@tardy:~/netperf2_trunk$ src/netperf -t omni -H s7 -D 1
OMNI TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to s7.cup.hp.com (16.89.132.27) port 0 AF_INET : demo
Interim result: 779.29 10^6bits/s over 1.01 seconds
Interim result: 739.22 10^6bits/s over 1.05 seconds
Interim result: 773.96 10^6bits/s over 1.00 seconds
Interim result: 771.44 10^6bits/s over 1.00 seconds
Interim result: 803.50 10^6bits/s over 1.00 seconds
Interim result: 807.18 10^6bits/s over 1.00 seconds
Interim result: 820.04 10^6bits/s over 1.00 seconds
Interim result: 831.19 10^6bits/s over 1.00 seconds
Interim result: 800.17 10^6bits/s over 1.04 seconds

Local       Local       Local   Elapsed Throughput Throughput
Send Socket Send Socket Send    Time               Units
Size        Size        Size    (sec)
Final       Final

646400      646400      16384   10.00   793.36     10^6bits/s

happy benchmarking,

rick jones

I'd never flog a dead horse :)

As for why the interim results did not always arrive every second:
that stems from the algorithm used to decide when to emit them when
netperf is ./configured with --enable-demo and -D is given. To avoid
making a gettimeofday() call in front of every socket call, it guesses
how many "operations" will complete in the demo interval and only
checks the clock after that many have finished. A big drop in the
number of operations per interval therefore shows up as those spikes
of longer reporting intervals - the sketch below illustrates the idea.
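A hedged sketch of that amortized-clock technique - not netperf's
actual implementation, and every name in it is invented: guess how
many transactions fit in one reporting interval, check the clock only
after that many have completed, and rescale the guess from what was
actually observed. When the transaction rate collapses, as it does
here when the STREAM test starts, the old too-large guess makes the
next clock check late, hence interim lines covering 1.3 or even 3.1
seconds.

/*
 * Sketch of interval-guessed interim reporting; build with e.g.
 * "cc -O2 demo_sketch.c -o demo_sketch" (file name assumed).
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

static double now_sec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

/* Stand-in for one request/response transaction (~0.5 ms). */
static void do_one_transaction(void)
{
    usleep(500);
}

int main(void)
{
    const double interval = 1.0; /* like -D 1: report about once a second */
    double units = 16.0;         /* guess: transactions per clock check */
    double start = now_sec();
    double last = start;
    long ops = 0;

    while (now_sec() - start < 10.0) {
        /* Run a whole batch without touching the clock. */
        for (long i = 0; i < (long)units; i++) {
            do_one_transaction();
            ops++;
        }
        double now = now_sec();
        double elapsed = now - last;
        if (elapsed >= interval) {
            printf("Interim result: %.2f Trans/s over %.2f seconds\n",
                   ops / elapsed, elapsed);
            /* Rescale so the next batch spans about one interval. If
             * the rate just dropped, this batch ran long and the line
             * above covered more than one second - the "spike". */
            units = (ops / elapsed) * interval;
            if (units < 1.0)
                units = 1.0;
            last = now;
            ops = 0;
        }
    }
    return 0;
}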