From: Rick Jones
To: codel@lists.bufferbloat.net
Date: Wed, 25 Apr 2012 13:39:59 -0700
Subject: [Codel] Recent top-of-trunk netperf change
Message-ID: <4F98611F.8010900@hp.com>
List-Id: CoDel AQM discussions

On the off chance that netperf gets used as part of evaluating CoDel, I've enhanced the "demo mode" functionality to respond a bit more quickly to drops in available bandwidth. The change has just been checked in to the top of trunk at http://www.netperf.org/svn/netperf2/trunk .

Normally, when the user tells netperf she wants output every N time units, netperf calculates and updates an estimate of how many units of work (bytes transferred, transactions) would take place in that length of time.
It then checks the elapsed time after that many units of work have been performed to see whether it is time to emit an interim result. This goes back to the days when a gettimeofday() call was comparatively expensive (and it can remain so in some places, IIRC). Now, if the demo interval is expressed as a negative number rather than a positive one, netperf will check the elapsed time after each unit of work (send and/or recv call) is performed.

"Why not just use the interval timer?" Because netperf must also run under Windows :) Also, the timer's granularity may not be what the user desires, and avoiding it means less worry about handling signals.

happy benchmarking,

rick jones

BTW, it would appear that one can add an HTB qdisc to an interface and then change its speed at will. There may still be some small quantity of disruption (I need to check the packet traces), but it behaves much more nicely than causing an Ethernet interface to re-negotiate its speed, and it allows a much broader range of speeds as well.

For example, a speed change from 500 to 5 Mbit/s via HTB, with the legacy demo mode:

raj@tardy:~/netperf2_trunk$ src/netperf -D 1 -H 192.168.1.3 -l 0 -- -m 1K
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.3 () port 0 AF_INET : demo
Interim result: 510.28 10^6bits/s over 1.533 seconds ending at 1335385826.970
Interim result: 508.96 10^6bits/s over 1.003 seconds ending at 1335385827.972
Interim result: 508.75 10^6bits/s over 1.001 seconds ending at 1335385828.974
Interim result: 509.28 10^6bits/s over 1.001 seconds ending at 1335385829.975
Interim result: 508.92 10^6bits/s over 1.001 seconds ending at 1335385830.975
Interim result: 6.54 10^6bits/s over 77.768 seconds ending at 1335385908.743
Interim result: 4.74 10^6bits/s over 1.382 seconds ending at 1335385910.125
Interim result: 4.97 10^6bits/s over 1.126 seconds ending at 1335385911.251
Interim result: 4.92 10^6bits/s over 1.126 seconds ending at 1335385912.378
Interim result: 5.53 10^6bits/s over 1.024 seconds ending at 1335385913.401
Interim result: 4.50 10^6bits/s over 1.229 seconds ending at 1335385914.630
^C
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384   1024    90.11      37.31

And now with the new mode:

raj@tardy:~/netperf2_trunk$ src/netperf -D -1 -H 192.168.1.3 -l 0 -- -m 1K
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.3 () port 0 AF_INET : demo
Interim result: 503.91 10^6bits/s over 1.000 seconds ending at 1335386002.740
Interim result: 509.18 10^6bits/s over 1.001 seconds ending at 1335386003.741
Interim result: 509.35 10^6bits/s over 1.001 seconds ending at 1335386004.742
Interim result: 507.00 10^6bits/s over 1.001 seconds ending at 1335386005.743
Interim result: 509.35 10^6bits/s over 1.000 seconds ending at 1335386006.743
Interim result: 298.03 10^6bits/s over 1.012 seconds ending at 1335386007.755
Interim result: 4.97 10^6bits/s over 1.126 seconds ending at 1335386008.882
Interim result: 4.98 10^6bits/s over 1.127 seconds ending at 1335386010.008
Interim result: 4.98 10^6bits/s over 1.126 seconds ending at 1335386011.135
Interim result: 4.98 10^6bits/s over 1.177 seconds ending at 1335386012.312
Interim result: 4.98 10^6bits/s over 1.075 seconds ending at 1335386013.387
^C
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384   1024    12.26     234.27

This is what an advertised speed change from 1000 to 10 Mbit/s looks like under the new netperf mechanism:

raj@tardy:~/netperf2_trunk$ src/netperf -D -1 -H 192.168.1.3 -l 0 -- -m 1K
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.3 () port 0 AF_INET : demo
Interim result: 941.69 10^6bits/s over 1.001 seconds ending at 1335386264.055
Interim result: 941.50 10^6bits/s over 1.000 seconds ending at 1335386265.055
Interim result: 941.66 10^6bits/s over 1.000 seconds ending at 1335386266.055
Interim result: 941.47 10^6bits/s over 1.000 seconds ending at 1335386267.055
Interim result: 939.56 10^6bits/s over 1.000 seconds ending at 1335386268.055
Interim result: 84.82 10^6bits/s over 7.253 seconds ending at 1335386275.309
Interim result: 9.41 10^6bits/s over 1.056 seconds ending at 1335386276.364
Interim result: 9.41 10^6bits/s over 1.056 seconds ending at 1335386277.420
Interim result: 9.42 10^6bits/s over 1.056 seconds ending at 1335386278.476
Interim result: 9.41 10^6bits/s over 1.056 seconds ending at 1335386279.531
Interim result: 9.41 10^6bits/s over 1.056 seconds ending at 1335386280.587
Interim result: 9.42 10^6bits/s over 1.056 seconds ending at 1335386281.643
^C
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384   1024    19.38     277.95

Basically, the link was down for a number of seconds, which forced TCP to back off even more.
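
P.S. The difference between the legacy and the new clock-checking strategies described above can be sketched in shell. This is a toy illustration, not netperf source: ten units of work are performed, the legacy mode consults the clock only after the estimated number of units (four here), and the new mode consults it after every unit.

```shell
# Toy sketch (not netperf source) of how often each demo mode consults
# the clock; a "unit of work" stands in for one send()/recv() call.
units=10   # units of work performed during the run
est=4      # legacy mode's running estimate of units per demo interval

legacy_checks=0
new_checks=0
i=0
while [ "$i" -lt "$units" ]; do
    i=$((i + 1))
    # new mode (-D -1): elapsed time is checked after every unit of work
    new_checks=$((new_checks + 1))
    # legacy mode (-D 1): elapsed time is checked only once the
    # estimated number of units has been performed
    if [ $((i % est)) -eq 0 ]; then
        legacy_checks=$((legacy_checks + 1))
    fi
done
echo "legacy clock checks: $legacy_checks"
echo "new-mode clock checks: $new_checks"
```

The cost of the extra clock checks is why the legacy batching existed in the first place; the benefit, as the runs above show, is that a drop in available bandwidth is noticed within one unit of work rather than only after a long, stale estimate has been worked through.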
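
P.P.S. The HTB trick from the BTW could look something like the following with Linux tc. These commands were not in the original mail: the device name, classid, and rates are illustrative, and they need root privileges on a machine with an HTB-capable kernel.

```shell
# One-time setup: an HTB root qdisc with a single default class.
# eth0 and the 1:1 classid are illustrative choices.
tc qdisc add dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1 htb rate 500mbit

# Later, change the available bandwidth in place -- the class parameters
# are replaced without any Ethernet link renegotiation.
tc class change dev eth0 parent 1: classid 1:1 htb rate 5mbit
```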