From: Dave Taht
To: bloat
Date: Sun, 8 Jan 2012 21:27:35 +0100
Subject: [Bloat] calculating baseline latency properly?
List-Id: General list for discussing Bufferbloat

I have been fiddling with the latest SFQ patches from Eric at 100Mbit.

I'm at the point where what are effectively "quantum effects" are
bothering me, statistically.

At 100Mbit, I get a baseline ping RTT on my hardware in the range of
.22 to .48 ms; call it .32 ms as an average. Eric gets about 1/3 of
that, and after testing it looks like the majority of the ping latency
comes from the cerowrt box, for reasons unknown.
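For what it's worth, here is a tiny sketch of how I'd summarize a batch of baseline samples before picking a single number (the sample values below are invented to span the .22-.48 ms range; not real measurement data):

```python
import statistics

# Invented baseline ping RTT samples (ms) spanning the .22-.48 range
# discussed above; purely illustrative.
samples = [0.22, 0.25, 0.29, 0.31, 0.33, 0.36, 0.41, 0.48]

print(f"min    {min(samples):.2f} ms")
print(f"median {statistics.median(samples):.2f} ms")
print(f"mean   {statistics.mean(samples):.2f} ms")
print(f"max    {max(samples):.2f} ms")
```

The median is less sensitive to the occasional outlier than the mean, which matters when the baseline itself wobbles over a ~2x range.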
There are various tunables for ethtool that have a small effect on the
e1000e side, but not on the cerowrt side.

Under a load of 50 iperfs + SFQ, on 100Mbit ethernet, the simultaneous
10ms ping RTTs vary thusly:

BQL = auto       -> RTT = ~2.16 ms
BQL = 4500 bytes -> RTT = ~1.2 ms
BQL = 3000 bytes -> RTT = ~.67 ms
BQL = 1500 bytes -> RTT = ~.76 ms

I note that these are pretty variable, and over a large sample size
looking at CDF graphs makes more sense than looking at the average. It
also helps when trying to compare SFQ vs QFQ vs SFQRED.

For comparison, with PFIFO_FAST (and a txqueuelen of 1000 on both
sides), I get a latency under the same workload of 121 ms at all BQL
settings. (It is probably still seeing the fractional stuff, but it
just doesn't matter.)

In measuring ping RTT, rather than arrival time elsewhere (which I
plan to do with RTP at some point), there are actually two variables
in play: send time and return time...

So should an RTT latency-under-load calculation remove the baseline
latency, thusly:

latency_improvement = (ping_RTT - baseline_ping_rtt) /
                      (new_ping_RTT - baseline_ping_rtt)

(a factor 344 improvement)

OR keep it:

latency_improvement = ping_RTT / new_ping_RTT

(a factor 180 improvement)...

or would there be another way to compensate for it that made sense?

Either way the numbers look grand, and I plan to play with both a real
10Mbit connection and a simulated 4Mbit one next. Should be
interesting...

-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net
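P.S. A quick Python sketch of the two normalization options, with the numbers from this mail plugged in (121 ms pfifo_fast under load, ~.67 ms SFQ under load, ~.32 ms baseline; illustrative only):

```python
# The two candidate "latency improvement" normalizations from above.
baseline_rtt = 0.32   # ms, unloaded ping RTT
old_rtt = 121.0       # ms, pfifo_fast under the 50-iperf load
new_rtt = 0.67        # ms, SFQ + BQL=3000 under the same load

# Option 1: subtract the baseline from both measurements first
minus_baseline = (old_rtt - baseline_rtt) / (new_rtt - baseline_rtt)

# Option 2: raw ratio of the loaded RTTs
raw_ratio = old_rtt / new_rtt

print(f"minus baseline: factor {minus_baseline:.0f}")  # -> factor 345
print(f"raw ratio:      factor {raw_ratio:.0f}")       # -> factor 181
```

Subtracting the baseline isolates the queueing delay itself, at the cost of blowing up when the loaded RTT approaches the baseline.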