Message-ID: <5099539D.8060408@hp.com>
Date: Tue, 06 Nov 2012 10:14:53 -0800
From: Rick Jones
To: Dave Taht
Cc: codel@lists.bufferbloat.net, cerowrt-devel@lists.bufferbloat.net, bloat
Subject: Re: [Bloat] [Codel] RFC: Realtime Response Under Load (rrul) test specification
List-Id: General list for discussing Bufferbloat

On 11/06/2012 04:42 AM, Dave Taht wrote:
> I have been working on developing a specification for testing networks
> more effectively for various side effects of bufferbloat, notably
> gaming and voip performance, and especially web performance.... as
> well as a few other things that concerned me, such as IPv6 behavior,
> and the effects of packet classification.
>
> A key goal is to be able to measure the quality of the user experience
> while a network is otherwise busy, with complex stuff going on in the
> background, but with a simple presentation of the results in the end,
> in under 60 seconds.

Would you like fries with that?

Snark aside, I think that being able to capture the state of the user
experience in only 60 seconds is daunting at best, especially if this
testing is going to run over the Big Bad Internet (tm) rather than in a
controlled test lab.

> While it's not done yet, it escaped into the wild today, and I might
> as well solicit wider opinions on it, sooo... get the spec at:
>
> https://github.com/dtaht/deBloat/blob/master/spec/rrule.doc?raw=true

Github is serving that up as a plain-text file, which has Firefox
looking to open it with gedit, and gedit does not seem at all happy with
it. It was necessary to download the file and open it "manually" in
LibreOffice.

> MUST run long enough to defeat bursty bandwidth optimizations such as
> PowerBoost and discard data from that interval.

I'll willingly display my ignorance, but for how long do PowerBoost and
its cousins boost bandwidth? I wasn't looking for PowerBoost, and given
the thing being examined I wasn't seeing it, but recently, when I was
evaluating the network performance of something "out there" in the
cloud (not my home cloud, as it were), I noticed performance spikes
repeating at intervals which would require more than 60 seconds to
"defeat".

> MUST track and sum bi-directional throughput, using estimates for ACK
> sizes of ipv4, ipv6, and encapsulated ipv6 packets, udp and tcp_rr
> packets, etc.

Estimating the bandwidth consumed by ACKs and/or protocol headers with
code operating in user space is going to be guessing - particularly
portable user-space code. While those things may indeed affect the
user's experience, the user doesn't particularly care about ACKs or
header sizes. She cares how well the page loads or the call sounds.
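For what it's worth, the flavor of guessing involved can be sketched in
a few lines of Python. The numbers below are my assumptions, not
anything from the spec: one delayed ACK per two full-size segments,
option-free pure ACKs, and 6in4 for the encapsulated case.

```python
# Back-of-envelope estimate of reverse-path bandwidth consumed by pure
# ACKs during a bulk TCP transfer. This is exactly the sort of guess a
# portable user-space tool would have to make, not a measurement: real
# stacks vary their delayed-ACK behavior, ACKs may carry options (e.g.
# timestamps), and link-layer framing overhead is ignored here.

PURE_ACK_BYTES = {
    "ipv4": 20 + 20,             # IPv4 header + TCP header, no options
    "ipv6": 40 + 20,             # IPv6 header + TCP header
    "ipv6_in_ipv4": 20 + 40 + 20,  # 6in4 adds an outer IPv4 header
}

def ack_bandwidth_bps(goodput_bps, mss=1448, family="ipv4", segs_per_ack=2):
    """Estimate ACK-stream bandwidth for a given forward goodput.

    Assumes one ACK per `segs_per_ack` full-size segments (delayed
    ACKs) and option-free ACKs -- both simplifications.
    """
    acks_per_sec = goodput_bps / 8 / mss / segs_per_ack
    return acks_per_sec * PURE_ACK_BYTES[family] * 8

# e.g. a 100 Mbit/s IPv4 bulk download generates roughly:
print(round(ack_bandwidth_bps(100e6) / 1e6, 2), "Mbit/s of ACKs")
# prints: 1.38 Mbit/s of ACKs
```

And that's the easy case: with bidirectional transfers, ACKs piggyback
on data segments, so even this arithmetic stops applying.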
> MUST have the test server(s) within 80ms of the testing client

Why? Perhaps there is something showing that some number of nines'
worth of the things being accessed are within 80ms of the user. If
there is, it should be given in support of the requirement.

> This portion of the test will take your favorite website as a target
> and show you how much it will slow down, under load.

Under load on the website itself, or under load on one's link? I
ass-u-me the latter, but that should be made clear. And while the
chance of this testing adding meaningful load to a website is likely
epsilon, there is still the matter of its "optics", if you will - how
it looks - particularly if something is going to be distributed with a
default website coded into it.

Further, websites are not going to remain static, so there will be the
matter of being able to compare results over time. Perhaps that can be
finessed with the "unloaded" measurement (again, I assume "unloaded"
relative to the link of interest/under test).

rick jones