From: Jonathan Morton
Date: Sat, 12 Mar 2011 05:52:00 +0200
To: richard
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Measuring latency-under-load consistently
Message-Id: <10491D5A-AA1B-4F41-99A9-15A0C06ADF25@gmail.com>
In-Reply-To: <1299899959.1835.10.camel@amd.pacdat.net>

On 12 Mar, 2011, at 5:19 am, richard wrote:

>> 3) Flow smoothness, measured as the maximum time between sequentially received data for any continuous flow, also expressed in Hz. This is an important metric for video and radio streaming, and one which CUBIC will probably do extremely badly at if there are large buffers in the path (without AQM or Blackpool).
>
> Am I correct that your "flow smoothness" is the inverse of jitter? We should probably keep to a standard nomenclature. What should we call this, and/or should we call it something else, or invert the concept and call it what we already do - jitter?

I'm not certain that it's the same as what you call jitter, but it could be. Because I'm going to be measuring at the application level, I don't necessarily get to see when every single packet arrives, particularly if they arrive out of order. So what I'm measuring is the "lumpiness" of the application data-flow progress, but inverted to "smoothness" (i.e. measured in Hz rather than ms) so that bigger numbers are better.

Using my big-easy-numbers example, suppose you have a 30-second unmanaged drop-tail queue, and nothing to stop it filling up.
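[As a rough sketch of the measurement just described: collect a timestamp at each application-level read, take the worst inter-arrival gap, and invert it. The function name `smoothness_hz` is made up for illustration; it is not from any real tool.]

```python
def smoothness_hz(arrival_times):
    """Flow smoothness as described above: the inverse of the worst
    inter-arrival gap seen by the application, in Hz.
    Bigger numbers are better; a long stall anywhere in the flow
    drags the whole score down."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return 1.0 / max(gaps)

# A flow that ticks along every 100 ms but once stalls for 2 seconds:
times = [0.0, 0.1, 0.2, 2.2, 2.3]
print(round(smoothness_hz(times), 2))  # 0.5 -> limited by the 2 s stall
```

[Note that this deliberately ignores how many bytes arrived in each read; it only scores the worst pause in progress, which is what a streaming application actually feels.]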
For a while, packets will arrive in order, so the inter-arrival delay seen by the application is at most the RTT (as during the very beginning of slow-start, which I think I will exclude from the measurement) and usually less as a continuous stream builds up.

But then the queue fills up and a packet is dropped. At this point, progress as seen by the application will stop *dead* as soon as that missing packet's position reaches the head of the queue.

The sending TCP will now retransmit that packet. But the queue is still approximately full, because the congestion feedback didn't happen until now, so it will take another 30 seconds for the data to reach the application. At this point the progress is instantaneously very large, and hopefully will continue more smoothly.

But the maximum inter-arrival delay after that episode is now 30 seconds (or 0.033 Hz), even though packets were arriving correctly throughout that time. That's what I'm measuring here.

Most links are much less severe than that, of course, but it's this kind of thing that stops radio and video streaming from working properly.

On the much less severe end of the scale, this will also measure the burstiness of flows in the case when there's more than one at once - usually you will get a bunch of packets from one flow, then a bunch from another, and so on, but SFQ tends to fix that for you if you have it. It will probably also pick up some similar effects from 802.11n aggregation and other link-level congestion-avoidance techniques.

 - Jonathan
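[Editor's illustration of the stall episode described above, with made-up timings: steady 100 ms arrivals, one lost packet whose retransmission sits behind a roughly 30-second drop-tail queue, then a burst once it finally arrives. The worst gap, and hence the smoothness score, is dominated by the queue delay.]

```python
# Hypothetical application-level read times (seconds) for the episode:
reads_before_loss = [0.1 * i for i in range(1, 11)]      # smooth progress
stall_ends = reads_before_loss[-1] + 30.0                # retransmit drains the full queue
reads_after = [stall_ends + 0.1 * i for i in range(5)]   # burst, then smooth again

times = reads_before_loss + reads_after
gaps = [b - a for a, b in zip(times, times[1:])]
worst = max(gaps)
print(round(worst, 1))        # 30.0 -> the head-of-line stall dominates
print(round(1.0 / worst, 4))  # 0.0333 -> the flow-smoothness figure in Hz
```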