Date: Wed, 23 Feb 2011 17:28:43 -0500
From: "John W. Linville" <linville@tuxdriver.com>
To: Nathaniel Smith
Cc: bloat-devel@lists.bufferbloat.net, johannes@sipsolutions.net, linux-wireless@vger.kernel.org
Subject: Re: [RFC v2] mac80211: implement eBDP algorithm to fight bufferbloat
Message-ID: <20110223222842.GA20039@tuxdriver.com>
References: <1297907356-3214-1-git-send-email-linville@tuxdriver.com> <1298064074-8108-1-git-send-email-linville@tuxdriver.com> <20110221184716.GD9650@tuxdriver.com>

On Mon, Feb 21, 2011 at 03:26:32PM -0800, Nathaniel Smith wrote:
> On Mon, Feb 21, 2011 at 10:47 AM, John W. Linville
> wrote:
> > I tried to see how your measurement would be useful, but I just don't
> > see how the number of frames ahead of me in the queue is relevant to
> > the measured link latency?  I mean, I realize that having more packets
> > ahead of me in the queue is likely to increase the latency for this
> > frame, but I don't understand why I should use that information to
> > discount the measured latency...?
>
> It depends on which latency you want to measure. The way that I
> reasoned was, suppose that at some given time, the card is able to
> transmit 1 fragment every T nanoseconds. Then it can transmit n
> fragments in n*T nanoseconds, so if we want the queue depth to be 2
> ms, then we have
>
>     n * T = 2 * NSEC_PER_MSEC
>     n = 2 * NSEC_PER_MSEC / T
>
> Which is the calculation that you're doing:
>
> +		sta->sdata->qdata[q].max_enqueued =
> +			max_t(int, 2, 2 * NSEC_PER_MSEC / tserv_ns_avg);
>
> But for this calculation to make sense, we need T to be the time it
> takes the card to transmit 1 fragment. In your patch, you're not
> measuring that. You're measuring the total time between when a packet
> is enqueued and when it is transmitted; if there were K packets in the
> queue ahead of it, then this is the time to send *all* of them --
> you're measuring (K+1)*T.
> That's why in my patch, I recorded the
> current size of the queue when each packet is enqueued, so I could
> compute T = total_time / (K+1).

Thanks for the math!  I think I see what you are saying now.  Since
the measured time is being used to determine the queue size, we need
to factor in the length of the queue that resulted in that measurement.

Unfortunately, I'm not sure how to apply this with the technique I am
using for the timing measurements. :-(  I'll have to think about this
some more...

John
-- 
John W. Linville		Someday the world will need a hero, and you
linville@tuxdriver.com			might be all we have.  Be ready.
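
For illustration only, here is a minimal, self-contained userspace sketch
of the calculation Nathaniel describes above: record the queue depth K at
enqueue, treat the measured enqueue-to-completion latency as (K+1)*T, smooth
T, and then size the queue so it drains in about 2 ms.  The names
update_queue_limit, EWMA_WEIGHT, and depth_at_enqueue are assumptions made
for the sketch; only tserv_ns_avg, NSEC_PER_MSEC, and the 2 ms / floor-of-2
sizing mirror the snippet quoted from the patch.  This is not the actual
mac80211/eBDP code.

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_MSEC 1000000ULL
#define EWMA_WEIGHT 8	/* new sample gets 1/8 weight (assumed value) */

static uint64_t tserv_ns_avg;	/* running per-fragment service time estimate */

static unsigned int update_queue_limit(uint64_t latency_ns,
				       unsigned int depth_at_enqueue)
{
	/*
	 * The frame waited behind depth_at_enqueue others, so the measured
	 * latency covers (K + 1) transmissions: T = total_time / (K + 1).
	 */
	uint64_t tserv_ns = latency_ns / (depth_at_enqueue + 1);
	unsigned int max_enqueued;

	if (tserv_ns == 0)
		tserv_ns = 1;	/* guard against absurdly small samples */

	/* Smooth per-frame noise with a simple weighted running average. */
	if (tserv_ns_avg == 0)
		tserv_ns_avg = tserv_ns;
	else
		tserv_ns_avg = (tserv_ns_avg * (EWMA_WEIGHT - 1) + tserv_ns)
				/ EWMA_WEIGHT;

	/* n * T = 2 ms  =>  n = 2 * NSEC_PER_MSEC / T, with a floor of 2. */
	max_enqueued = 2 * NSEC_PER_MSEC / tserv_ns_avg;
	if (max_enqueued < 2)
		max_enqueued = 2;
	return max_enqueued;
}

int main(void)
{
	/* Example: a frame enqueued behind 9 others completes 3 ms later. */
	unsigned int limit = update_queue_limit(3 * NSEC_PER_MSEC, 9);

	printf("per-fragment T ~= %llu ns, queue limit = %u frames\n",
	       (unsigned long long)tserv_ns_avg, limit);
	return 0;
}

With the numbers in the example, T comes out to 300000 ns per fragment and
the 2 ms target gives a limit of 6 frames; measuring the raw latency without
dividing by (K+1) would instead suggest T = 3 ms and clamp the queue to the
floor of 2 frames, which is the overcorrection being discussed.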