From: Nathaniel Smith
To: "John W. Linville"
Cc: bloat-devel@lists.bufferbloat.net, johannes@sipsolutions.net, linux-wireless@vger.kernel.org
Date: Mon, 21 Feb 2011 15:26:32 -0800
Subject: Re: [RFC v2] mac80211: implement eBDP algorithm to fight bufferbloat

On Mon, Feb 21, 2011 at 10:47 AM, John W. Linville wrote:
> On Fri, Feb 18, 2011 at 07:44:30PM -0800, Nathaniel Smith wrote:
>> On Fri, Feb 18, 2011 at 1:21 PM, John W. Linville wrote:
>> > +       /* grab timestamp info for buffer control estimates */
>> > +       tserv = ktime_sub(ktime_get(), skb->tstamp);
>> [...]
>> > +               ewma_add(&sta->sdata->qdata[q].tserv_ns_avg,
>> > +                        ktime_to_ns(tserv));
>>
>> I think you're still measuring how long it takes one packet to get
>> from the end of the queue to the beginning, rather than measuring how
>> long it takes each packet to go out?
>
> Yes, I am measuring how long the driver+device takes to release each
> skb back to me (using that as a proxy for how long it takes to get
> the fragment to the next hop).  Actually, FWIW I'm only measuring
> that time for those skb's that result in a tx status report.
>
> I tried to see how your measurement would be useful, but I just don't
> see how the number of frames ahead of me in the queue is relevant to
> the measured link latency?  I mean, I realize that having more packets
> ahead of me in the queue is likely to increase the latency for this
> frame, but I don't understand why I should use that information to
> discount the measured latency...?

It depends on which latency you want to measure. Here's how I reasoned
about it: suppose that at some given moment the card can transmit one
fragment every T nanoseconds. Then it can transmit n fragments in n*T
nanoseconds, so if we want the queue to hold 2 ms worth of fragments,
we need

    n * T = 2 * NSEC_PER_MSEC
    n = 2 * NSEC_PER_MSEC / T

which is the calculation your patch is doing:

+               sta->sdata->qdata[q].max_enqueued =
+                       max_t(int, 2, 2 * NSEC_PER_MSEC / tserv_ns_avg);

But for this calculation to make sense, T has to be the time the card
takes to transmit *one* fragment, and that isn't what your patch
measures. It measures the total time between when a packet is enqueued
and when it is transmitted; if there were K packets in the queue ahead
of it, that is the time to send *all* of them -- you're measuring
(K+1)*T. That's why my patch recorded the current queue depth when each
packet was enqueued, so that it could compute T = total_time / (K+1).
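Concretely, something along these lines is what I had in mind -- totally
untested, and the n_ahead plumbing and the qdata[q].queue field are
invented here just for illustration, they aren't from either patch:

    ktime_t tserv;
    unsigned long n_ahead;

    /* at enqueue time: stamp the skb and note how many fragments are
     * already waiting in front of it (this count would have to travel
     * with the skb to the status path somehow) */
    skb->tstamp = ktime_get();
    n_ahead = skb_queue_len(&sta->sdata->qdata[q].queue);

    /* in the tx status path: divide the measured enqueue-to-completion
     * latency by the number of fragments the card had to send for this
     * one to complete (the K ahead of it, plus itself), so the EWMA
     * tracks the per-fragment service time T rather than (K+1)*T */
    tserv = ktime_sub(ktime_get(), skb->tstamp);
    ewma_add(&sta->sdata->qdata[q].tserv_ns_avg,
             ktime_to_ns(tserv) / (n_ahead + 1));

With T fed into the average like that, 2 * NSEC_PER_MSEC / tserv_ns_avg
really does come out as roughly 2 ms worth of fragments.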
Under saturation conditions, K+1 will always equal max_enqueued, so I
guess that in your algorithm the steady state is

    max_enqueued = K+1 = 2 * NSEC_PER_MSEC / ((K+1) * T)
    (K+1)^2 = 2 * NSEC_PER_MSEC / T
    K+1 = sqrt(2 * NSEC_PER_MSEC / T)

So I think under saturation you converge to setting the queue to the
square root of the appropriate size? (For example, if T were 200 us,
the right depth would be 10 fragments, but this would settle at around
sqrt(10), i.e. about 3.)

--
Nathaniel