Subject: Re: [RFC v2] mac80211: implement eBDP algorithm to fight bufferbloat
From: Nathaniel Smith <njs@vorpus.org>
To: "John W. Linville"
Cc: bloat-devel@lists.bufferbloat.net, johannes@sipsolutions.net,
 linux-wireless@vger.kernel.org
Date: Sat, 19 Feb 2011 16:37:00 -0800
In-Reply-To: <1298064074-8108-1-git-send-email-linville@tuxdriver.com>
References: <1297907356-3214-1-git-send-email-linville@tuxdriver.com>
 <1298064074-8108-1-git-send-email-linville@tuxdriver.com>

Actually, a few more comments just occurred to me...

On Fri, Feb 18, 2011 at 1:21 PM, John W. Linville wrote:
> Johannes' comment about tx status reporting being unreliable (and what
> he was really saying) finally sunk in.  So, this version uses
> skb->destructor to track in-flight fragments.  That should handle
> fragments that get silently dropped in the driver for whatever reason
> without leaking queue capacity.  Correct me if I'm wrong!

Should we be somehow filtering out and ignoring the packets that get
dropped when we're calculating the average packet transmission rate?
Presumably they're getting dropped almost instantly, so they don't
really take up queue space, and their abnormally fast transmission
times will tend to make us overestimate max_enqueued. They should be
rare, though, at least. (And presumably we *should* include packets
that get dropped because their retry timer ran out, since they really
were sitting in the queue for that long.) Possibly we should just
ignore any packet that was handled in less than, say, a few
microseconds? (Sketch below.)

Alternatively, if we aren't worried about those packets, then is there
any reason this patch should be mac80211-specific?
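Going back to the microsecond-threshold idea: here's a rough, untested
sketch of the kind of filter I have in mind. The EBDP_MIN_SERVICE_NS
constant, the ebdp_queue_data struct, and the avg_service_ns field are
names I made up for illustration; they aren't from your patch:

struct ebdp_queue_data {
	atomic_t enqueued;	/* packets currently in flight */
	s64 avg_service_ns;	/* EWMA of per-packet service time */
};

/* Anything that "completes" this fast was almost certainly dropped,
 * not transmitted. */
#define EBDP_MIN_SERVICE_NS	(10 * NSEC_PER_USEC)

static void ebdp_update_service_time(struct ebdp_queue_data *qdata,
				     s64 service_ns)
{
	/* Skip implausibly fast completions so they don't drag the
	 * average down (and max_enqueued up). Retry-timeout drops
	 * pass the threshold naturally, since they really did occupy
	 * the queue that long. */
	if (service_ns < EBDP_MIN_SERVICE_NS)
		return;

	/* EWMA with weight 1/8, standing in for whatever estimator
	 * the patch actually uses. */
	qdata->avg_service_ns += (service_ns - qdata->avg_service_ns) >> 3;
}

(The 10us cutoff is a guess; anything well below the time it takes to
actually put a frame on the air should do.)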
> +static void ieee80211_tx_skb_free(struct sk_buff *skb)
> +{
> +	struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(skb->dev);
> +	struct ieee80211_local *local = sdata->local;
> +	int q = skb_get_queue_mapping(skb);
> +
> +	/* decrement enqueued count */
> +	atomic_dec(&sdata->qdata[q].enqueued);
> +
> +	/* if queue stopped, wake it */
> +	if (ieee80211_queue_stopped(&local->hw, q))
> +		ieee80211_wake_queue(&local->hw, q);
> +}

I think you need to check that .enqueued is < max_enqueued here,
instead of waking the queue unconditionally. Suppose the data rate
drops while there's a steady flow: our max_enqueued value will drop
below the current queue size, but because we wake the queue
unconditionally after each packet goes out, and then only stop it
again after we've queued at least one new packet, we might get
'stuck' with an over-large queue.
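Something like this untested sketch is what I have in mind
('max_enqueued' stands in for whatever name the patch gives its
computed per-queue limit):

	/* decrement enqueued count */
	atomic_dec(&sdata->qdata[q].enqueued);

	/* wake the queue only if we're actually below the current
	 * limit; otherwise a shrinking max_enqueued can leave us
	 * stuck with an over-large queue */
	if (ieee80211_queue_stopped(&local->hw, q) &&
	    atomic_read(&sdata->qdata[q].enqueued) <
			sdata->qdata[q].max_enqueued)
		ieee80211_wake_queue(&local->hw, q);

--
Nathaniel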