From: Nathaniel Smith <njs@pobox.com>
To: "John W. Linville" <linville@tuxdriver.com>
Cc: bloat-devel@lists.bufferbloat.net, johannes@sipsolutions.net,
	linux-wireless@vger.kernel.org
Subject: Re: [RFC v2] mac80211: implement eBDP algorithm to fight bufferbloat
Date: Mon, 21 Feb 2011 15:26:32 -0800	[thread overview]
Message-ID: <AANLkTi=LnpFLzKbtaPxSjg9TRU_pY2oRi6tsFg5GkaGK@mail.gmail.com> (raw)
In-Reply-To: <20110221184716.GD9650@tuxdriver.com>

On Mon, Feb 21, 2011 at 10:47 AM, John W. Linville
<linville@tuxdriver.com> wrote:
> On Fri, Feb 18, 2011 at 07:44:30PM -0800, Nathaniel Smith wrote:
>> On Fri, Feb 18, 2011 at 1:21 PM, John W. Linville
>> <linville@tuxdriver.com> wrote:
>> > +       /* grab timestamp info for buffer control estimates */
>> > +       tserv = ktime_sub(ktime_get(), skb->tstamp);
>> [...]
>> > +               ewma_add(&sta->sdata->qdata[q].tserv_ns_avg,
>> > +                        ktime_to_ns(tserv));
>>
>> I think you're still measuring how long it takes one packet to get
>> from the end of the queue to the beginning, rather than measuring how
>> long it takes each packet to go out?
>
> Yes, I am measuring how long the driver+device takes to release each
> skb back to me (using that as a proxy for how long it takes to get
> the fragment to the next hop).  Actually, FWIW I'm only measuring
> that time for those skb's that result in a tx status report.
>
> I tried to see how your measurement would be useful, but I just don't
> see how the number of frames ahead of me in the queue is relevant to
> the measured link latency?  I mean, I realize that having more packets
> ahead of me in the queue is likely to increase the latency for this
> frame, but I don't understand why I should use that information to
> discount the measured latency...?

It depends on which latency you want to measure. The way I reasoned
about it was: suppose that at some given time, the card is able to
transmit 1 fragment every T nanoseconds. Then it can transmit n
fragments in n*T nanoseconds, so if we want the queue depth to be 2
ms, we need
  n * T = 2 * NSEC_PER_MSEC
  n = 2 * NSEC_PER_MSEC / T

Which is the calculation that you're doing:

+                       sta->sdata->qdata[q].max_enqueued =
+                               max_t(int, 2, 2 * NSEC_PER_MSEC / tserv_ns_avg);

But for this calculation to make sense, we need T to be the time it
takes the card to transmit 1 fragment. In your patch, you're not
measuring that. You're measuring the total time between when a packet
is enqueued and when it is transmitted; if there were K packets in the
queue ahead of it, then this is the time to send *all* of them --
you're measuring (K+1)*T. That's why in my patch, I recorded the
current size of the queue when each packet is enqueued, so I could
compute T = total_time / (K+1).
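
For what it's worth, here's a rough userspace sketch of the estimator
I have in mind (the struct, field names, and EWMA weight are just
illustrative, not lifted from either patch): divide the
enqueue-to-completion latency by the number of frames that were ahead
of the packet at enqueue time, feed that per-fragment T into the
EWMA, and then size the queue as 2 ms / T:

/* Illustrative sketch, not the actual mac80211 code. */
#include <stdint.h>

#define NSEC_PER_MSEC 1000000ULL
#define EWMA_WEIGHT   8                 /* new sample contributes 1/8 */

struct queue_state {
        uint64_t tserv_ns_avg;          /* EWMA of per-fragment service time */
        unsigned int max_enqueued;      /* current queue limit, in frames */
};

/* Called from the tx-status path.  'latency_ns' is completion time minus
 * the timestamp taken at enqueue; 'queued_ahead' is how many frames were
 * already queued when this one was enqueued (the K above). */
static void update_queue_limit(struct queue_state *q,
                               uint64_t latency_ns,
                               unsigned int queued_ahead)
{
        /* T = total latency / (K + 1): the time for *one* fragment */
        uint64_t tserv = latency_ns / (queued_ahead + 1);

        if (q->tserv_ns_avg == 0)
                q->tserv_ns_avg = tserv;
        else if (tserv > q->tserv_ns_avg)
                q->tserv_ns_avg += (tserv - q->tserv_ns_avg) / EWMA_WEIGHT;
        else
                q->tserv_ns_avg -= (q->tserv_ns_avg - tserv) / EWMA_WEIGHT;

        /* n = 2 ms / T, but never let the queue collapse below 2 frames */
        if (q->tserv_ns_avg > 0) {
                uint64_t n = 2 * NSEC_PER_MSEC / q->tserv_ns_avg;
                q->max_enqueued = n > 2 ? (unsigned int)n : 2;
        }
}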

Under saturation conditions, K+1 will always equal max_enqueued, so I
guess in your algorithm, at the steady state we have

  max_enqueued = K+1 = 2 * NSEC_PER_MSEC / ((K+1) * T)
  (K+1)^2 = 2 * NSEC_PER_MSEC / T
  K+1 = sqrt(2 * NSEC_PER_MSEC / T)

So I think under saturation, you converge to setting the queue to the
square root of the appropriate size?
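
Here's a toy simulation of that fixed point (nothing from either
patch; T = 100 us is just an arbitrary example). Feeding the
saturated latency max_enqueued * T into the EWMA and recomputing the
limit as 2 ms / tserv_avg settles near sqrt(2 ms / T), not the
intended 2 ms / T:

#include <stdio.h>
#include <math.h>

int main(void)
{
        const double NSEC_PER_MSEC = 1e6;
        const double T = 100000.0;                 /* 100 us per fragment */
        double tserv_avg = T;                      /* EWMA of measured latency */
        double max_enqueued = 2 * NSEC_PER_MSEC / tserv_avg;

        for (int i = 0; i < 200; i++) {
                /* saturated queue: each frame waits behind max_enqueued-1 others */
                double measured = max_enqueued * T;          /* (K+1)*T, not T */
                tserv_avg += (measured - tserv_avg) / 8;     /* EWMA, weight 1/8 */
                max_enqueued = 2 * NSEC_PER_MSEC / tserv_avg;
                if (max_enqueued < 2)
                        max_enqueued = 2;
        }

        printf("converged limit  : %.2f frames\n", max_enqueued);
        printf("sqrt(2ms / T)    : %.2f frames\n", sqrt(2 * NSEC_PER_MSEC / T));
        printf("intended 2ms / T : %.2f frames\n", 2 * NSEC_PER_MSEC / T);
        return 0;
}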

-- Nathaniel
