CoDel AQM discussions
From: Dave Taht <dave.taht@gmail.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: codel@lists.bufferbloat.net, cerowrt-devel@lists.bufferbloat.net,
	Felix Fietkau <nbd@nbd.name>
Subject: Re: [Codel] coping with memory limitations and packet flooding in codel and fq_codel
Date: Mon, 27 Aug 2012 08:12:47 -0700
Message-ID: <CAA93jw7Vw9pgSf7VBT3WMip0AYnzf7Kek-6qdrP=Pkvgm0KOHw@mail.gmail.com>
In-Reply-To: <1346067425.2420.167.camel@edumazet-glaptop>

On Mon, Aug 27, 2012 at 4:37 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Sun, 2012-08-26 at 14:36 -0700, Dave Taht wrote:
>
>> From looking over the history of this idea, it does seem to be a good
>> idea for small devices with potentially big queues.
>>
>> http://www.spinics.net/lists/netdev/msg176967.html
>>
>> That said, I do tend to agree with davem's summary: fix the wifi
>> drivers in the first place. The current allocation in the ath9k driver
>> doesn't make much sense, which (to me) implies magic lies underneath
>> that I'm reluctant to fiddle with without deep knowledge of how the
>> ath9k driver behaves with WEP/WPA/AMPDUs, etc.
>>
>
> Problem is some hardware cannot do this in a smart way without paying
> the price of a sometimes-expensive copy.
>
> That's why I refined this idea to trigger only if current memory
> needs are above a threshold.

Concur. However, rather than implement truesize +/- accounting in the
qdisc, which seemed potentially hairy, I just did this in the enqueue path:

static int codel_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch)
{
        struct codel_sched_data *q = qdisc_priv(sch);
        int qlen;
        if (likely((qlen = qdisc_qlen(sch)) < sch->limit)) {
                /* only reallocate once the queue starts to fill */
                if (qlen > sch->limit / 8)
                        skb = skb_reduce_truesize(skb);
                codel_set_enqueue_time(skb);
                return qdisc_enqueue_tail(skb, sch);
        }
        q->drop_overlimit++;
        return qdisc_drop(skb, sch);
}
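
For the benefit of the cc'd lists: skb_reduce_truesize() is the helper from
Eric's patch in the netdev thread linked above - if an skb's truesize is
wildly out of proportion to its actual length, reallocate a right-sized copy
and free the bloated original. Sketching from memory here, so the
factor-of-two threshold and the exact calls below are illustrative rather
than the real patch:

static struct sk_buff *skb_reduce_truesize(struct sk_buff *skb)
{
        /* only copy when the backing buffer is much bigger than the data */
        if (skb->truesize > 2 * SKB_TRUESIZE(skb->len)) {
                struct sk_buff *nskb;

                nskb = skb_copy_expand(skb, skb_headroom(skb), 0,
                                       GFP_ATOMIC | __GFP_NOWARN);
                if (nskb) {
                        consume_skb(skb);
                        skb = nskb;
                }
        }
        return skb;
}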

As I also typically lower the fq_codel limit to 1024 or 1200 packets from
the default of 10240, this will kick in at a pretty appropriate point there
too, once I add the code...
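
For anyone playing along at home, lowering the limit is just a tc one-liner;
"eth0" below is only a placeholder for whatever interface fq_codel is
actually attached to:

        tc qdisc replace dev eth0 root fq_codel limit 1200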

And, that said, this stuff is totally unneeded on bigger iron. Last year
we were safely stuffing 10s and 100s of MB into the tx queue rings
without issues... :)

> If packets are received and immediately consumed, without potentially
> staying a long time in a queue, it doesn't really matter if they use
> 200% or 400% more RAM than rightly sized packets.

+1

-- 
Dave Täht
http://www.bufferbloat.net/projects/cerowrt/wiki - "3.3.8-17 is out
with fq_codel!"
