CoDel AQM discussions
From: Kathleen Nichols <nichols@pollere.com>
To: Dave Taht <dave.taht@gmail.com>
Cc: Keith Winstein <keithw@mit.edu>,
	"codel@lists.bufferbloat.net" <codel@lists.bufferbloat.net>
Subject: Re: [Codel] sprout
Date: Thu, 11 Jul 2013 08:45:03 -0700	[thread overview]
Message-ID: <51DED2FF.604@pollere.com> (raw)
In-Reply-To: <CAA93jw4JPqoTfNOzTRCKu_DJUMJsHzGZVe_GAaCq3hNYyTyOZw@mail.gmail.com>


Yes. To be clear, that is not what I think; I've pointed out the
differences in a few places. I did do a simulator version of the sfqcodel
code that could be configured closer to the fq_codel code at the request
(and expense) of CableLabs.

Dave, I'm not completely sure which reservation about maxpacket you're
referring to.

	Kathie

On 7/10/13 3:44 PM, Dave Taht wrote:
> On Wed, Jul 10, 2013 at 3:10 PM, Kathleen Nichols <nichols@pollere.com> wrote:
>> Is that indeed what I think?
> 
> Heh. On another topic, at my Stanford talk, you pointed at maxpacket
> as something you were a bit dubious about. After fiddling with the
> concept in the presence of offloads (which bloat maxpacket up to the
> size of a TSO packet, 20k or more), I'm more than a bit dubious about
> it myself, and in my next build of ns2_codel and nfq_codel in Linux I
> just capped it at an MTU in the codel_should_drop function:
> 
>         if (unlikely(qdisc_pkt_len(skb) > stats->maxpacket &&
>                      qdisc_pkt_len(skb) < 1514))
>                 stats->maxpacket = qdisc_pkt_len(skb);
> 
> Perhaps in fq_codel the entire maxpacket idea can be junked?
> 
> The problem that I see is that codel switches out of a potential drop
> state here, and at almost any workload maxpacket hits a TSO-like size;
> at higher workloads it's too high. I think Eric is working on something
> that will let overlarge packets just work and begin breaking them down
> into smaller packets at higher workloads?
> 
> Also
> 
> I'd made a suggestion elsewhere that TSQ migrate down in size from 128k
> as the number of active flows increased. Something like:
>
>         tcp_limit_output_size = max((2*BQL's limit)/(number of flows), mtu)
>
> But I realize now that TCP has no idea what interface it's going out on
> at any given time... still, I'm on a quest to minimize latency while
> letting offloads keep working.
> 


Thread overview: 14+ messages
2013-07-10 19:30 Dave Taht
2013-07-10 20:19 ` Keith Winstein
2013-07-10 20:38   ` Jim Gettys
2013-07-10 20:41     ` Keith Winstein
2013-07-10 20:46       ` Jim Gettys
2013-07-10 21:42         ` Dave Taht
2013-07-10 22:10         ` Kathleen Nichols
2013-07-10 22:44           ` Dave Taht
2013-07-10 23:07             ` Eric Dumazet
2013-07-11 15:45             ` Kathleen Nichols [this message]
2013-07-11 16:22               ` Eric Dumazet
2013-07-11 16:54                 ` Kathleen Nichols
2013-07-11 17:17                   ` Dave Taht
2013-07-10 20:40   ` Dave Taht
