CoDel AQM discussions
From: Kathleen Nichols <nichols@pollere.com>
To: codel@lists.bufferbloat.net
Subject: Re: [Codel] codel "oversteer"
Date: Wed, 20 Jun 2012 13:14:08 -0700	[thread overview]
Message-ID: <4FE22F10.3060002@pollere.com> (raw)
In-Reply-To: <4FE1F1C0.6030808@freedesktop.org>


Good traffic mixing seems to be the answer to ack compression and
fqcodel should provide that. I've just started to rerun the reverse
traffic scenarios I ran before we wrote the paper, now using (a slight
variant of Eric's) fqcodel and it looks better at controlling the
queue. It's not that ack compression foils codel; codel still works,
just not as well. Ack compression foils TCP.

	Kathie

On 6/20/12 8:52 AM, Jim Gettys wrote:
> On 06/20/2012 06:08 AM, Jonathan Morton wrote:
>> Is the cwnd also oscillating wildly or is it just an artefact of
>> the visible part of the queue only being a fraction of the real
>> queue?
>> 
>> Are ACK packets being aggregated by wireless? That would be a good
>> explanation for large bursts that flood the buffer, if the rwnd
>> opens a lot suddenly. This would also be an argument that 2*n is
>> too small for the ECN drop threshold.
> 
> Yeah, I've been worrying about ack compression...  Not sure exactly
> what we should be doing about it, as I don't fully understand it. -
> Jim
> 
>> 
>> The key to knowledge is not to rely on others to teach you it.
>> 
>> On 20 Jun 2012, at 04:32, Dave Taht <dave.taht@gmail.com> wrote:
>> 
>>> I've been forming a theory regarding codel behavior in some 
>>> pathological conditions. For the sake of developing the theory
>>> I'm going to return to the original car analogy published here,
>>> and add a new one - "oversteer".
>>> 
>>> Briefly:
>>> 
>>> If the underlying interface device driver is overbuffered, then
>>> when the packet backlog finally makes it into the qdisc layer it
>>> bursts up rapidly, and codel rapidly ramps up its drop rate,
>>> which corrects the problem. But then we are back in a state
>>> where, as with a car on ice or a very loose connection to the
>>> steering wheel, codel is "oversteering": it is not measuring the
>>> entire time-width of the queue, and so cannot control it well.
>>> 
>>> What I observe on wireless now with fq_codel under heavy load is
>>> oscillation in the qdisc layer between a 0-length queue and 70 or
>>> more packets backlogged, a burst of drops when that happens, and
>>> far more drops than the ECN marks I expected (the new (arbitrary)
>>> "drop ECN packets if > 2 * target" idea I was fiddling with
>>> illustrates the point better now). It's difficult to gain further
>>> direct insight without time and packet traces, and maybe
>>> exporting more data to userspace, but this sort of explains a
>>> report I got privately on x86 (with no ECN drop enabled), and the
>>> behavior of fq_codel on wireless in the present version of
>>> cerowrt.
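As I read Dave's experimental rule, it hard-drops even ECN-capable packets once the standing delay exceeds 2 * target, rather than only marking them. A sketch of that decision (hypothetical, not the actual patch; names and defaults assumed):

```python
TARGET_MS = 5.0  # CoDel's default target sojourn time (5 ms)

def react(sojourn_ms, ecn_capable):
    """Decide what to do with a packet CoDel wants to signal on,
    under the experimental "drop ECN if > 2 * target" rule.

    Returns 'pass', 'mark', or 'drop'. Normally an ECN-capable
    packet would be marked instead of dropped; under this rule,
    once the delay is more than twice target, it is dropped anyway.
    """
    if sojourn_ms <= TARGET_MS:
        return 'pass'                      # delay acceptable
    if ecn_capable and sojourn_ms <= 2 * TARGET_MS:
        return 'mark'                      # ordinary ECN marking
    return 'drop'                          # far over target, or no ECN
```

Jonathan's question upthread is exactly whether 2 * target is too small a threshold here when wireless aggregation delivers large bursts; a burst that lands well past 2 * target would then produce the flood of drops Dave sees instead of marks.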
>>> 
>>> (I could always have introduced a bug, too; if it weren't for
>>> the private report, and having to catch a plane shortly, I
>>> wouldn't be posting this now.)
>>> 
>>> Further testing ideas (for others!) to try:
>>> 
>>> * Increase BQL's setting to over-large values on a BQL-enabled
>>>   interface and see what happens.
>>> * Test with an overbuffered ethernet interface in the first
>>>   place.
>>> * Improve the ns3 model to have an emulated network interface
>>>   with user-settable buffering.
>>> 
>>> Assuming I'm right and others can reproduce this, it implies
>>> that much harder focus is now needed on BQL and
>>> overbuffering-related issues in the dozens? hundreds? of
>>> non-BQL-enabled ethernet drivers. And we already know that much
>>> more hard work on fixing wifi is needed.
>>> 
>>> Despite this I'm generally pleased with the fq_codel results
>>> over wireless I'm currently getting from today's build of
>>> cerowrt, and certainly the BQL-enabled ethernet drivers I've
>>> worked with (ar71xx, e1000) don't display this behavior, neither
>>> does soft rate limiting using htb - instead achieving a steady
>>> state for the packet backlog, accepting bursts, and otherwise
>>> being "nice".
>>> 
>>> -- Dave Täht SKYPE: davetaht http://ronsravings.blogspot.com/ 
>>> _______________________________________________ Codel mailing
>>> list Codel@lists.bufferbloat.net 
>>> https://lists.bufferbloat.net/listinfo/codel



Thread overview: 7+ messages
2012-06-20  1:32 Dave Taht
2012-06-20  3:01 ` [Codel] [Cerowrt-devel] " dpreed
2012-06-20 10:08 ` [Codel] " Jonathan Morton
2012-06-20 15:52   ` Jim Gettys
2012-06-20 19:03     ` [Codel] [Cerowrt-devel] " dpreed
2012-06-20 20:14     ` Kathleen Nichols [this message]
2012-06-20 20:07 ` [Codel] " Kathleen Nichols
