From: Curtis Villamizar <curtis@ipv6.occnc.com>
To: Naeem Khademi <naeem.khademi@gmail.com>
Cc: end2end-interest@postel.org, "aqm@ietf.org" <aqm@ietf.org>,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] [aqm] What is a good burst? -- AQM evaluation guidelines
Date: Sun, 15 Dec 2013 13:56:56 -0500 [thread overview]
Message-ID: <201312151857.rBFIuuea043478@gateway0.ipv6.occnc.com> (raw)
In-Reply-To: Your message of "Sun, 15 Dec 2013 06:35:04 +0100." <CAEjQQ5XPiqL9ywD3zXGKvUb_FoWmgJ3Zq_g_b8ONfHieHaRWzg@mail.gmail.com>
In message <CAEjQQ5XPiqL9ywD3zXGKvUb_FoWmgJ3Zq_g_b8ONfHieHaRWzg@mail.gmail.com>
Naeem Khademi writes:
> Hi all
>
> I'm not sure if this has already been covered in any of the other
> threads, but looking at
> http://www.ietf.org/proceedings/88/slides/slides-88-aqm-5.pdf and
> draft-ietf-aqm-recommendation-00, the question remains: "what is a
> good burst (size) that AQMs should allow?" and/or "how an AQM can have
> a notion of the right burst size?".
>
> and how can "naturally-occurring bursts" mentioned in
> draft-ietf-aqm-recommendation-00 be defined?
>
> Regards,
> Naeem
It is probably best not to try to define "naturally-occurring bursts",
since these depend on the type of traffic on the Internet, or on the
target network. They will vary with the type of target network and
will change as services on the Internet evolve. The term may
therefore not be definable beyond the words that make it up, and it
may be a disservice to try to define it.
If the draft is to attempt to define a target for burst tolerance (not
implying that a leaky bucket is used, even though it shares that
term), then the definition should be stated in terms of end results
and should not be specific to any particular type of service. Common
to all services, and perhaps most important, is fairness among flows.
Beyond that, the criteria for "good end results" differ by traffic
type. For example, for non-interactive bulk transfer, high goodput
and fairness are desirable. At the other end of the continuum, for
interactive traffic, bounded delay is important, though throughput and
fairness still matter.
<slightly-off-topic>
The whole fairness thing is a sticky point and goes beyond AQM alone.
Today, fairness on the Internet and on most networks relies on the
good behavior of end-system protocols such as TCP. There are
hyper-aggressive TCP variants, and also real-time applications that
don't reduce load when loss is detected. For fairness to be
enforceable, something like SFQ, or a variant such as cascaded SFQ, is
needed to better isolate flows. Plain SFQ just breaks the queue up
into groups of flows, so many unlucky flows will end up sharing a
queue with a flow that is behaving badly. In cascaded SFQ, if any
specific SFQ queue is growing, that queue is broken down further. The
depth and/or total number of queues in cascaded SFQ is generally
bounded, but the end result is that poorly behaving flows are very
well isolated with far fewer queues than would be needed for one queue
per flow.
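(The email gives no code; purely as an illustration, and with the
queue count, hash choice, and class names being my own assumptions
rather than anything from the thread, the flow-hashing and round-robin
service that plain SFQ performs might be sketched like this:)

```python
import hashlib
from collections import deque

class SFQ:
    """Illustrative sketch of stochastic fairness queueing (SFQ).

    Flows are hashed into a fixed number of queues, and service is
    round-robin across non-empty queues.  A badly behaving flow can
    then only hurt the flows that happen to hash into its bucket --
    which is exactly the collision problem cascaded SFQ addresses by
    splitting a growing queue further.
    """

    def __init__(self, num_queues=8):
        self.queues = [deque() for _ in range(num_queues)]
        self.next_q = 0  # round-robin pointer

    def bucket(self, flow_id):
        # Hash a flow identifier (e.g. an address/port 5-tuple) to a
        # queue index; colliding flows share a queue.
        digest = hashlib.sha256(repr(flow_id).encode()).digest()
        return digest[0] % len(self.queues)

    def enqueue(self, flow_id, packet):
        self.queues[self.bucket(flow_id)].append(packet)

    def dequeue(self):
        # Visit each queue at most once, round-robin, and serve the
        # first non-empty one; return None if everything is empty.
        for _ in range(len(self.queues)):
            q = self.queues[self.next_q]
            self.next_q = (self.next_q + 1) % len(self.queues)
            if q:
                return q.popleft()
        return None
```

A cascaded variant would watch each queue's length and, when one
grows, re-hash just that queue's flows into sub-queues, keeping the
total number of queues bounded.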
No one, AFAIK, has tried to allow flows to pick a type of queue (small
queue vs deep). For SFQ or cascaded SFQ it might be best if, before
loss occurs, high variation in delay and/or growing delay caused the
delay-sensitive application to back off. That way, once fairness is
achieved, "bounded delay" may be better achieved without forcing a
small queue. This too may be related but slightly off topic.
</slightly-off-topic>
Back to AQM. Some forms of AQM do better at specific criteria, or do
better at finding a good tradeoff among the commonly cited criteria:
fairness, bulk-transfer goodput, and bounded delay. Where the
tradeoffs should be set is perhaps not a good thing for the AQM WG to
try to define, since the optimal point depends on perspective: on what
mix of services is in use and on which services matter more to a
specific individual or organization.
So briefly, my answer is: as a WG, I don't think we want to go there.
If we do go there at all, then we should define "good AQM" in terms of
achieving a "good" tradeoff between fairness, bulk-transfer goodput,
and bounded delay. IMHO, sometimes vague is better.
Curtis
Thread overview: 16+ messages
2013-12-15 5:35 [Bloat] " Naeem Khademi
2013-12-15 12:26 ` Jonathan Morton
2013-12-15 15:16 ` Scharf, Michael (Michael)
[not found] ` <655C07320163294895BBADA28372AF5D14C5DF@FR712WXCHMBA15.zeu.alcatel-lucent.com>
2013-12-15 20:56 ` Bob Briscoe
2013-12-15 18:56 ` Curtis Villamizar [this message]
2014-01-02 6:31 ` [Bloat] [aqm] " Fred Baker (fred)
2014-01-03 18:17 ` [Bloat] [e2e] " dpreed
2014-01-30 19:27 ` [Bloat] [aqm] [e2e] " Dave Taht
2013-12-15 21:42 ` [Bloat] [aqm] " Fred Baker (fred)
2013-12-15 22:57 ` [Bloat] [e2e] " Bob Briscoe
2013-12-16 7:34 ` Fred Baker (fred)
2013-12-16 13:47 ` Naeem Khademi
2013-12-16 14:05 ` Naeem Khademi
2013-12-16 17:30 ` Fred Baker (fred)
2013-12-16 14:28 ` Jonathan Morton
2013-12-16 14:50 ` Steinar H. Gunderson