[Bloat] [aqm] What is a good burst? -- AQM evaluation guidelines
curtis at ipv6.occnc.com
Sun Dec 15 13:56:56 EST 2013
In message <CAEjQQ5XPiqL9ywD3zXGKvUb_FoWmgJ3Zq_g_b8ONfHieHaRWzg at mail.gmail.com>
Naeem Khademi writes:
> Hi all
> I'm not sure if this has already been covered in any of the other
> threads, but looking at
> draft-ietf-aqm-recommendation-00, the question remains: "what is a
> good burst (size) that AQMs should allow?" and/or "how an AQM can have
> a notion of the right burst size?".
> and how "naturally-occuring bursts" mentioned in
> draft-ietf-aqm-recommendation-00 can be defined?
It is probably best not to try to define "naturally-occurring bursts",
since these depend on the type of traffic on the Internet, or on the
target network, and will vary as services evolve. The term may
therefore not be definable beyond the words that make it up, and it
may be a disservice to try to define it.
If the draft is to attempt to define a target for burst tolerance (not
implying that a leaky bucket is used, even though it shares that
term), then the definition should be in terms of end results, and it
should not be specific to any particular type of service. The
criterion that applies to all services, and is perhaps most important,
is fairness among flows. Beyond that, the criteria for "good end
results" differ by traffic type. For example, for non-interactive bulk
transfer, high goodput and fairness are desirable. On the other side
of the continuum, for interactive traffic, bounded delay is important,
though throughput still matters.
The whole fairness thing is a sticky point and goes beyond AQM alone.
Today fairness on the Internet and most networks relies on the good
behavior of end-system protocols such as TCP. There are
hyper-aggressive TCP variants and also real time applications that
don't reduce load when loss is detected. In order for there to be
enforceable fairness, something like SFQ or some variant such as
cascaded SFQ is needed to better isolate flows. SFQ just breaks the
queue up into groups of flows, and many unlucky flows will end up in a
queue with a flow behaving badly. In cascaded SFQ, if any specific SFQ
queue is growing, then that queue is broken down further. The depth
and/or total number of queues in cascaded SFQ is generally bounded,
but the end result is to isolate poorly behaving flows very well with
far fewer queues than would be needed for one queue per flow.
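The SFQ idea above can be sketched roughly as follows. This is a toy
model for illustration only; the class name, the SHA-256 hash choice,
and the bucket count are my own assumptions, not taken from any
particular implementation:

```python
import hashlib
from collections import deque

class SFQ:
    """Toy stochastic fair queueing: hash each flow into one of a
    small, fixed number of queues, then serve the queues round-robin.
    Colliding flows share a queue -- the "unlucky" case above."""

    def __init__(self, n_queues=8, perturb=0):
        self.n = n_queues
        self.perturb = perturb   # changing this re-shuffles hash collisions
        self.queues = [deque() for _ in range(n_queues)]
        self._rr = 0             # round-robin service pointer

    def _bucket(self, flow_id):
        data = f"{self.perturb}:{flow_id}".encode()
        return hashlib.sha256(data).digest()[0] % self.n

    def enqueue(self, flow_id, pkt):
        self.queues[self._bucket(flow_id)].append(pkt)

    def dequeue(self):
        # Serve one packet from the next non-empty queue, round-robin,
        # so a badly behaving flow only starves flows in its own bucket.
        for _ in range(self.n):
            q = self.queues[self._rr]
            self._rr = (self._rr + 1) % self.n
            if q:
                return q.popleft()
        return None
```

With this scheme, a flow that dumps many packets into its bucket
cannot hold back a one-packet flow in a different bucket by more than
one round of service; cascaded SFQ would additionally split any bucket
whose queue keeps growing.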
No one, AFAIK, has tried to allow flows to pick a type of queue (small
queue vs. deep). For SFQ or cascaded SFQ, it might be best if, before
loss occurs, high variation in delay and/or growing delay caused the
delay-sensitive application to back off. This way, if fairness is
achieved, "bounded delay" may be achieved without forcing a small
queue. This may also be related but slightly off topic.
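A minimal sketch of that back-off idea, assuming the application can
compare an observed queuing delay against some target; the function
name, target, and gain constants are illustrative assumptions, not
from any standard:

```python
def adjust_rate(rate, delay_ms, target_ms=20.0, gain=0.1, min_rate=10.0):
    """Toy delay-based back-off: shrink the sending rate when observed
    queuing delay exceeds a target, and grow it when delay is below
    the target -- reacting to growing delay before any loss occurs."""
    error = (target_ms - delay_ms) / target_ms  # >0 below target, <0 above
    return max(min_rate, rate * (1.0 + gain * error))
```

A sender calling this once per RTT would converge toward whatever rate
keeps its queuing delay near the target, which is roughly the behavior
LEDBAT-style protocols aim for.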
Back to AQM. Some forms of AQM do better at specific criteria, or do
better at finding a good tradeoff among the commonly cited criteria:
fairness, bulk transfer goodput, and bounded delay. Where the
tradeoffs should be set is maybe not a good thing for the AQM WG to
try to define, as the optimal point will depend on perspective: on
what mix of services is being used, and on which services are of
greater importance to a specific individual or organization.
So briefly, my answer is: as a WG, I don't think we want to go there.
If we do go there at all, then we should define "good AQM" in terms of
achieving a "good" tradeoff between fairness, bulk transfer goodput,
and bounded delay. IMHO, sometimes vague is better.