CoDel AQM discussions
From: Jesper Louis Andersen <jesper.louis.andersen@gmail.com>
To: codel@lists.bufferbloat.net
Subject: [Codel] Another use of CoDel in an Erlang queue system.
Date: Fri, 28 Dec 2012 15:36:04 +0100
Message-ID: <CAGrdgiW7ZR_H3Y+RWkQ13Ciab=4f7RazYeM+uvBTCTwSLoWfjA@mail.gmail.com>


Hi CoDel list.

I have implemented the CoDel algorithm in Erlang for a subsystem I am
writing. It is one of the "abuse one algorithm in another setting" cases.
Here is the implementation, all 200 lines including comments:

https://github.com/jlouis/safetyvalve/blob/master/src/sv_codel.erl

One advantage of Erlang is that a lot of the if-maze in the ns2 codel patch
can be reduced to a single pattern match here.

The "safetyvalve" application acts as a queueing layer in an Erlang
application. Requests arrive at the Erlang system in a fairly controlled,
roughly Poisson-like fashion (though not truly Poisson in practice). In
Erlang, we spawn a new (ultra-lightweight) process per incoming request.
The problem is that we have limited resources in the system and we want to
protect base service. As soon as a small process begins doing work, it may
use a lot of resources, especially memory. We may know the system can
handle at most 800 simultaneous requests of a given type, or that the
total number of connections can't go above 5000. You could simply stop
providing service when the service ceiling is hit, but that would not
absorb sudden "spikes" of traffic. This is exactly why we need a queue of
incoming requests: to smooth out request spikes and handle them over time.
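The admission-control idea can be sketched like this (a hypothetical illustration, not the safetyvalve API; the names `SafetyValve`, `ask`, and `done` are invented for the example):

```python
from collections import deque

class SafetyValve:
    """Run at most `max_concurrent` tasks at once; queue overflow
    traffic up to `max_queue` so short spikes are smoothed out
    rather than rejected outright."""

    def __init__(self, max_concurrent=800, max_queue=5000):
        self.max_concurrent = max_concurrent
        self.max_queue = max_queue
        self.running = 0
        self.queue = deque()

    def ask(self, task):
        if self.running < self.max_concurrent:
            self.running += 1
            return "go"
        if len(self.queue) < self.max_queue:
            self.queue.append(task)
            return "queued"
        return "rejected"  # both the ceiling and the queue are full

    def done(self):
        # A finished task hands its slot to the oldest queued task.
        if self.queue:
            return "go", self.queue.popleft()
        self.running -= 1
        return "idle", None
```

The queue is what turns a hard ceiling into spike tolerance; the next paragraph is about what that queue does to latency.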

The problem, however, is that the queue I use is a tail-drop queue, which
tells me nothing about the sojourn time through the queue. My fear is that
the tail-drop behavior will just add more latency to the tasks that do get
through the queue. Hence the attempt at using CoDel for this.

Preliminary tests show that it does appear to perform as it should: when
the Target is not met for an Interval, we begin dropping tasks from the
queue, as expected. More experiments are needed, however, before I know
whether this works well. One important difference is that there is no
TCP-like behaviour where a window closes down upon dropped packets. But
CoDel *does* allow me to provide early feedback that the Target can't
really be met. For real-time requests, this is actually the desired
behavior.
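For readers who haven't seen the control law: the trick that makes CoDel fit this setting is that each dequeued task carries its enqueue timestamp, so the sojourn time falls out directly. Here is a minimal sketch of the standard CoDel drop schedule applied to tasks, with assumed Target/Interval values; this is the published algorithm's shape, not the sv_codel.erl code.

```python
import math

TARGET = 5.0      # acceptable sojourn time (ms); assumed value
INTERVAL = 100.0  # how long standing delay is tolerated (ms); assumed value

class CoDelState:
    """Minimal sketch of the CoDel control law over task sojourn times."""

    def __init__(self):
        self.first_above = None  # deadline set when sojourn first exceeds TARGET
        self.dropping = False
        self.count = 0           # drops in the current dropping state
        self.drop_next = 0.0

    def should_drop(self, enqueue_ts, now):
        sojourn = now - enqueue_ts
        if sojourn < TARGET:
            # Good queue: reset and deliver.
            self.first_above = None
            self.dropping = False
            return False
        if self.first_above is None:
            # Above target, but grant a grace period of one INTERVAL.
            self.first_above = now + INTERVAL
            return False
        if not self.dropping and now >= self.first_above:
            # Standing delay persisted a full interval: start dropping.
            self.dropping = True
            self.count = 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        if self.dropping and now >= self.drop_next:
            # Drop faster and faster until the sojourn time recovers.
            self.count += 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        return False
```

The `INTERVAL / sqrt(count)` schedule is what makes the drop rate ramp up gradually while the queue stays above Target.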

The next plan is to provide a TCP-like window feature for batch processing
of jobs. The idea is that you can have a large list of jobs that need to
be run and have the system tune itself. TCP congestion control works, so
why not apply it in a new setting :)
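The planned window would presumably follow TCP's additive-increase/multiplicative-decrease shape; a hypothetical sketch of that idea (invented class, not anything in safetyvalve):

```python
class AIMDWindow:
    """Grow the number of concurrent batch jobs additively while they
    succeed; shrink multiplicatively when the system pushes back
    (e.g. when CoDel starts dropping)."""

    def __init__(self, initial=1, minimum=1):
        self.window = initial
        self.minimum = minimum

    def on_success(self):
        self.window += 1  # additive increase

    def on_pushback(self):
        self.window = max(self.minimum, self.window // 2)  # multiplicative decrease
```

With CoDel's drops acting as the congestion signal, this would let the batch runner probe for the sustainable concurrency level on its own.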


-- 
J.

