From: Greg White <g.white@CableLabs.com>
To: Jonathan Morton <chromatix99@gmail.com>,
sahil grover <sahilgrover013@gmail.com>
Cc: "codel@lists.bufferbloat.net" <codel@lists.bufferbloat.net>
Subject: Re: [Codel] why RED is not considered as a solution to bufferbloat.
Date: Wed, 4 Mar 2015 06:51:44 +0000
Message-ID: <D11B960B.44E5F%g.white@cablelabs.com>
In-Reply-To: <CAJq5cE0OJQ-GsuB-4Cn6JEMK0ZNmOu8yLd5DzMJ2kMuMG2eACg@mail.gmail.com>
This is a common misconception.
The "NotCodel" approach, operating entirely at enqueue time, will have to predict buffering latency for a packet when it is enqueued (likely based on queue depth and expected dequeue rate), as opposed to CoDel which measures the buffering latency for the packet after it has been experienced. If dequeue rate is predictable (true for some links, less so for others) then the NotCodel approach can be designed such that it will drop (or mark) *the same packets* that the CoDel approach would.
In some hardware, it is much more feasible to perform these operations at enqueue time rather than at dequeue time. Implementers of such systems shouldn't be dissuaded from implementing CoDel (especially if their dequeue rate is reasonably predictable).
We experimented with an implementation of "NotCodel" (we called it CoDel-DT, for drop tail), and that code is in the ns-2.36 release candidate. One caveat: the control law in that implementation does not precisely match CoDel's, so there will be some slight differences in results between the two.
-Greg
From: Jonathan Morton <chromatix99@gmail.com>
Date: Thursday, February 26, 2015 at 6:56 AM
To: sahil grover <sahilgrover013@gmail.com>
Cc: "codel@lists.bufferbloat.net" <codel@lists.bufferbloat.net>
Subject: Re: [Codel] why RED is not considered as a solution to bufferbloat.
Okay, let me walk you through this.
Let's say you are managing a fairly slow link, on the order of 1 megabit. The receiver on the far side of it is running a modern OS and has opened its receive window to a whole megabyte. It would take about 10 seconds to pass a whole receive window through the link.
This is a common situation in the real world, by the way.
Now, let's suppose that the sender was constrained to a slower send rate, say half a megabit, for reasons unrelated to the network. Maybe it's a streaming video service on a mostly static image, so it only needs to send small delta frames and compressed audio. It has therefore opened the congestion window as wide as it will go, to the limit of the receive window. But then the action starts, there's a complete scene change, and a big burst of data is sent all at once, because the congestion window permits it.
So the queue you're managing suddenly has a megabyte of data in it. Unless you do something about it, your induced latency just leapt up to ten seconds, and the VoIP call your user was on at the time will drop out.
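A quick back-of-the-envelope check of that figure, as a sketch using nominal values rather than anything measured:

    WINDOW_BYTES = 1_000_000    # ~1 MB receive window sitting in the queue
    LINK_BPS     = 1_000_000    # 1 Mbit/s link

    drain_seconds = WINDOW_BYTES * 8 / LINK_BPS   # ~8 s of raw serialization
    print(f"standing queue delay ~ {drain_seconds:.0f} s")
    # With framing overhead and ack traffic this is the "about ten seconds" above.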
Now, let's compare the behaviour of two AQMs: Codel and something almost, but not entirely, unlike Codel. Specifically, it does everything that Codel does, but at enqueue instead of dequeue time.
Both of them will let the entire burst into the queue. Codel only takes action if the queueing delay stays above its target for more than 100 ms. Since this burst arrives all at once, and the queue stayed nice and empty up until now, neither AQM will decide to mark or drop packets at that moment.
Now, packets are slowly delivered across the link. Each one takes about 15 ms to deliver. They are answered by acks, and the sender obligingly sends fresh packets to replace them. After about six or seven packets have been delivered, both AQMs will decide to start marking packets (this is an ECN-enabled flow). Let's assume that the next packet after this point will be marked. This will be the receiver's first clue that congestion is occurring.
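A simplified sketch of that trigger, collapsing CoDel's control law to just this first marking decision; the 5 ms target and 100 ms interval are the usual defaults, not something stated above:

    TARGET   = 0.005   # 5 ms: acceptable standing queueing delay
    INTERVAL = 0.100   # 100 ms: how long delay must stay above target before acting

    def should_start_marking(sojourn, first_time_above_target, now):
        # Fire once the delay (measured at dequeue for Codel, predicted at
        # enqueue for NotCodel) has exceeded the target continuously for at
        # least one interval; with ~15 ms packets that happens after roughly
        # the seventh packet.
        return sojourn > TARGET and (now - first_time_above_target) >= INTERVAL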
For Codel, the next packet available to mark will be the eighth packet. So the receiver learns about the congestion event 120 ms after it began. It will echo the congestion mark back to the sender in the corresponding ack, which will immediately halve the congestion window. It will still take at least ten seconds to drain the queue.
For NotCodel, the next available packet for marking is the one a megabyte later than the eighth packet. The receiver won't see that packet until 10120 ms after the congestion event began. So the sender will happily keep the queue ten seconds long for the next ten seconds, instead of backing off straight away. It will take at least TWENTY seconds to drain the queue.
Note that even if the link bandwidth was 10 megabits rather than one, with the same receive window size, it would still take one second to drain the queue with Codel and two seconds with NotCodel.
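Putting rough numbers on that comparison, following the figures above:

    PKT_TIME    = 0.015   # ~15 ms to serialize one packet at 1 Mbit/s
    QUEUE_DELAY = 10.0    # ~10 s standing queue (the 1 MB burst)

    codel_feedback    = 8 * PKT_TIME                # mark rides the 8th packet at the head: ~0.12 s
    notcodel_feedback = QUEUE_DELAY + 8 * PKT_TIME  # mark sits behind the whole burst: ~10.12 s

    print(f"Codel feedback delay    ~ {codel_feedback:.2f} s")
    print(f"NotCodel feedback delay ~ {notcodel_feedback:.2f} s")
    # At 10 Mbit/s with the same window, scale by ten: roughly 1 s to drain
    # the queue with dequeue marking versus roughly 2 s with enqueue marking.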
This relatively simple traffic scenario shows that marking at dequeue, rather than at enqueue, drains the standing queue in half the time: twice as effective at managing queue length.
- Jonathan Morton