[Cake] Hard limit codel: more discussion

Fengyu Gao feng32tc at gmail.com
Wed Sep 30 03:16:24 EDT 2015

Hi, I'm the author of that paper. I saw the discussions today:


Now I'd like to say something, even though it may seem like a stupid idea -_-

Kathleen Nichols <nich... at pollere.com> writes:

> You are taking this much too seriously. This was written in order to
> write a paper.

Yes, I wrote that paper so that I could graduate from university. I need 12 points
and that paper is worth 5. That conference is far behind top ones like SIGCOMM,
INFOCOM and MOBICOM, but what I wanted was to earn points more quickly.

I apologize that the paper is poorly written and that it may contain
misunderstandings of the original algorithm, but I am not ashamed of myself,
since it was about how to survive.

Rich Brown <richb.hano... at gmail.com> writes:

> Please don't fisk this. The paper is *way* too long to be worth a
> sentence-by-sentence refutation of every inaccuracy or outright
> wrong-headed understanding of Codel... :-)

I have read the paper (Controlling Queue Delay) more than twice, but now
I doubt whether I really understand it.

As I understand it, the original CoDel algorithm uses a large buffer, and it does
not care about delay spikes, even ones above 500ms, as long as they come from a
good flow (meaning the delay arises for a moment and then is gone forever).

And as I understand it, the second point of CoDel is that it suppresses bad flows
even if they add only 50ms of delay.

This strategy is acceptable, but may not satisfy everyone. What if some (home)
user simply wants low latency all the time?

Configuring a fifo queue with a small buffer (e.g., bfifo 100KB) works,
though of course throughput will be affected.
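
The trade-off is easy to put in numbers: a full byte-limited FIFO must drain at
the link rate before a newly arrived packet departs. A small sketch, with an
illustrative 4 Mbit/s uplink and a 100 KB buffer taken as 100,000 bytes (my
assumptions, not figures from the patch):

```python
# Worst-case queueing delay of a byte-limited FIFO: a full buffer
# must drain at the link rate before a newly arrived packet departs.
def max_fifo_delay(buffer_bytes: int, link_bps: float) -> float:
    """Return the worst-case queueing delay in seconds."""
    return buffer_bytes * 8 / link_bps

# Assumed numbers: a 100 KB bfifo on a 4 Mbit/s uplink
# caps queueing delay at 200 ms.
print(max_fifo_delay(100_000, 4_000_000))  # -> 0.2
```

So the byte limit directly bounds the worst-case latency, at the cost of
dropping any burst larger than the buffer.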

Configuring a CoDel queue with a small buffer (e.g., 100KB) is better.
Think about it: in this case we don't let large bursts pass immediately, so
good flows will be suppressed, the same as with bfifo 100KB.
However, if some bad flow happens to lie in the 100KB buffer,
it is suppressed as well. That's why I think it's better than bfifo.
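
To make the combination concrete, here is a toy model of the idea: a hard byte
cap on enqueue plus sojourn-time-based dropping on dequeue. This is my
simplified sketch, not the actual hlc patch; in particular the real CoDel drop
schedule (interval divided by sqrt of the drop count) is reduced here to a
plain target check:

```python
from collections import deque

class HardLimitCodelSketch:
    """Toy model: a byte-capped buffer combined with sojourn-time-based
    dropping.  NOT the actual hlc patch; the real CoDel control law
    (interval / sqrt(count)) is deliberately omitted."""

    def __init__(self, limit_bytes=100_000, target_s=0.005):
        self.q = deque()          # entries: (enqueue_time, size_bytes)
        self.backlog = 0          # current queue size in bytes
        self.limit = limit_bytes
        self.target = target_s

    def enqueue(self, now, size):
        # Hard byte limit: tail-drop anything that would overflow it.
        if self.backlog + size > self.limit:
            return False
        self.q.append((now, size))
        self.backlog += size
        return True

    def dequeue(self, now):
        while self.q:
            t, size = self.q.popleft()
            self.backlog -= size
            if now - t > self.target:   # sojourn above target: drop
                continue                # (simplified CoDel decision)
            return size
        return None
```

Note that the enqueue check is expressed in bytes, which is exactly what a
packet-count limit cannot express for variable-size packets.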

But wait, it's strange that we just cannot set a "100KB buffer" with the
current CoDel implementation, since its limit is expressed in packets.

That's why I wrote the patch. I think it's simple but useful (at least for
some users).

Kathleen Nichols <nich... at pollere.com> writes:

> Gee, I thought the code was copyrighted.

This patch is neither copyrighted nor patent protected. Its license on
Google Code has always been GPLv3. Publishing a conference paper does
not affect that.

On 16 Apr, 2015, at 14:50, Toke Høiland-Jørgensen <t... at toke.dk> wrote:

> Surely, 4Mbps is enough for everybody?

A typical 720p (100 min) film encoded in h264/aac has a size of around 4GB,
so the bitrate is about 5.3 Mbps. And now h265 is coming...
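
The arithmetic behind that figure, taking 4 GB as 4 x 10^9 bytes and 100
minutes as 6000 seconds (my reading of the units):

```python
size_bits = 4e9 * 8            # 4 GB film, decimal gigabytes assumed
duration_s = 100 * 60          # 100 minutes
bitrate_mbps = size_bits / duration_s / 1e6
print(round(bitrate_mbps, 1))  # -> 5.3
```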

What I mean is that 4Mbps is enough for 720p video (if the RTT is 500ms with a
single-thread tcp transfer). Of course it cannot support 1080p or 4k.
For high-RTT environments, a simple workaround is to use multi-threaded transfers.
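
Why the RTT matters for a single connection: throughput is bounded by the
congestion window divided by the RTT, so sustaining a rate requires a window
covering the bandwidth-delay product. The numbers below are my illustration,
not from the paper:

```python
def window_needed(rate_bps: float, rtt_s: float) -> float:
    """Bytes of in-flight data needed to keep the pipe full:
    throughput = window / RTT, so window = rate * RTT (in bytes)."""
    return rate_bps * rtt_s / 8

# At 500 ms RTT, a single TCP flow needs a ~250 KB window for 4 Mbit/s.
print(window_needed(4_000_000, 0.5))  # -> 250000.0
```

Multiple parallel transfers sidestep this by splitting the required window
across connections.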

Also, the tcp implementations in today's OSes are somewhat different from the
NewReno used in the ns-2 simulator. The real-world performance of hlc should
be tested further.

On 4/16/15 5:00 AM, Jonathan Morton wrote:
> But in general AQM can’t be used to solve that problem without also
> suffering poor throughput; combining AQM with FQ *does* solve it.
> Just like FQ is unfair to single flows competing against a swarm, but
> classifying the swarm traffic into a separate traffic class fixes
> that problem too.
> Which of course is why cake uses AQM, FQ *and* Diffserv, all at
> once.
> The linked paper didn’t measure HLC against fq_codel, even though
> they mention fq_codel.  That’s a major shortcoming.

I think it's clear why I did not compare fq_codel with hlc. If I had, I should
have compared it with fq_hlc, not hlc.

The patch is very simple, so it would also be easy to write an fq_hlc patch. I
did not compare fq_codel with fq_hlc because I did not want to study how fair
queuing improves latency, which is a factor unrelated to that paper.

For outgoing traffic in home networks (which is the focus of that paper),
I think that the sfq implemented decades ago can solve most problems if it's
further tuned.

I remember that sfq can hold up to 128 packets and 128 flows. That size is too
small for today's embedded devices (64+MB of RAM) and access links (up to
100 Mbps).

However, the size was never increased, probably for compatibility. When there
are lots of bulk uploads and limited latency-sensitive traffic, fair queuing
can guarantee that flows with little traffic are processed in time.
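
One way to see that guarantee: under round-robin service, a flow holding a
single packet waits for at most one packet from each other active flow, no
matter how deep the bulk flows' queues are. A toy scheduler to illustrate (my
sketch, not sfq itself):

```python
from collections import deque

def round_robin_drain(flows):
    """Serve one packet per non-empty flow per round; return the
    order in which flow names were served."""
    queues = {name: deque(pkts) for name, pkts in flows.items()}
    order = []
    while any(queues.values()):
        for name, q in queues.items():
            if q:
                order.append(name)
                q.popleft()
    return order

# The sparse flow's only packet is served in the first round, even
# though the bulk flow arrived first with 5 packets queued.
order = round_robin_drain({"bulk": ["b"] * 5, "sparse": ["s"]})
print(order.index("sparse"))  # -> 1 (second packet served overall)
```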

It fails when there are hundreds of upload sessions and each session is very
slow, which is not a common scenario. Most of the time there are only a few
upload sessions using more than 80% of the outgoing bandwidth. In that case,
simple (tuned) sfq works fine.

Then one day I realized that I should focus more on incoming traffic. In this
case, traffic is received at the home gateway after it has already passed the
bottleneck. Things are different from traditional AQM (applied right at the
bottleneck). For downstream QoS, the major problem is that we have no access
to ISPs' devices. What we can do is simply drop some packets.
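
In the simplest case, "drop some packets" means policing: discard anything
above a configured rate so TCP senders back off before the ISP's queue fills.
A toy token-bucket policer, as my illustration of the general mechanism rather
than anything from that paper:

```python
class TokenBucketPolicer:
    """Admit packets only while byte credit ('tokens') remains;
    credit refills at rate_bps.  Drops make TCP senders slow down
    before the upstream bottleneck queue can build."""

    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8       # refill rate in bytes/second
        self.burst = burst_bytes       # maximum accumulated credit
        self.tokens = float(burst_bytes)
        self.last = 0.0                # time of the previous packet

    def admit(self, now: float, size: int) -> bool:
        # Refill credit for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False                   # policed: drop the packet
```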

This is about another paper, written with much more effort.
