From: Fengyu Gao
To: cake@lists.bufferbloat.net
Date: Wed, 30 Sep 2015 15:16:24 +0800
Subject: [Cake] Hard limit codel: more discussion

Hi, I'm the author of that paper.
I saw the discussions today: https://lists.bufferbloat.net/pipermail/cake/2015-April/000057.html

Now I'd like to say something, though it may seem to be a stupid idea -_-

Kathleen Nichols writes:
> You are taking this much too seriously. This was written in order to
> write a paper.

Yes, I wrote that paper so that I can graduate from university. I need 12 points, and that paper is worth 5. That conference is far behind top ones like SIGCOMM, INFOCOM and MOBICOM, but what I want is to get points more quickly. I apologize that the paper is poorly written and may contain misunderstandings of the original algorithm, but I do not feel ashamed of myself, since it is about how to survive.

Rich Brown writes:
> Please don't fisk this. The paper is *way* too long to be worth a
> sentence-by-sentence refutation of every inaccuracy or outright
> wrong-headed understanding of Codel... :-)

I have read the paper (Controlling Queue Delay) more than twice, but now I doubt whether I really understand it.

As I understand it, the original CoDel algorithm uses a large buffer and does not care about delay spikes even above 500 ms, as long as they come from a good flow (meaning the delay arises for a moment and then is gone for good).

And as I understand it, the second point of CoDel is that it suppresses bad flows even if they add only 50 ms of delay.

This strategy is acceptable, but may not satisfy everyone. What if some (home) user simply wants low latency all the time?

Configuring a FIFO queue with a small buffer (e.g., bfifo 100KB) works, and of course throughput will suffer.

Configuring a CoDel queue with a small buffer (e.g., 100KB) is better. Think about it: in this case we still don't let large bursts pass immediately, so good flows are suppressed, the same as with bfifo 100KB. However, if some bad flow happens to sit in the 100KB buffer, it is also suppressed. That's why I think it's better than bfifo.
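To make the comparison concrete, here is a sketch of the two configurations I mean, assuming a WAN-facing interface named eth0 (a placeholder, not from the paper):

```shell
# Small-buffer FIFO: a hard 100-kilobyte byte limit, no AQM at all
tc qdisc replace dev eth0 root bfifo limit 100kb

# Stock CoDel counts its limit in packets, not bytes, so the closest
# stand-in for a 100KB buffer is a rough conversion (100KB / ~1514B MTU)
tc qdisc replace dev eth0 root codel limit 66
```

The 66-packet figure is only an approximation for full-size frames; with small packets it admits far less than 100KB, which is exactly the mismatch discussed next.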
But wait, it's strange that we simply cannot set a "100KB buffer" with the current CoDel implementation - its limit is based on the number of packets. That's why I wrote the patch. I think it's simple but useful (at least for some users).

Kathleen Nichols writes:
> Gee, I thought the code was copyrighted.

This patch is neither copyright-restricted nor patent protected. Its license on Google Code has always been GPLv3. A published conference paper does not affect that.

On 16 Apr, 2015, at 14:50, Toke Høiland-Jørgensen wrote:
> Surely, 4Mbps is enough for everybody?

A typical 720p (100 min) film encoded in h264/aac has a size of around 4GB, so the bitrate is about 5.3 Mbps. And now h265 is coming... What I mean is that 4 Mbps is enough for 720p video (if RTT is 500 ms with a single-thread TCP transfer). Of course it cannot support 1080p or 4K. In high-RTT environments, a simple workaround is to use a multi-threaded downloader.

Also, the TCP implementations in today's operating systems are somewhat different from the NewReno used in the ns-2 simulator. The real-world performance of hlc should be tested further.

On 4/16/15 5:00 AM, Jonathan Morton wrote:
>
> But in general AQM can't be used to solve that problem without also
> suffering poor throughput; combining AQM with FQ *does* solve it.
> Just like FQ is unfair to single flows competing against a swarm, but
> classifying the swarm traffic into a separate traffic class fixes
> that problem too.
>
> Which of course is why cake uses AQM, FQ *and* Diffserv, all at
> once.
>
> The linked paper didn't measure HLC against fq_codel, even though
> they mention fq_codel. That's a major shortcoming.
>

I think it's clear why I did not compare fq_codel with hlc: if I did, I should compare it with fq_hlc, not hlc. The patch is very simple, so it's also easy to write a fq_hlc patch.
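By the way, the 5.3 Mbps figure above is just arithmetic (using decimal units, 4 GB = 32 Gbit over 100 minutes):

```shell
# 4 GB ~= 32000 Mbit; 100 min = 6000 s; 32000 / 6000 ~= 5.33 Mbps
awk 'BEGIN { printf "%.2f Mbps\n", 4 * 8 * 1000 / (100 * 60) }'
```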
I did not compare fq_codel with fq_hlc because I did not want to study how fair queuing improves latency, which is a factor not related to that paper.

For outgoing traffic in home networks (which is the focus of that paper), I think the sfq implemented decades ago can solve most problems if it's further tuned.

I remember that sfq can hold up to 128 packets and 128 flows. That size is too small for today's embedded devices (64+ MB RAM) and access links (up to 100 Mbps). However, the defaults were probably never increased for compatibility. When there are lots of bulk uploads and few latency-sensitive packets, fair queuing can guarantee that flows with little traffic are processed in time.

It fails when there are hundreds of upload sessions and each session is very slow, which is not a common scenario. Most of the time there are only several upload sessions using more than 80% of the outgoing bandwidth. In that case, simple (tuned) sfq works fine.

Then one day I realized that I should focus more on incoming traffic. In this case, traffic is received at the home gateway after it has already passed the bottleneck. Things are different from traditional AQM (applied right at the bottleneck). For downstream QoS, the major problem is that we have no access to the ISPs' devices. What we can do is simply drop some packets. This is about another paper, written with much more effort.
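For what it's worth, with a recent iproute2 the sfq tuning I have in mind needs no kernel patch; the interface name and the sizes below are placeholders, not values from the paper:

```shell
# Enlarge SFQ beyond its classic defaults (127 packets, 127 flows):
# more queue depth and more hash buckets for a faster home uplink
tc qdisc replace dev eth0 root sfq limit 1024 flows 1024 divisor 2048 perturb 10
```

`perturb 10` rehashes flows every 10 seconds so unlucky hash collisions between a bulk flow and a latency-sensitive one don't persist.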