Subject: Re: [Cake] total download rate with many flows
From: George Amanakis
To: Jonathan Morton, cake@lists.bufferbloat.net
Date: Sat, 11 Nov 2017 22:48:56 -0500
Message-ID: <3cfb7852-ac1f-9274-a10e-2d8c6981922f@yahoo.com>

I totally understand what you are saying. However, I believe cake's egress and ingress modes currently behave as two extremes, and one could argue that neither of them is the golden mean.
With a patch in ingress mode (see below) and a single host using 32 flows to download, I managed to increase throughput from ~7 Mbps to ~10 Mbps (configured limit 12200 kbps), while latency increased from ~10 ms to ~50 ms, which would still be acceptable. For comparison, egress mode in the same setup gives me ~11.5 Mbps throughput and ~500 ms latency.

I would like to hear your thoughts about this idea: the patch increments q->time_next_packet differently for dropped packets than for packets that are passed through. Please focus on the idea, not the actual implementation :) (also pasted at https://pastebin.com/SZ14WiYw). A small standalone sketch of the arithmetic follows the quoted text at the end of this mail.

=============8<=============
diff --git a/sch_cake.c b/sch_cake.c
index 82f264f..a3a4a88 100644
--- a/sch_cake.c
+++ b/sch_cake.c
@@ -769,6 +769,7 @@ static void cake_heapify_up(struct cake_sched_data *q, u16 i)
 }
 
 static void cake_advance_shaper(struct cake_sched_data *q, struct cake_tin_data *b, u32 len, u64 now);
+static void cake_advance_shaper2(struct cake_sched_data *q, struct cake_tin_data *b, u32 len, u64 now);
 
 #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 8, 0)
 static unsigned int cake_drop(struct Qdisc *sch)
@@ -1274,7 +1275,7 @@ retry:
 		/* drop this packet, get another one */
 		if(q->rate_flags & CAKE_FLAG_INGRESS) {
 			len = cake_overhead(q, qdisc_pkt_len(skb));
-			cake_advance_shaper(q, b, len, now);
+			cake_advance_shaper2(q, b, len, now);
 			flow->deficit -= len;
 			b->tin_deficit -= len;
 		}
@@ -1286,8 +1287,6 @@ retry:
 		qdisc_qstats_drop(sch);
 		kfree_skb(skb);
 #endif
-		if(q->rate_flags & CAKE_FLAG_INGRESS)
-			goto retry;
 	}
 
 	b->tin_ecn_mark += !!flow->cvars.ecn_marked;
@@ -1351,6 +1350,24 @@ static void cake_advance_shaper(struct cake_sched_data *q, struct cake_tin_data
 	}
 }
 
+static void cake_advance_shaper2(struct cake_sched_data *q, struct cake_tin_data *b, u32 len, u64 now)
+{
+	/* charge packet bandwidth to this tin, lower tins,
+	 * and to the global shaper.
+	 */
+	if(q->rate_ns) {
+		s64 tdiff1 = b->tin_time_next_packet - now;
+		s64 tdiff2 = (len * (u64)b->tin_rate_ns) >> b->tin_rate_shft;
+		s64 tdiff3 = (len * (u64)q->rate_ns) >> q->rate_shft;
+
+		if(tdiff1 < 0)
+			b->tin_time_next_packet += tdiff2;
+		else if(tdiff1 < tdiff2)
+			b->tin_time_next_packet = now + tdiff2;
+
+		q->time_next_packet += (tdiff3*27)>>5;
+	}
+}
+
 static void cake_reset(struct Qdisc *sch)
 {
 	u32 c;
=============8<=============

On 11/10/2017 4:50 PM, Jonathan Morton wrote:
>
> In fact, that's why I put a failsafe into ingress mode, so that it
> would never stall completely.  It can happen, however, that throughput
> is significantly reduced when the drop rate is high.
>
> If throughput is more important to you than induced latency, switch to
> egress mode.
>
> Unfortunately it's not possible to guarantee both low latency and high
> throughput when operating downstream of the bottleneck link.  ECN
> gives you better results, though.
>
> - Jonathan Morton
>
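
To make the accounting concrete, here is a minimal userspace sketch (my illustration, not part of the patch) of how the modified shaper charges the global q->time_next_packet for a dropped packet; the rate and packet size are assumed example values, not measurements from my setup:

/* Standalone illustration (plain userspace C, not kernel code) of the
 * dropped-packet charge in the proposed cake_advance_shaper2().
 * The rate encoding (ns per byte, optionally scaled by rate_shft)
 * mirrors the shaper state used in sch_cake; the concrete numbers
 * below are assumptions chosen for the example.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Assumed shaper state: 12200 kbit/s expressed as ns per byte,
	 * with rate_shft = 0 to keep the arithmetic easy to follow. */
	uint64_t rate_ns   = 8ULL * 1000000000ULL / 12200000ULL; /* ~655 ns/byte */
	unsigned rate_shft = 0;
	uint32_t len       = 1514; /* assumed full-size Ethernet frame */

	/* Nominal serialization time of the packet at the shaped rate,
	 * i.e. tdiff3 in the patch. */
	int64_t tdiff3 = (int64_t)(((uint64_t)len * rate_ns) >> rate_shft);

	/* Unpatched ingress mode charges the full serialization time for a
	 * dropped packet; the patch charges only (tdiff3*27)>>5, i.e. 27/32
	 * (~84%) of it, so drops no longer consume the whole shaped budget. */
	int64_t full_charge    = tdiff3;
	int64_t patched_charge = (tdiff3 * 27) >> 5;

	printf("full charge:    %lld ns\n", (long long)full_charge);
	printf("patched charge: %lld ns (27/32 of full)\n", (long long)patched_charge);
	return 0;
}

With these assumed numbers the full charge comes to roughly 0.99 ms per dropped 1514-byte packet and the reduced charge to roughly 0.84 ms, which is presumably where the extra throughput relative to unmodified ingress mode comes from, at the cost of somewhat higher latency.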