From: Jonathan Morton
Date: Tue, 5 Dec 2017 03:38:03 +0200
To: Georgios Amanakis
Cc: Dave Taht, Cake List
Subject: Re: [Cake] cake vs fqcodel with 1 client, 4 servers

Ingress mode works by counting dropped packets, not only delivered packets, against the shaped limit. When there's a large number of non-ECN flows and a low BDP per flow, a lot of packets are dropped to try to keep the intra-flow latency in line. So the goodput tends to decrease as the flow count increases, but this is necessary to control latency.

The modified failsafe ensures that at most a third of the total bandwidth will "go missing" this way. Previously, as much as three-quarters could. At that threshold, Cake stops counting dropped packets, trading a reduction in latency control for maintaining reasonable goodput. There is no more sophisticated heuristic that I can think of to achieve ingress mode's goals.
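To make the accounting concrete, here is a rough sketch of the idea in C. The structure and names (shaper_state, account_dropped, the byte counters) are mine for illustration and are not the actual sch_cake code; the one-third threshold corresponds to dropped bytes reaching half of delivered bytes.

/*
 * Illustrative sketch only - not the actual sch_cake implementation.
 * Ingress mode charges dropped packets to the shaper as well, because
 * they already consumed capacity on the link upstream of the shaper.
 */
struct shaper_state {
	unsigned long long delivered_bytes;	/* bytes actually dequeued */
	unsigned long long dropped_bytes;	/* bytes dropped by the AQM */
	unsigned long long charged_bytes;	/* bytes charged to the shaper */
};

static void account_delivered(struct shaper_state *s, unsigned int len)
{
	s->delivered_bytes += len;
	s->charged_bytes += len;	/* delivered traffic always counts */
}

static void account_dropped(struct shaper_state *s, unsigned int len)
{
	s->dropped_bytes += len;

	/*
	 * Failsafe: keep charging drops only while dropped bytes stay at
	 * or below half of delivered bytes, i.e. at most one third of the
	 * total.  Beyond that point, stop, so goodput doesn't collapse
	 * any further.
	 */
	if (s->dropped_bytes * 2 <= s->delivered_bytes)
		s->charged_bytes += len;
}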
However, it might be worth revisiting an old question once raised over fq_codel's use of a fixed set of Codel parameters regardless of active flow count. It was then argued that the delay target wasn't dependent on the flow count.

But when the flow count is high, a fixed delay target plus the baseline latency might end up requiring a lower BDP per flow than the sender is able to select as a congestion window (typical TCPs have a hard lower limit of 4x MSS). In that case, packets are currently being dropped with no effect on the send rate. This wouldn't matter with ECN, of course.

So a better fix might be to adjust the target latency according to the number of active bulk flows. Fortunately for performance, this should be a multiply, not a division.
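As a rough sketch of what I mean (the names and the exact scaling rule are illustrative, not an actual Cake patch): precompute, whenever the shaped rate changes, the time one flow's minimum cwnd of 4x MSS occupies the link, so that the per-update work is a single multiply by the bulk flow count.

/* Illustrative sketch only - not an actual Cake patch. */
#include <stdint.h>

#define MSS_BYTES	1500ULL
#define MIN_CWND_SEGS	4ULL		/* typical TCP floor: 4 x MSS */
#define NSEC_PER_SEC	1000000000ULL

/*
 * Time (ns) that one flow's minimum cwnd occupies the shaped link.
 * The division happens only here, when the rate is (re)configured.
 */
static uint64_t min_cwnd_time_ns(uint64_t rate_bps)
{
	return MIN_CWND_SEGS * MSS_BYTES * 8ULL * NSEC_PER_SEC / rate_bps;
}

/*
 * Scale the Codel target with the number of active bulk flows: never
 * below the configured base target, and a single multiply per update.
 */
static uint64_t scaled_target_ns(uint64_t base_target_ns,
				 uint32_t bulk_flows,
				 uint64_t per_flow_ns)
{
	uint64_t floor_ns = (uint64_t)bulk_flows * per_flow_ns;

	return floor_ns > base_target_ns ? floor_ns : base_target_ns;
}

For example, at 10 Mbit/s that works out to about 4.8 ms per flow, so with twenty non-ECN bulk flows the achievable standing queue is roughly 96 ms, far above the default 5 ms target, which is why dropping harder can't help there.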

- Jonathan Morton
