From: Toke Høiland-Jørgensen
To: Jonathan Morton
Cc: cake@lists.bufferbloat.net
Date: Wed, 18 Apr 2018 16:30:05 +0200
Subject: Re: [Cake] A few puzzling Cake results
Message-ID: <87r2nc1taq.fsf@toke.dk>
In-Reply-To: <578552B2-5127-451A-AFE8-93AE9BB07368@gmail.com>

Jonathan Morton writes:

>> On 18 Apr, 2018, at 2:25 pm, Toke Høiland-Jørgensen wrote:
>>
>> So if there is one active bulk flow, we allow each flow to queue four
>> packets. But if there are ten active bulk flows, we allow *each* flow to
>> queue *40* packets.
>
> No - because the drain rate per flow scales inversely with the number
> of flows, we have to wait for 40 MTUs' serialisation delay to get 4
> packets out of *each* flow.

Ah right, yes. Except it's not 40 MTUs, it's 40 quantums (since each flow
only dequeues a packet every MTU/quantum rounds of the scheduler).

> Without that, we can end up with very high drop rates which, in
> ingress mode, don't actually improve congestion on the bottleneck link
> because TCP can't reduce its window below 4 MTUs, and it's having to
> retransmit all the lost packets as well. That loses us a lot of
> goodput for no good reason.

I can sort of see the point of not dropping packets that won't cause the
flow to decrease its rate *in ingress mode*. But this is also enabled in
egress mode, where it doesn't make sense. Also, the minimum TCP window is
two packets, and that includes packets in flight but not yet queued; so
allowing four packets at the bottleneck is way excessive.

> So I do accept the increase in intra-flow latency when the flow count
> grows beyond the link's capacity to cope.

TCP will always increase its bandwidth above the link's capacity to cope.
That's what TCP does.

> It helps us keep the inter-flow induced latency low

What does this change have to do with inter-flow latency?

> while maintaining bulk goodput, which is more important.

No, it isn't! Accepting a factor-of-four increase in latency to gain a few
percent of goodput in an edge case is how we got into this whole
bufferbloat mess in the first place...

-Toke
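
For concreteness, here is a rough back-of-the-envelope sketch of the delay
scaling discussed above: with a per-flow allowance of 4 packets and a
DRR-style scheduler handing each backlogged flow roughly one quantum per
round, the worst-case intra-flow queueing delay grows linearly with the
number of bulk flows. The flow counts, quantum size and link rate below are
assumed example values, not numbers taken from the thread.

    # Illustrative sketch only; parameter values are assumptions, not
    # measurements from the thread or constants from sch_cake.
    def intra_flow_delay(flows, per_flow_limit=4, quantum_bytes=1514,
                         rate_bps=10e6):
        """Seconds needed to drain `per_flow_limit` packets from one flow
        while `flows` backlogged flows each get one quantum per round."""
        rounds = per_flow_limit                    # ~one packet per round per flow
        bytes_serialised = rounds * flows * quantum_bytes
        return bytes_serialised * 8 / rate_bps

    for n in (1, 10, 32):
        print(f"{n:2d} flows: {intra_flow_delay(n) * 1000:.1f} ms")

At an assumed 10 Mbit/s this gives roughly 4.8 ms for a single flow but
about 48 ms for ten flows, which is the "4 packets per flow becomes 40
quantums of serialisation" effect the thread is arguing about.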