From: Toke Høiland-Jørgensen
To: Jonas Mårtensson
Cc: Jonathan Morton, Cake List
Subject: Re: [Cake] A few puzzling Cake results
Date: Wed, 18 Apr 2018 15:21:22 +0200

Jonas Mårtensson writes:

> On Wed, Apr 18, 2018 at 1:25 PM, Toke Høiland-Jørgensen wrote:
>
>> Toke Høiland-Jørgensen writes:
>>
>> > Jonathan Morton writes:
>> >
>> >>> On 17 Apr, 2018, at 12:42 pm, Toke Høiland-Jørgensen wrote:
>> >>>
>> >>> - The TCP RTT of the 32 flows is *way* higher for Cake. FQ-CoDel
>> >>> controls TCP flow latency to around 65 ms, while for Cake it is all
>> >>> the way up around the 180ms mark. Is the Codel version in Cake too
>> >>> lenient, or what is going on here?
>> >>
>> >> A recent change was to increase the target dynamically so that at
>> >> least 4 MTUs per flow could fit in each queue without AQM activity.
>> >> That should improve throughput in high-contention scenarios, but it
>> >> does come at the expense of intra-flow latency when it's relevant.
>> >
>> > Ah, right, that might explain it. In the 128 flow case each flow has
>> > less than 100 Kbps available to it, so four MTUs are going to take a
>> > while to dequeue...
>>
>> OK, so I went and looked at the code and found this:
>>
>>     bool over_target = sojourn > p->target &&
>>                        sojourn > p->mtu_time * bulk_flows * 4;
>>
>> Which means that we scale the allowed sojourn time for each flow by the
>> time of four packets *times the number of bulk flows*.
>>
>> So if there is one active bulk flow, we allow each flow to queue four
>> packets. But if there are ten active bulk flows, we allow *each* flow to
>> queue *40* packets.
>
> I'm confused. Isn't the sojourn time for a packet a result of the
> total number of queued packets from all flows? If each flow were
> allowed to queue 40 packets, the sojourn time would be mtu_time *
> bulk_flows * 40, no?

No, the 40 in my example came from the bulk_flows multiplier.
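To make the arithmetic concrete, here is a small stand-alone sketch of the same threshold. This is not the cake code itself; the 10 Mbit/s rate, 1500-byte MTU, 5 ms target and flow counts are just assumed example numbers:

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative only: example link rate and MTU, not values taken from cake. */
    #define LINK_RATE_BPS   10000000ULL   /* 10 Mbit/s */
    #define MTU_BYTES       1500ULL

    int main(void)
    {
        /* Serialisation time of one MTU-sized packet, in microseconds. */
        uint64_t mtu_time_us = MTU_BYTES * 8 * 1000000ULL / LINK_RATE_BPS; /* ~1200 us */
        uint64_t target_us = 5000;  /* 5 ms, the usual CoDel target */

        for (int bulk_flows = 1; bulk_flows <= 32; bulk_flows *= 2) {
            /* The check under discussion: the sojourn time must exceed
             * both the static target and 4 MTU-times *per bulk flow*. */
            uint64_t scaled_us = mtu_time_us * bulk_flows * 4;
            uint64_t threshold_us = scaled_us > target_us ? scaled_us : target_us;

            printf("%2d bulk flows -> AQM only reacts above %llu us sojourn time\n",
                   bulk_flows, (unsigned long long)threshold_us);
        }
        return 0;
    }

With those assumed numbers mtu_time comes out at about 1.2 ms, so 32 bulk flows already push the effective threshold past 150 ms, which is in the same ballpark as the ~180 ms TCP RTT reported for the 32-flow test above.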
Basically, the current code scales the AQM target by the number of active flows, so the less effective bandwidth a flow has available, the more lenient the AQM becomes. That is wrong; the AQM should signal a flow to slow down when it exceeds its available bandwidth and starts building a queue. So if the available bandwidth decreases (because more flows are sharing it), the AQM is *expected* to react by sending more "slow down" signals (i.e. dropping more packets).

-Toke
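To illustrate the difference being argued for, here is a minimal stand-alone sketch contrasting the scaled check quoted earlier with a non-scaled variant. The names, the "fixed" alternative and the example numbers are assumptions made for the sake of the argument, not actual cake code or a proposed patch:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Current behaviour: the allowed sojourn time grows with the number
     * of bulk flows, so more contention means a more lenient AQM. */
    static bool over_target_scaled(uint64_t sojourn, uint64_t target,
                                   uint64_t mtu_time, uint32_t bulk_flows)
    {
        return sojourn > target && sojourn > mtu_time * bulk_flows * 4;
    }

    /* The behaviour argued for above: keep a fixed floor of a few
     * MTU-times so a lone flow is not starved, but do not scale it with
     * contention, letting the AQM push back harder as per-flow
     * bandwidth shrinks. */
    static bool over_target_fixed(uint64_t sojourn, uint64_t target,
                                  uint64_t mtu_time)
    {
        return sojourn > target && sojourn > mtu_time * 4;
    }

    int main(void)
    {
        /* Example numbers only: ~1.2 ms MTU-time, 5 ms target, 32 bulk
         * flows, and a packet that has waited 60 ms. */
        uint64_t mtu_time = 1200, target = 5000, sojourn = 60000;
        uint32_t bulk_flows = 32;

        printf("scaled check signals the flow: %s\n",
               over_target_scaled(sojourn, target, mtu_time, bulk_flows) ? "yes" : "no");
        printf("fixed check signals the flow:  %s\n",
               over_target_fixed(sojourn, target, mtu_time) ? "yes" : "no");
        return 0;
    }

With a 60 ms sojourn time and 32 bulk flows, the scaled check stays silent while the fixed one already signals the flow to slow down, which is exactly the difference in behaviour described above.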