From mboxrd@z Thu Jan 1 00:00:00 1970
From: Toke Høiland-Jørgensen
To: Jonathan Morton
Cc: cake@lists.bufferbloat.net
In-Reply-To: <874lk9533l.fsf@toke.dk>
References: <87vacq419h.fsf@toke.dk> <874lk9533l.fsf@toke.dk>
Date: Wed, 18 Apr 2018 13:25:30 +0200
X-Clacks-Overhead: GNU Terry Pratchett
Message-ID: <87604o3get.fsf@toke.dk>
Subject: Re: [Cake] A few puzzling Cake results
List-Id: Cake - FQ_codel the next generation

Toke Høiland-Jørgensen writes:

> Jonathan Morton writes:
>
>>> On 17 Apr, 2018, at 12:42 pm, Toke Høiland-Jørgensen wrote:
>>>
>>> - The TCP RTT of the 32 flows is *way* higher for Cake. FQ-CoDel
>>> controls TCP flow latency to around 65 ms, while for Cake it is all
>>> the way up around the 180ms mark. Is the Codel version in Cake too
>>> lenient, or what is going on here?
>> A recent change was to increase the target dynamically so that at
>> least 4 MTUs per flow could fit in each queue without AQM activity.
>> That should improve throughput in high-contention scenarios, but it
>> does come at the expense of intra-flow latency when it's relevant.
>
> Ah, right, that might explain it. In the 128 flow case each flow has
> less than 100 Kbps available to it, so four MTUs are going to take a
> while to dequeue...

OK, so I went and looked at the code and found this:

    bool over_target = sojourn > p->target &&
                       sojourn > p->mtu_time * bulk_flows * 4;

Which means that we scale the allowed sojourn time for each flow by the
time of four packets *times the number of bulk flows*. So if there is
one active bulk flow, we allow each flow to queue four packets. But if
there are ten active bulk flows, we allow *each* flow to queue *40*
packets.

This completely breaks the isolation of different flows, and makes the
scaling of Cake *worse* than plain CoDel. So why on earth would we do
that?

-Toke