From: Toke Høiland-Jørgensen
To: Jonathan Morton
Cc: Luca Muscariello, Cake List
Subject: Re: [Cake] A few puzzling Cake results
Date: Thu, 19 Apr 2018 12:33:22 +0200
Message-ID: <87muxzzdsd.fsf@toke.dk>

Jonathan Morton writes:

>>>> your solution significantly hurts performance in the common case
>>>
>>> I'm sorry - did someone actually describe such a case? I must have
>>> missed it.
>>
>> I started this whole thread by pointing out that this behaviour
>> results in the delay of the TCP flows scaling with the number of
>> active flows; and that for 32 active flows (on a 10Mbps link), this
>> results in the latency being three times higher than for FQ-CoDel on
>> the same link.
>
> Okay, so intra-flow latency is impaired for bulk flows sharing a
> relatively low-bandwidth link. That's a metric which few people even
> know how to measure for bulk flows, though it is of course important
> for sparse flows. I was hoping you had a common use-case where
> *sparse* flow latency was impacted, in which case we could actually
> discuss it properly.
>
> But *inter-flow* latency is not impaired, is it? Nor intra-sparse-flow
> latency? Nor packet loss, which people often do measure (or at least
> talk about measuring) - quite the opposite? Nor goodput, which people
> *definitely* measure and notice, and is influenced more strongly by
> packet loss when in ingress mode?

As I said, I'll run more tests and post more data once I have time.

> The measurement you took had a baseline latency in the region of 60ms.

The baseline link latency is 50 ms, which is sorta what you'd expect
from a median non-CDN'ed internet connection.

> That's high enough for a couple of packets per flow to be in flight
> independently of the bottleneck queue.

Yes. As is the case for most flows going over the public internet...
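(Back-of-envelope, assuming 1514-byte packets: 10 Mbit/s × 50 ms is
~62.5 KB of bandwidth-delay product, i.e. about 41 full-size packets
in flight, or a bit more than one packet per flow with 32 flows
active.)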
> I would take this argument more seriously if a use-case that mattered
> was identified.

Use cases where intra-flow latency matters, off the top of my head:

- Real-time video with congestion response
- Multiple connections multiplexed over a single flow (HTTP/2 or
  QUIC-style)
- Anything that behaves more sanely than TCP at really low bandwidths.

But yeah, you're right, no one uses any of those... /s

> So far, I can't even see a coherent argument for making this tweak
> optional (which is of course possible), let alone removing it
> entirely; we only have a single synthetic benchmark which shows one
> obscure metric move in the "wrong" direction, versus a real use-case
> identified by an actual user in which this configuration genuinely
> helps.

And I've been trying to explain why you are the one optimising for
pathological cases at the expense of the common case. But I don't
think we are going to agree based on a theoretical discussion. So
let's just leave this, and I'll return with some data once I've had a
chance to run some actual tests of the different use cases.

-Toke
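P.S. To make the scaling argument concrete, here's a toy model (a
sketch of the argument, not the actual test setup): assume every bulk
flow keeps one full-size packet queued at the bottleneck, so a packet
that just missed its turn waits roughly one full scheduler round.

    # Toy model: added intra-flow queueing delay at a 10 Mbit/s
    # bottleneck when each of N bulk flows keeps one full-size
    # (1514-byte) packet queued, so one round serves N packets.
    PKT_BITS = 1514 * 8

    def round_delay_ms(n_flows, rate_bps=10e6):
        return n_flows * PKT_BITS / rate_bps * 1e3

    for n in (1, 8, 16, 32):
        print(f"{n:2d} flows: ~{round_delay_ms(n):4.1f} ms")

That gives ~1.2 ms of queueing for a single flow and ~38.8 ms for 32
flows, on top of the 50 ms base RTT. The exact numbers obviously
depend on the AQM behaviour, but the linear scaling with the number of
active flows is the point.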