From: Toke Høiland-Jørgensen
To: Jonathan Morton, David Lang
Cc: cake@lists.bufferbloat.net
Subject: Re: [Cake] A few puzzling Cake results
Date: Thu, 19 Apr 2018 11:22:18 +0200
Message-ID: <871sfb1rg5.fsf@toke.dk>

Jonathan Morton writes:

>>> I'm saying that there's a tradeoff between intra-flow induced
>>> latency and packet loss, and I've chosen 4 MTUs as the operating
>>> point.
>>
>> Is there a reason for picking 4 MTUs vs 2 MTUs vs 2 packets, etc?
>
> To be more precise, I'm using a sojourn time equivalent to 4 MTU-sized
> packets per bulk flow at line rate, as a modifier to existing AQM
> behaviour.
>
> The worst case for packet loss within the AQM occurs when the inherent
> latency of the links is very low but the available bandwidth per flow
> is also low. This is easy to replicate using a test box flanked by
> GigE links to endpoint hosts; GigE has sub-millisecond inherent
> delays. In this case, the entire BDP of each flow exists within the
> queue.
>
> A general recommendation exists for TCP to use a minimum of 4 packets
> in flight, in order to keep the ack-clock running smoothly in the face
> of packet losses which might otherwise trigger an RTO (retransmit
> timeout). This allows one packet to be lost and detected by the
> triple-repetition ACK method, without SACK.

But for triple-dupack to work, you actually need to drop packets (the
first one, to be precise), not let them sit in a bloated queue until the
RTO fires. If you turn off the AQM entirely for the first four packets,
it is only going to activate when the fifth packet arrives, resulting in
a tail loss and... an RTO!

-Toke
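
For reference, the arithmetic behind the 4-MTU floor Jonathan describes
can be sketched roughly as follows. This is only an illustration of the
calculation, not the actual sch_cake/COBALT code; sojourn_floor_us,
aqm_may_drop and their parameters are made-up names for the example.

/* Sketch of a per-flow sojourn-time floor equivalent to four MTU-sized
 * packets serialized at the flow's share of the shaped line rate.
 * Illustration only; not taken from sch_cake.  Build with: cc floor.c
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Microseconds needed to serialize four MTUs at this flow's share of
 * the line rate. */
static uint64_t sojourn_floor_us(uint32_t mtu, uint64_t rate_bps,
                                 uint32_t bulk_flows)
{
        uint64_t flow_rate = rate_bps / (bulk_flows ? bulk_flows : 1);

        return (4ULL * mtu * 8 * 1000000) / (flow_rate ? flow_rate : 1);
}

/* Below the floor, leave the packet alone; above it, fall back to the
 * usual CoDel-style comparison against the target. */
static bool aqm_may_drop(uint64_t sojourn_us, uint64_t target_us,
                         uint64_t floor_us)
{
        if (sojourn_us <= floor_us)
                return false;
        return sojourn_us > target_us;
}

int main(void)
{
        /* One bulk flow with a 1500-byte MTU. */
        uint64_t gige = sojourn_floor_us(1500, 1000000000ULL, 1);
        uint64_t slow = sojourn_floor_us(1500, 10000000ULL, 1);

        printf("floor @ 1 Gbit/s:  %llu us\n", (unsigned long long)gige);
        printf("floor @ 10 Mbit/s: %llu us\n", (unsigned long long)slow);
        printf("drop at 3 ms sojourn, 5 ms target, 10 Mbit/s: %d\n",
               aqm_may_drop(3000, 5000, slow));
        return 0;
}

With a single bulk flow and a 1500-byte MTU the floor works out to about
48 microseconds at 1 Gbit/s but 4.8 ms at 10 Mbit/s, i.e. it matters most
in exactly the low-bandwidth, low-inherent-latency regime Jonathan
describes, where a flow's whole BDP sits in the queue.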