From: Toke Høiland-Jørgensen
To: Greg White, Dave Taht, tsvwg@ietf.org, bloat@lists.bufferbloat.net
Subject: Re: [Bloat] quick review and rant of "Identifying and Handling Non Queue Building Flows in a Bottleneck Link"
Date: Thu, 01 Nov 2018 14:25:01 +0100
Message-ID: <874ld1q6aa.fsf@toke.dk>
In-Reply-To: <4F59C958-0AF9-4531-B700-0A64572E22CF@cablelabs.com>

Greg White writes:

> Hi Toke, thanks for the pointer to your paper, I had not seen it
> before.

You're welcome :)

> I agree that could be a candidate algorithm. It is certainly simple.
> It may not be the only (or perhaps even the best) solution for the
> dual-queue case though. I'm thinking in the context of an L4S TCP
> flow, which can respond quickly to "new" ECN markings and achieve
> link saturation with ultra low (but not zero) queuing delay. A good
> property for a queue protection algorithm would be that these L4S
> flows could be consistently placed in the NQB queue. I think that the
> simple approach you mentioned would result in many L4S flows being
> deemed Queue Building.

Yes, I think you are right (depending on the traffic mix, of course).
It might be possible to tweak it to work better, though, e.g. by
changing the threshold (moving flows to QB if they end up with more
than X ms of queue). This would only work if all flows start out as
NQB, with the associated aggressive marking behaviour; so I'm not sure
a normal TCP flow would ever manage to get to the QB state before
getting clobbered by the NQB markings...
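For concreteness, a minimal sketch of such a threshold check follows.
The structure and function names, the 1 ms value for X, and the
demote-only policy are illustrative assumptions for this sketch, not
an existing implementation or anything taken from the drafts.

/* Sketch: every flow starts out in the NQB (low-latency) queue and is
 * demoted to the QB queue once the queueing delay its packets
 * experience exceeds a threshold. All names and the 1 ms threshold
 * are assumptions for illustration only. */

#include <stdbool.h>
#include <stdint.h>

#define NQB_DELAY_THRESH_NS (1 * 1000 * 1000ULL)  /* X = 1 ms, tunable */

enum flow_state { FLOW_NQB, FLOW_QB };

struct flow {
        enum flow_state state;
        /* hash key, per-flow stats, etc. would go here */
};

/* Called when a packet of this flow is dequeued; sojourn_ns is how
 * long the packet sat in the queue (now minus enqueue timestamp). */
static void flow_update_state(struct flow *f, uint64_t sojourn_ns)
{
        if (f->state == FLOW_NQB && sojourn_ns > NQB_DELAY_THRESH_NS) {
                /* The flow is building a queue; demote it so it stops
                 * hurting the latency of well-behaved NQB flows. */
                f->state = FLOW_QB;
        }
        /* No promotion path here; a real mechanism would presumably
         * let a flow age back to NQB once it stops building a queue. */
}

/* Enqueue-side classification: NQB flows go to the low-latency queue
 * (with its aggressive marking), demoted flows go to the QB queue. */
static bool enqueue_to_nqb_queue(const struct flow *f)
{
        return f->state == FLOW_NQB;
}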
> Additionally, we've observed applications that send variable sized
> "messages" at a fixed rate (e.g. 20 messages/second) where the
> message sometimes exceeds the MTU and results in two closely spaced
> (possibly back-to-back) packets. This is a flow that I think should
> be considered to be NQB, but would get flagged as QB by the simple
> approach. You described this case in your paper, where you state
> that the first Q bytes of each burst will be treated as NQB (the
> first packet in the case we're talking about here), but the rest
> will be treated as QB. Assuming that message latency is important
> for these sorts of applications, this is equivalent to saying that
> the entire burst is considered as QB. In the fq_codel case, the
> message latency would be something like Q(n-1)(N+1)/R (assuming no
> other sparse flow arrivals), something like 1.3 ms using the example
> values in your paper (plus n=2, N=10), which may be OK. In the
> dual-queue case it is a bigger deal, because the remaining packets
> would be put at the end of the QB queue, which could have a latency
> of 10 or 20 ms.

Sure, it's by no means a perfect mechanism. But it makes up for that
by its simplicity, IMO. And it does work really well for *a lot* of
today's latency-sensitive traffic. (In your case of two-MTU messages,
you could tune the quantum to allow those; but of course you can
construct examples that won't work.)

> So, a queue protection function that provides a bit more (but still
> limited) allowance for a flow to have packets in queue would likely
> work better in the dual-queue case.

Yeah, that's tricky, especially if you want it to be very accurate in
its distinction; which I sort of gather you do, right?

-Toke
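(For concreteness, the arithmetic behind the ~1.3 ms figure quoted
above, written out as a worked example. Q = 1514 bytes and R = 100
Mbit/s are assumed values chosen here to reproduce that number, not
taken from the paper; n = 2 and N = 10 are the values from the mail.)

\[
  T_{\mathrm{msg}} \approx \frac{Q\,(n-1)(N+1)}{R}
    = \frac{1514 \cdot 8 \cdot (2-1) \cdot (10+1)\ \mathrm{bit}}
           {100 \times 10^{6}\ \mathrm{bit/s}}
    \approx 1.33\ \mathrm{ms}
\]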