From: Toke Høiland-Jørgensen
To: Matthias Tafelmeier, bloat@lists.bufferbloat.net
Subject: Re: [Bloat] generic tcp - window scaling limits
Date: Sat, 04 Nov 2017 23:05:47 +0100
Message-ID: <877ev5n1hg.fsf@toke.dk>
In-Reply-To: <3f789193-f491-8313-5f10-ef1bf73684f2@gmx.net>

Matthias Tafelmeier writes:

> Hello,
>
> Before bringing this forward to the Linux netdev list, and at the risk of
> restating something, I wanted to get your take on it here, since it is
> also bufferbloat-related when looked at from the link-clogging/flow
> smoothness point of view.
>
> I initially surmised that introducing some form of DQL for the TCP buffer
> hard limit (as BQL does for driver rings) could improve perceived flow
> latency, though the current hard limit turns out to be perfectly adequate
> in conjunction with window advertisement/scaling.
>
> This is what I measured:
>
> https://matthias0tafelmeier.wordpress.com/2017/08/24/linux-tcp-window-scaling-quantification-rmemwmem/

Erm, what exactly are you trying to show here? As far as I can tell from
the last (1-flow) plot, you are saturating the link in all the tests (and
indeed, the BDP for a 1 Gbps link with a 2 ms RTT is around 250 KB), which
means that the TCP flow is limited by cwnd and not rwnd; so I'm not sure
you are really testing what you say you are.

I'm not sure why the latency varies between the different tests, though;
are you sure there isn't something else varying?

Have you tried running Flent with the --socket-stats option and taking a
look at the actual window each flow is using?

-Toke
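
For context on the "hard limit" discussed above: on Linux it is the
per-socket buffer cap configured through the tcp_rmem/tcp_wmem sysctls,
which in turn bounds how large a window can be advertised or used. A
minimal illustration (the numbers shown are common distribution defaults,
not values taken from this thread):

  # min, default, max buffer sizes in bytes; the max caps how far the
  # kernel will auto-tune the socket buffers, and thus the usable window
  net.ipv4.tcp_rmem = 4096 87380 6291456
  net.ipv4.tcp_wmem = 4096 16384 4194304
  # window scaling must be enabled for windows larger than 64 KB
  net.ipv4.tcp_window_scaling = 1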
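
To make the 250 KB figure explicit, here is a back-of-the-envelope check;
only the 1 Gbps rate and 2 ms RTT are taken from the discussion above:

  # bandwidth-delay product for the path described above
  rate_bps = 1e9        # 1 Gbit/s link
  rtt_s = 2e-3          # 2 ms round-trip time
  bdp_bytes = rate_bps * rtt_s / 8
  print(bdp_bytes)      # 250000.0, i.e. ~250 KB

So a window of roughly 256 KB is already enough to keep such a path full;
raising the buffer limits beyond that cannot add throughput, it mainly
allows more data to sit queued in flight.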
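
As for the --socket-stats suggestion, a sketch of the kind of invocation
meant (the test name, server and title here are placeholders adapted to a
setup like the one in the blog post, not taken from it):

  flent tcp_nup --socket-stats -l 60 -H <netperf-server> -t rwin-check

With --socket-stats, Flent periodically captures socket statistics (via ss)
for the test flows, so the per-flow window and cwnd values can be inspected
afterwards, for example by loading the resulting .flent.gz data file in the
Flent GUI (flent-gui).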