Date: Sat, 07 Sep 2019 20:03:50 -0400
From: "Justin Kilpatrick" <justin@althea.net>
To: "Jonathan Morton"
Cc: cake@lists.bufferbloat.net
Subject: Re: [Cake] Fighting bloat in the face of uncertainty

Sadly this isn't a driver; it's a point-to-point wireless device that often seems designed to introduce bloat. I could probably ssh in and configure it to behave properly, but that's not very scalable. Dramatically underestimating link capacity isn't an option either: no matter how buttery-smooth the experience, people still crave those high speedtest numbers.
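(For reference, the kind of shaper deployment under discussion looks roughly like the following. This is only a sketch, assuming a Linux/OpenWrt router using the sch_cake qdisc; the interface name and rate are illustrative, not taken from this thread.)

    # Shape below the radio's advertised rate so the standing queue
    # builds in Cake, which can manage it, rather than in the radio's FIFO
    tc qdisc replace dev eth0 root cake bandwidth 80mbit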
> In fact I'm unsure as to why changing the AQM parameters would cure it.
> You may have benefited from an unintentional second-order effect which
> we normally try to eliminate, when the 'target' parameter gets too
> close to the CPU scheduling latency of the kernel.

So you believe that setting the target RTT closer to the path latency was not the main contributor to reducing bloat? Is there a configuration I could use to demonstrate that one way or the other? (A possible A/B test is sketched after the quoted message below.)

-- 
Justin Kilpatrick
justin@althea.net

On Sat, Sep 7, 2019, at 7:42 PM, Jonathan Morton wrote:
> > On 8 Sep, 2019, at 2:31 am, Justin Kilpatrick wrote:
> > 
> > If I set a throughput that's 50% too high, should it still help? In my testing it didn't seem to.
> 
> In that case you would be relying on backpressure from the network
> interface to cause queuing to actually occur in Cake rather than in the
> driver or hardware (which would almost certainly be a dumb FIFO). If
> the driver doesn't implement BQL, that would easily explain 300ms of
> bloat.
> 
> In fact I'm unsure as to why changing the AQM parameters would cure it.
> You may have benefited from an unintentional second-order effect which
> we normally try to eliminate, when the 'target' parameter gets too
> close to the CPU scheduling latency of the kernel.
> 
> I generally find it's better to *underestimate* the bandwidth parameter
> by 50% than the reverse, simply to keep the queue out of the dumb
> hardware. But if you want to try implementing BQL in the relevant
> drivers, go ahead.
> 
> - Jonathan Morton
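(On the configuration question above: one possible A/B test, sketched under the assumption of a Linux router running sch_cake; the interface name and rates are illustrative. Cake derives its AQM 'target' from the rtt parameter, so holding the bandwidth parameter constant while varying only rtt isolates the effect Jonathan describes.)

    # Configuration A: Cake's default Internet-scale target
    tc qdisc replace dev eth0 root cake bandwidth 80mbit rtt 100ms

    # Configuration B: target tuned toward the measured path latency
    tc qdisc replace dev eth0 root cake bandwidth 80mbit rtt 20ms

    # Measure latency under load for each (e.g. with flent's rrul test);
    # if B reduces bloat while bandwidth stays fixed, the target change
    # rather than the shaper rate is doing the work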
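(On the BQL point: a rough way to check whether a driver is actually feeding byte counts into BQL, assuming CONFIG_BQL is enabled in the kernel and eth0 is the interface in question.)

    # BQL state is exposed per TX queue via sysfs
    ls /sys/class/net/eth0/queues/tx-0/byte_queue_limits/
    # hold_time  inflight  limit  limit_max  limit_min

    # Under sustained transmit load, a BQL-aware driver reports a
    # non-zero inflight byte count; a constant zero suggests the
    # driver never calls the netdev_tx_sent_queue() hooks
    watch -n1 cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/inflight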