From: Toke Høiland-Jørgensen
To: Joachim Bodensohn, make-wifi-fast@lists.bufferbloat.net
Cc: Sebastian Limberg
Date: Tue, 07 Sep 2021 21:40:16 +0200
Message-ID: <87eea0p42n.fsf@toke.dk>
Subject: Re: [Make-wifi-fast] Scaling airtime weight in dynamic mode

Joachim Bodensohn via Make-wifi-fast writes:

> Hello there,
>
> we did some tests with airtime scheduling in dynamic mode and found
> that the dynamically calculated weights (256, 512, 768, ...) had no
> effect on airtime scheduling or the resulting traffic throughput.
>
> The issue seems similar to the one reported in
> https://lists.bufferbloat.net/pipermail/make-wifi-fast/2020-September/002933.html
> because when we ran tests in static mode, we had to scale the weights
> up to higher values, e.g. 45000 and 15000, and set an AQL threshold
> of 5000 with per-AC limits of 0/5000 to observe the expected outcome.
>
> It seems that one needs some scaling factor in dynamic mode which
> scales the results of the dynamic weight calculation up to higher
> values. Does someone have an idea how to solve this problem, e.g. by
> scaling the dynamically calculated weights to appropriate values in
> dynamic mode?

AFAICT you're running TCP-based tests, right? What usually happens with
TCP is that the queues oscillate between full and empty, which means
the AP often only has a single backlogged station, and that doesn't
give it much to enforce fairness between.

I *think* the reason scaling up all the weights works is that it
basically corresponds to raising the quantum of the round-robin
scheduler, so each station is allowed to keep transmitting for longer
before the scheduler moves on to the next one (thus giving the queues
time to fill back up).
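To make the quantum point concrete, here's a toy model (plain C,
simplified from memory, not the actual kernel code) of the deficit
round-robin the old scheduler implemented: each station's weight is its
quantum, so scaling all the weights by the same factor multiplies the
airtime every station may spend per rotation, and with it the time the
queues get to refill:

#include <stddef.h>
#include <stdint.h>

struct sta {
	int64_t deficit;   /* remaining airtime credit, in usec */
	uint32_t weight;   /* DRR quantum; the dynamic base is 256 */
};

/* Pick the next station allowed to transmit, starting the rotation at
 * *head. A station with positive deficit may transmit; one that has
 * run out gets one quantum (its weight) added and the rotation moves
 * past it. The walk is bounded at 2*n steps so this toy terminates;
 * the real code tracks a full rotation instead. */
static struct sta *next_station(struct sta *stas, size_t n, size_t *head)
{
	size_t step;

	for (step = 0; step < 2 * n; step++) {
		struct sta *sta = &stas[*head];

		if (sta->deficit > 0)
			return sta;

		sta->deficit += sta->weight;   /* refill one quantum */
		*head = (*head + 1) % n;       /* rotate to the next one */
	}
	return NULL;
}

/* After a transmission completes, charge the airtime it used. */
static void charge_airtime(struct sta *sta, uint32_t airtime_usec)
{
	sta->deficit -= (int64_t)airtime_usec;
}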
Lowering the AQL threshold, meanwhile, pushes the queues up into the
stack from the firmware, also giving the scheduler more to push back
on. With the settings you quoted, you're basically setting a total
limit for the interface of 5 ms of airtime outstanding. I'd be
interested to hear whether you've seen any ill effects on total
throughput when doing this (disregarding the fairness results)?

As for the test itself, it would be great if you could try it on a
5.14 kernel as well. We reworked the fairness scheduler there, in this
commit:

https://git.kernel.org/torvalds/c/2433647bc8d9

As a result, there's no longer a round-robin scheduler whose quantum
you could increase, so I don't think scaling the weights will make
much difference there. On the other hand, the new scheduler uses a
time-based notion of when a station was last active, which may resolve
your initial issue without any scaling applied...
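In case it's useful, here's a toy model (again plain C, simplified
from memory; the real code also syncs a waking station's clock against
a global virtual time, and keeps stations in an rbtree instead of
scanning) of what the new scheduler boils down to. Each station
carries a virtual clock that advances by used airtime divided by its
weight, and the station with the lowest clock is served next, so
multiplying every weight by a common factor slows all the clocks
equally and drops out of the schedule entirely:

#include <stddef.h>
#include <stdint.h>

struct sta {
	uint64_t vt;       /* virtual airtime consumed so far */
	uint32_t weight;
};

/* Charging divides real airtime by the weight, so a heavier station's
 * virtual clock runs slower and it is scheduled more often. (The real
 * code multiplies by a precomputed reciprocal instead of dividing.) */
static void charge_airtime(struct sta *sta, uint32_t airtime_usec)
{
	sta->vt += airtime_usec / sta->weight;
}

/* Serve the backlogged station with the lowest virtual time. */
static struct sta *next_station(struct sta *stas, size_t n)
{
	struct sta *best = NULL;
	size_t i;

	for (i = 0; i < n; i++)
		if (!best || stas[i].vt < best->vt)
			best = &stas[i];
	return best;
}

-Toke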