* [Make-wifi-fast] Scaling airtime weight in dynamic mode
@ 2021-08-25 11:36 Joachim Bodensohn
2021-09-07 19:40 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 3+ messages in thread
From: Joachim Bodensohn @ 2021-08-25 11:36 UTC (permalink / raw)
To: make-wifi-fast; +Cc: Sebastian Limberg
[-- Attachment #1: Type: text/plain, Size: 1607 bytes --]
Hello there,
we did some tests with airtime scheduling in dynamic mode and found that the dynamically calculated weights (256, 512, 768, ...) had no effect on airtime scheduling or the resulting traffic throughput.
The issue seems similar to the one reported in https://lists.bufferbloat.net/pipermail/make-wifi-fast/2020-September/002933.html: when we tested static mode, we had to scale the weights to much higher values (e.g., 45000 and 15000) and set an AQL threshold of 5000 with per-AC limits of 0/5000 to observe the expected outcome.
It seems that dynamic mode needs a scaling factor that maps the dynamically calculated weights to higher values. Does anyone have an idea how to solve this, e.g., by scaling the dynamically calculated weights to appropriate values in dynamic mode?
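For illustration, a static-mode setup along these lines might look roughly like the following (hostapd airtime policy options plus the mac80211 AQL debugfs knobs; the MAC addresses, phy and interface names are placeholders and may differ on other systems):

  # hostapd.conf: static airtime policy with scaled per-station weights
  # (requires hostapd built with CONFIG_AIRTIME_POLICY)
  airtime_mode=1                                # 1 = static mode
  airtime_sta_weight=02:00:00:00:01:00 45000    # placeholder station MACs
  airtime_sta_weight=02:00:00:00:02:00 15000

  # mac80211 AQL knobs via debugfs (assuming phy0)
  echo 5000 > /sys/kernel/debug/ieee80211/phy0/aql_threshold
  # per-AC limits are written as "<ac> <low> <high>", e.g. AC_BE (2) = 0/5000;
  # repeat for the other ACs as needed
  echo "2 0 5000" > /sys/kernel/debug/ieee80211/phy0/aql_txq_limit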
Cheers,
Joachim
Settings:
Measurements were made with OpenWrt snapshot releases on a PC Engines apu2 with a Compex WLE900VX 802.11ac card (ath10k), as well as on Ubuntu 20.04 with kernel 5.13.0, hostapd 2.9 compiled from source, and a QCA986x/988x 802.11ac card (ath10k).
Measurements were made with four iperf3 probes, each assigned to its own BSS in a multi-BSS setup.
Some results are included in the attachments below.
Attachment 1 shows an example of measurements in dynamic mode.
Attachment 2 shows an example of measurements in static mode.
-------------------------------------------------------------------------------------------
Joachim Bodensohn, Adiccon GmbH, Phone: +49 (0) 6151 500 777 - 30, E-mail: joachim.bodensohn@adiccon.de
[-- Attachment #2: attachment_1_dynamic_mode.pdf --]
[-- Type: application/pdf, Size: 393021 bytes --]
[-- Attachment #3: attachment_2_static_mode.pdf --]
[-- Type: application/pdf, Size: 446196 bytes --]
* Re: [Make-wifi-fast] Scaling airtime weight in dynamic mode
2021-08-25 11:36 [Make-wifi-fast] Scaling airtime weight in dynamic mode Joachim Bodensohn
@ 2021-09-07 19:40 ` Toke Høiland-Jørgensen
[not found] ` <FRYP281MB07664DEB0B039187678294A985DB9@FRYP281MB0766.DEUP281.PROD.OUTLOOK.COM>
0 siblings, 1 reply; 3+ messages in thread
From: Toke Høiland-Jørgensen @ 2021-09-07 19:40 UTC (permalink / raw)
To: Joachim Bodensohn, make-wifi-fast; +Cc: Sebastian Limberg
Joachim Bodensohn via Make-wifi-fast
<make-wifi-fast@lists.bufferbloat.net> writes:
> Hello there,
>
> we did some tests with airtime scheduling in dynamic mode and found
> that the dynamically calculated weights (256, 512, 768, ...) had no
> effect on airtime scheduling or the resulting traffic throughput.
>
> The issue seems similar to the one reported in
> https://lists.bufferbloat.net/pipermail/make-wifi-fast/2020-September/002933.html:
> when we tested static mode, we had to scale the weights to much higher
> values (e.g., 45000 and 15000) and set an AQL threshold of 5000 with
> per-AC limits of 0/5000 to observe the expected outcome.
>
> It seems that dynamic mode needs a scaling factor that maps the
> dynamically calculated weights to higher values. Does anyone have an
> idea how to solve this, e.g., by scaling the dynamically calculated
> weights to appropriate values in dynamic mode?
AFAICT you're running TCP-based tests, right? What usually happens with
TCP is that the queues oscillate between full and empty, which means
the AP often has only a single backlogged station at a time, so there
isn't much for the scheduler to enforce fairness between.
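One way to check this is to keep all stations backlogged at the same time, e.g. with unthrottled UDP flows instead of TCP; a rough sketch (station hostnames and rates are just placeholders):

  # saturate all four stations simultaneously so the AP always has
  # several backlogged queues to arbitrate between
  for host in sta1 sta2 sta3 sta4; do
      iperf3 -c "$host" -u -b 300M -t 60 --json > "result_$host.json" &
  done
  wait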
I *think* that the reason it works to scale up all the weights is that
it basically corresponds to raising the quantum of the round-robin
scheduler, so it spins longer before it lets a station transmit (thus
giving queues time to fill up).
And the lowering of the AQL threshold pushes the queues up into the
stack from the firmware, also giving the scheduler more to push back on.
With the settings you quoted, you're basically setting a total limit for
the interface of 5 ms of airtime outstanding. I'd be interested in
hearing whether you have seen any ill effects on total throughput when
doing this (disregarding any fairness settings)?
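To see what the scheduler is actually doing, you can also look at the per-station airtime accounting in debugfs, something like this (paths assume phy0/wlan0; the exact file contents differ a bit between kernel versions):

  # current AQL settings
  cat /sys/kernel/debug/ieee80211/phy0/aql_threshold
  cat /sys/kernel/debug/ieee80211/phy0/aql_txq_limit

  # per-station airtime usage, weight and deficit
  for sta in /sys/kernel/debug/ieee80211/phy0/netdev:wlan0/stations/*; do
      echo "== $(basename "$sta")"
      cat "$sta/airtime"
  done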
As for the test itself, it would be great if you could try it on a 5.14
kernel as well. We reworked how the fairness scheduler works there, in
this commit:
https://git.kernel.org/torvalds/c/2433647bc8d9
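If you build your own kernel, a quick way to check whether your tree already includes that rework is something like:

  # run inside a Linux git checkout; exits 0 if the commit is an ancestor of HEAD
  git merge-base --is-ancestor 2433647bc8d9 HEAD && echo "airtime scheduler rework present"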
The result of this is that there's no longer a round-robin scheduler to
increase the quantum of, so I don't think the scaling of the weights
will make much difference. On the other hand, it also uses a time-based
notion of when a station was last active, which may help resolve your
initial issue without any scaling applied...
-Toke
Thread overview: 3+ messages
2021-08-25 11:36 [Make-wifi-fast] Scaling airtime weight in dynamic mode Joachim Bodensohn
2021-09-07 19:40 ` Toke Høiland-Jørgensen
[not found] ` <FRYP281MB07664DEB0B039187678294A985DB9@FRYP281MB0766.DEUP281.PROD.OUTLOOK.COM>
[not found] ` <87v9325bzq.fsf@toke.dk>
[not found] ` <FRYP281MB0766177BBFAAB14E1B7AF78385DB9@FRYP281MB0766.DEUP281.PROD.OUTLOOK.COM>
[not found] ` <87mtod5yc3.fsf@toke.dk>
[not found] ` <FRYP281MB076655B75EDBB6470743EDBA85DC9@FRYP281MB0766.DEUP281.PROD.OUTLOOK.COM>
2021-09-21 11:30 ` Joachim Bodensohn