[Make-wifi-fast] [Cake] Flent results for point-to-point Wi-Fi on LEDE/OM2P-HS available

Toke Høiland-Jørgensen toke at toke.dk
Fri Feb 17 04:52:38 EST 2017


Pete Heist <peteheist at gmail.com> writes:

>  On Feb 16, 2017, at 10:03 PM, Sebastian Moeller <moeller0 at gmx.de> wrote:
>
>  On Feb 16, 2017, at 18:19, Jonathan Morton <chromatix99 at gmail.com> wrote:
>
>  In a sense, if there are thresholds for permissible VO/VI traffic fractions below which the AP will not escalate its own priority, this will come close to throttling the high-priority senders, no?
>
>  I thought Aaron’s suggestion sounded both sensible and not difficult to implement. That way we wouldn’t even have to regularly monitor it, and anyone who is marking all their packets thinking they’re doing themselves a favor is just limiting their max throughput.
>
>  Could there be another keyword in Cake to do this automatically, say “fairdiffserv”, or would this just be feature bloat for what is already a sophisticated shaper? I don’t know if there are sensible mappings from DSCP value to max percentage throughput that would work most of
>  the time, or if there could also be an adjustable curve parameter that controls the percentage backoff as you go up DSCP levels.
>
>  This is actually what Cake already does by default (the “diffserv3” mode).  If you look at the detailed statistics (tc -s qdisc), you’ll see that each tin has a “threshold” bandwidth.  If there’s more traffic than that threshold in that tin, the tin will be deprioritised - it can still use all of the
>  bandwidth left spare by other tins’ traffic, but no more than that.
>
>  Additionally, diffserv3 mode uses more aggressive AQM settings on the “voice” tin than the “best effort” tin, on the grounds that the former is a request for minimum latency.  This should also discourage bulk traffic from using unnecessarily high DSCPs.
>
>  However, in both the “besteffort” and “diffserv3” cases, the DSCP may be interpreted independently by the NIC as well as Cake.  In the case of wifi, this affects the medium grant latency and priority.  If the link isn’t saturated, this shouldn’t affect Cake’s prioritisation strategy much if
>  at all, but it does have implications for the effect of other stations sharing the frequency.
>
>  Here is part of the problem: the more aggressive airtime access of the VI and VO classes will massively cut down the actual achievable bandwidth across all classes. And I might add, this effect is not restricted to the AP and station under one’s control: all other stations and APs using
>  the same frequency in the close RF neighborhood will affect the available airtime and hence the achievable bandwidth. If you look at how wifi achieves its higher bandwidth, it is by using longer periods of airtime to make up for the rather fixed-duration “preamble” that consumes time without
>  transmitting user data. VI users in the vicinity will drastically (IIRC) reduce the ability to send those aggregates. In other words, link saturation is partly a function of which AC classes are used and not a nice, fixed entity as it is for typical wired connections. Now, if you can control both
>  sides of your transfer _and_ all other users of the same frequency that are RF-visible to your endpoints, it might be possible to think of a wifi link in similar terms as a wired one, but I would be cautious…
>
> Thanks for that info. In my testing I’m focusing on point-to-point Wi-Fi, but I see the complexity that WMM presents, especially when there's more than one station.
>
> It's perplexing that at least two concerns, packet retransmission and prioritization, occur at multiple layers in the stack. 802.11 ACK frames are sent in response to every data frame (aggregation aside), and retransmission occurs at this layer, but also higher up in the TCP layer. Prioritization
> can occur at the IP layer, but then again at the MAC layer with WMM. This seems to violate separation of concerns, and makes it difficult to ascertain and control what’s going on in the system as a whole.
>
> It feels like WMM went a step too far. There may have been (may still be) valid performance reasons for Wi-Fi to take on such concerns, but as the data rates get higher and processing power increases, it feels like it would be better to have a wireless technology that just delivers frames,
> and to push reliability, prioritization and aggregation back up into the higher layers so that long-term innovation can take place in software. The 802.11 spec is on my reading list, so I might learn if and where this went off the rails.
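
For reference, the cake behaviour Jonathan describes above can be seen
directly: with a cake-enabled tc, something like the following sets up
diffserv3 and then shows the per-tin “threshold” bandwidths in the
statistics (the interface name and shaped rate here are just
placeholders):

  tc qdisc replace dev eth0 root cake bandwidth 20Mbit diffserv3
  tc -s qdisc show dev eth0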

Note that WMM also affects max aggregation sizes: the VO queue doesn't
do aggregation at all, for instance, and the max aggregate size for VI
is smaller than for BE. This *should* be an incentive not to use the
higher-priority queues for bulk traffic.
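
If you want to see that effect in a test, one way (just a sketch; the
ports and DSCP values are arbitrary) is to mark the bulk flows on the
sender with iptables and let the usual DSCP-to-access-category mapping
(top three DSCP bits -> 802.1d priority; 6-7 map to VO, 4-5 to VI, 0
and 3 to BE, 1-2 to BK) pick the WMM queue:

  # push one bulk TCP flow into the VO queue (CS6 -> priority 6 -> VO)
  iptables -t mangle -A POSTROUTING -p tcp --dport 12345 -j DSCP --set-dscp-class CS6
  # and another into VI (CS5 -> priority 5 -> VI)
  iptables -t mangle -A POSTROUTING -p tcp --dport 12346 -j DSCP --set-dscp-class CS5

With no aggregation on VO, the CS6-marked flow should top out well
below an unmarked (BE) flow once the link is busy.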

That being said, I do believe there are issues with how the QoS levels
are currently handled in the Linux WiFi stack, and looking into that in
more detail is on my list somewhere... :)


-Toke

