[Make-wifi-fast] [PATCH v2] mac80211: Move crypto IV generation to after TXQ dequeue.
Toke Høiland-Jørgensen
toke at toke.dk
Mon Aug 22 10:47:32 EDT 2016
Johannes Berg <johannes at sipsolutions.net> writes:
>> Well, we're getting there. The results of both patch attempts were
>> really nice, and brought encrypted performance with fq back into line
>> with unencrypted. Still running encrypted tests as I write...
>>
>> So fixing TKIP would be next, forcing the AP to use that? What other
>> scenarios do we have to worry about? WDS?
>>
>
> I don't think there's anything else; I just don't really feel it's
> getting anywhere. This is a mere symptom of the design.
>
> Felix had worked around the SN assignment in a similar way, but I feel
> that perhaps the whole thing isn't quite the right architecture. Why
> are we applying FQ after the wifi conversion, when clearly that doesn't
> work well? Seems to me that it would make more sense to let the frames
> sit on the queues as they come in, and do most of the wifi handling
> only when needed (obviously, things like control port would still have
> to be done).
I suppose that could be a way to do it (i.e. have ieee80211_tx_dequeue
call all the TX hooks etc.), but I am not sure whether there would be
problems doing all this work in the loop that builds aggregates (which
is what would happen for ath9k, at least).
An alternative could be to split the process into two stages: an
"early" stage that does everything not sensitive to reordering and the
occasional drop, and a "late" stage that does everything that is. The
queueing step could then happen in between the two stages, and the
non-queueing path could just call both stages at once. In effect, this
would make the current work-arounds explicit in the structure, rather
than marking them as exceptions.
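To make that more concrete, here is a rough standalone sketch of the
idea (a userspace toy model, not actual mac80211 code; tx_early(),
tx_late() and the frame/queue bits are made-up names for
illustration): the early stage runs at enqueue time, a plain FIFO
stands in for the FQ/CoDel queue, and the late stage does the
order-sensitive work (SN assignment, IV generation) only at dequeue
time. A non-queueing path would simply call both stages back to back.

#include <stdio.h>
#include <stdlib.h>

struct frame {
	int payload_id;
	int seqno;			/* assigned by the late stage */
	struct frame *next;
};

static struct frame *queue_head, *queue_tail;	/* stand-in for the FQ/CoDel queue */
static int next_seqno;

/* Early stage: everything that tolerates reordering and drops; here it
 * just allocates the frame. */
static struct frame *tx_early(int payload_id)
{
	struct frame *f = calloc(1, sizeof(*f));

	if (!f)
		exit(1);
	f->payload_id = payload_id;
	return f;
}

/* Late stage: everything order-sensitive (e.g. SN assignment, crypto
 * IV generation), done only once the frame is actually picked for
 * transmission. */
static void tx_late(struct frame *f)
{
	f->seqno = next_seqno++;
}

static void enqueue(struct frame *f)
{
	if (queue_tail)
		queue_tail->next = f;
	else
		queue_head = f;
	queue_tail = f;
}

static struct frame *dequeue(void)
{
	struct frame *f = queue_head;

	if (f) {
		queue_head = f->next;
		if (!queue_head)
			queue_tail = NULL;
	}
	return f;
}

int main(void)
{
	struct frame *f;
	int i;

	/* Queueing path: early stage at enqueue time... */
	for (i = 0; i < 4; i++)
		enqueue(tx_early(i));

	/* ...late stage at dequeue time (in the real stack this would be
	 * e.g. the driver's aggregate-building loop), so sequence numbers
	 * stay monotonic in actual transmit order. */
	while ((f = dequeue())) {
		tx_late(f);
		printf("frame %d -> seqno %d\n", f->payload_id, f->seqno);
		free(f);
	}

	/* The non-queueing path would just be: tx_late(tx_early(id)); */
	return 0;
}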
> We even count those packets that are dropped for TX statistics, which
> would seem to be a big behavioural difference vs. applying a qdisc.
While you're right in principle, in practice I don't think this has too
big of an impact. In normal operation, CoDel drops (at most) dozens of
packets per *minute*, so it's not going to skew the statistics too much.
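To put a rough number on that: assuming, say, 5,000 packets per second
on a loaded link, even 60 CoDel drops per minute comes out to one drop
per ~5,000 packets, i.e. around 0.02% of the counted traffic.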
> Now, it's unlikely to be that simple - fragmentation, for example,
> might mess this up.
>
> Overall though, I'm definitely wondering if it should be this way,
> since all the special cases just add complexity.
I agree that the work-arounds are iffy, but I do also think it's
important to keep in mind that we are improving latency by orders of
magnitude here. A few special cases are worth it to achieve that, IMO.
And then iterating towards a design that doesn't need them, of course :)
-Toke