[Make-wifi-fast] [PATCH v3 3/6] mac80211: Add airtime accounting and scheduling to TXQs

Toke Høiland-Jørgensen toke at toke.dk
Mon Nov 19 17:44:43 EST 2018


Dave Taht <dave at taht.net> writes:

> Toke Høiland-Jørgensen <toke at toke.dk> writes:
>
>> Felix Fietkau <nbd at nbd.name> writes:
>>
>>> On 2018-11-14 18:40, Toke Høiland-Jørgensen wrote:
>>>>> This part doesn't really make much sense to me, but maybe I'm
>>>>> misunderstanding how the code works.
>>>>> Let's assume we have a driver like ath9k or mt76, which tries to keep a
>>>>> number of aggregates in the hardware queue, and the hardware queue is
>>>>> currently empty.
>>>>> If the current txq entry is kept at the head of the schedule list,
>>>>> wouldn't the code just pull from that one over and over again, until
>>>>> enough packets are transmitted by the hardware and their tx status
>>>>> processed?
>>>>> It seems to me that while fairness is still preserved in the long run,
>>>>> this could lead to rather bursty scheduling, which may not be
>>>>> particularly latency friendly.
>>>> 
>>>> Yes, it'll be a bit more bursty when the hardware queue is completely
>>>> empty. However, when a TX completion comes back, that will adjust the
>>>> deficit of that sta and cause it to be rotated on the next dequeue. This
>>>> obviously relies on the fact that the lower-level hardware queue is
>>>> sufficiently shallow to not add a lot of latency. But we want that to be
>>>> the case anyway. In practice, it works quite well for ath9k, but not so
>>>> well for ath10k because it has a large buffer in firmware.
>>>> 
>>>> If we requeue the TXQ at the end of the list, a station that is taking
>>>> up too much airtime will fail to be throttled properly, so the
>>>> queue-at-head is kinda needed to ensure fairness...
>>> Thanks for the explanation, that makes sense to me. I have an idea on
>>> how to mitigate the burstiness within the driver. I'll write it down in
>>> pseudocode, please let me know if you think that'll work.
>>
>> I don't think it will, unfortunately. For example, consider the case
>> where there are two stations queued; one with a large negative deficit
>> (say, -10ms), and one with a positive deficit.
>
> Perhaps a flag for one way or the other?
>
> if(driver->has_absurd_hardware_queue_depth) doitthisway(); else
> doitabetterway();

Well, there's going to be a BQL-like queue limit (but for airtime) on
top, which drivers can opt into if the hardware has too much queueing.
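
To make that a bit more concrete: the idea is to track an estimate of
the airtime already sitting in the hardware queue, refuse to release
more frames once that estimate crosses a limit, and subtract the airtime
reported back by TX status. A standalone sketch (completely untested,
with made-up names; not the actual mac80211 interface):

#include <stdbool.h>
#include <stdint.h>

struct airtime_limit {
        uint32_t limit_us;      /* max estimated airtime in the HW queue */
        uint32_t pending_us;    /* estimated airtime currently in flight */
};

/* Called before handing a frame to the hardware.  Returning false keeps
 * the frame in the mac80211 TXQ, where the fairness scheduler can still
 * pick a different station on the next dequeue. */
static bool airtime_may_release(struct airtime_limit *aql, uint32_t est_us)
{
        if (aql->pending_us + est_us > aql->limit_us)
                return false;
        aql->pending_us += est_us;
        return true;
}

/* Called from TX status processing with the airtime the frame actually
 * used (or the original estimate if the hardware doesn't report it). */
static void airtime_tx_complete(struct airtime_limit *aql, uint32_t used_us)
{
        if (used_us > aql->pending_us)
                aql->pending_us = 0;
        else
                aql->pending_us -= used_us;
}

That way a driver with deep firmware queues only ever holds a bounded
amount of airtime, so the deficits the scheduler operates on stay
meaningful.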

>> In this case, we really need to throttle the station with a negative
>> deficit. But if the driver loops and caches txqs, we'll get something
>> like the following:
>>
>> - First driver loop iteration: returns TXQ with positive deficit.
>> - Second driver loop iteration: Only the negative-deficit TXQ is in the
>>   mac80211 list, so it will loop until that TXQ's deficit turns positive
>>   and return it.
>>
>> Because of this, the negative-deficit station won't be throttled, and we
>> won't get fairness.
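
For reference, the rotation being discussed here boils down to something
like the following; this is just a standalone model with made-up names,
not the actual patch:

#include <stddef.h>

struct sta_txq {
        int deficit_us;        /* signed airtime deficit in microseconds */
        struct sta_txq *next;  /* next entry in a circular active ring */
};

/* Pick the next station to pull packets from.  A station whose deficit
 * has gone negative is refilled by one quantum and rotated to the back,
 * so it is throttled until the airtime it already used has been paid
 * back. */
static struct sta_txq *next_txq(struct sta_txq **head, int quantum_us)
{
        struct sta_txq *txq;

        while ((txq = *head)) {
                if (txq->deficit_us > 0)
                        return txq;
                txq->deficit_us += quantum_us;
                *head = txq->next;   /* rotate: advance the ring head */
        }
        return NULL;
}

/* Charge the airtime reported by TX status against the station. */
static void report_airtime(struct sta_txq *txq, int airtime_us)
{
        txq->deficit_us -= airtime_us;
}

The station returned by next_txq() stays at the head of the ring until
the airtime charged via report_airtime() pushes its deficit negative,
which is exactly the throttling that a driver looping and caching TXQs
would bypass.
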
>>
>> How many frames will mt76 queue up below the driver point? I.e., how
>> much burstiness are you expecting this will introduce on that driver?
>>
>> Taking a step back, it's clear that it would be good to be able to
>> dequeue packets to multiple STAs at once (we need that for MU-MIMO on
>> ath10k as well). However, I don't think we can do that with the
>> round-robin fairness scheduler; so we are going to need a different
>> algorithm. I *think* it may be possible to do this with a virtual-time
>> scheduler, but I haven't sat down and worked out the details yet...
>
> The answer to which did not fit in the margins of your thesis. :)
>
> I too have been trying to come up with a better means of gang
> scheduling... for about 2 years now. In terms of bitmaps it looks a bit
> like QFQ, but honestly...

It's not the gang scheduling we need; deciding which devices to send to
at once is generally done in firmware anyway. We just need to be able to
dequeue packets for more than one station when possible. I don't think
we need the fancy bitmap stuff from QFQ, since we don't have that many
stations to schedule at once; we can probably live with O(log(n)) in the
number of active stations.
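
To sketch the kind of thing I have in mind (completely untested, all
names made up, and not a finished design): every station gets a virtual
time that advances with the weighted airtime it consumes, and the
scheduler simply picks whichever active station is furthest behind.
Since selection doesn't modify any shared deficit, several stations can
be picked for the same burst and each charged afterwards:

#include <stddef.h>
#include <stdint.h>

struct vt_sta {
        uint64_t vtime;   /* weighted airtime consumed so far */
        uint32_t weight;  /* relative airtime share, must be non-zero */
        int active;       /* station has packets queued */
};

/* When an idle station becomes active again, pull its virtual time up
 * to the current minimum so it can't claim all the airtime it "missed"
 * while it had nothing to send. */
static void vt_activate(struct vt_sta *sta, uint64_t min_vtime)
{
        if (sta->vtime < min_vtime)
                sta->vtime = min_vtime;
        sta->active = 1;
}

/* Pick the eligible station that is furthest behind in virtual time. */
static struct vt_sta *vt_pick(struct vt_sta *stas, size_t n)
{
        struct vt_sta *best = NULL;
        size_t i;

        for (i = 0; i < n; i++) {
                if (stas[i].active &&
                    (!best || stas[i].vtime < best->vtime))
                        best = &stas[i];
        }
        return best;
}

/* Charge the airtime a transmission actually used, scaled by weight so
 * higher-weighted stations advance more slowly and get picked more
 * often. */
static void vt_charge(struct vt_sta *sta, uint32_t airtime_us)
{
        sta->vtime += (uint64_t)airtime_us * 1024 / sta->weight;
}

A real implementation would keep the stations in an rbtree (or similar)
keyed on vtime to get the O(log(n)) behaviour; the linear scan above is
only there to keep the example short.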

> Is there going to be some point where whatever we have here is
> significantly better than what we had? Or not significantly worse? Or
> handwavy enough to fix the rest once enlightenment arrives?
>
> The perfect is the enemy of the good.

Well, what we have now works for ath9k, works reasonably well for ath10k
in pull mode, not so well for ath10k in push mode, and then there are
Felix's comments in this thread...

> I'd rather like the Intel folk to be weighing in on this stuff, too;
> trying to get an API right requires use cases.

Johannes has already reviewed a previous version, and I do believe he
said he'd review it again once we have converged on something :)

-Toke

