On 2018-09-26 02:22, Toke Høiland-Jørgensen wrote:
> Rajkumar Manoharan writes:
>
>> On 2018-09-16 10:42, Toke Høiland-Jørgensen wrote:
>>> +	if (ret) {
>>> +		clear_bit(IEEE80211_TXQ_AIRTIME_THROTTLE, &txqi->flags);
>>> +		list_del_init(&txqi->schedule_order);
>>> +	} else
>>> +		set_bit(IEEE80211_TXQ_AIRTIME_THROTTLE, &txqi->flags);
>>> +
>>
>> This looks wrong to me. txqi->flags are protected by fq->lock, but here
>> it is by active_txq_lock, no?
>
> Both set_bit() and clear_bit() are atomic operations, so they don't need
> separate locking. See Documentation/atomic_bitops.txt

:( Yeah... I got confused by the attached soft lockup on an ARM platform.

>>> @@ -3677,6 +3751,7 @@ void ieee80211_txq_schedule_end(struct
>>> ieee80211_hw *hw, u8 ac)
>>>  	struct ieee80211_local *local = hw_to_local(hw);
>>>
>>>  	spin_unlock_bh(&local->active_txq_lock[ac]);
>>> +	tasklet_schedule(&local->wake_txqs_tasklet);
>>>  }
>>>
>>
>> It is an overload to schedule wake_txqs_tasklet for each txq unlock.
>> Instead, I would prefer to call __ieee80211_kick_airtime() from
>> schedule_end. Thoughts?
>
> Yeah, I realise scheduling the whole wake_txqs_tasklet is maybe a bit
> heavy-handed here. However, just calling into __ieee80211_kick_airtime()
> means we'll end up recursing back to the same place, which is not good
> either (we could in theory flip-flop between two queues until we run out
> of stack space).

IMHO, schedule_start/end is needed to track the first node in order to
break the loop. That is not applicable when the driver calls
may_transmit(). It would be better if active_txq_lock were moved inside
may_transmit().

> My "backup plan", if the wake_txqs_tasklet turns out to be too heavy, was
> to define a new tasklet just for this use; but I wanted to see if this
> actually turned out to be a problem first...

Instead of adding a new tasklet, we can introduce a new API to iterate
through the throttled txqs.
Drivers that make use of may_transmit() would have to call this new API
at the end of their IRQ handler, i.e. after processing tx/rx.

-Rajkumar