From: Toke Høiland-Jørgensen
To: Yibo Zhao
Cc: make-wifi-fast@lists.bufferbloat.net, linux-wireless@vger.kernel.org,
 Felix Fietkau, Rajkumar Manoharan, Kan Yan,
 linux-wireless-owner@vger.kernel.org
Subject: Re: [Make-wifi-fast] [RFC/RFT] mac80211: Switch to a virtual time-based airtime scheduler
Date: Tue, 09 Apr 2019 22:41:06 +0200
Message-ID: <877ec2ykrh.fsf@toke.dk>
References: <20190215170512.31512-1-toke@redhat.com>
 <753b328855b85f960ceaf974194a7506@codeaurora.org>
 <87ftqy41ea.fsf@toke.dk>

Yibo Zhao writes:

> On 2019-04-04 16:31, Toke Høiland-Jørgensen wrote:
>> Yibo Zhao writes:
>>
>>> On 2019-02-16 01:05, Toke Høiland-Jørgensen wrote:
>>>> This switches the airtime scheduler in mac80211 to use a virtual
>>>> time-based scheduler instead of the round-robin scheduler used
>>>> before. This has a couple of advantages:
>>>>
>>>> - No need to sync up the round-robin scheduler in firmware/hardware
>>>>   with the round-robin airtime scheduler.
>>>>
>>>> - If several stations are eligible for transmission we can schedule
>>>>   both of them; no need to hard-block the scheduling rotation until
>>>>   the head of the queue has used up its quantum.
>>>>
>>>> - The check of whether a station is eligible for transmission
>>>>   becomes simpler (in ieee80211_txq_may_transmit()).
>>>>
>>>> The drawback is that scheduling becomes slightly more expensive, as
>>>> we need to maintain an rbtree of TXQs sorted by virtual time. This
>>>> means that ieee80211_register_airtime() becomes O(logN) in the
>>>> number of currently scheduled TXQs. However, hopefully this number
>>>> rarely grows too big (it's only TXQs currently backlogged, not all
>>>> associated stations), so it shouldn't be too big of an issue.
>>>>
>>>> @@ -1831,18 +1830,32 @@ void ieee80211_sta_register_airtime(struct ieee80211_sta *pubsta, u8 tid,
>>>>  {
>>>>  	struct sta_info *sta = container_of(pubsta, struct sta_info, sta);
>>>>  	struct ieee80211_local *local = sta->sdata->local;
>>>> +	struct ieee80211_txq *txq = sta->sta.txq[tid];
>>>>  	u8 ac = ieee80211_ac_from_tid(tid);
>>>> -	u32 airtime = 0;
>>>> +	u64 airtime = 0, weight_sum;
>>>> +
>>>> +	if (!txq)
>>>> +		return;
>>>>
>>>>  	if (sta->local->airtime_flags & AIRTIME_USE_TX)
>>>>  		airtime += tx_airtime;
>>>>  	if (sta->local->airtime_flags & AIRTIME_USE_RX)
>>>>  		airtime += rx_airtime;
>>>>
>>>> +	/* Weights scale so the unit weight is 256 */
>>>> +	airtime <<= 8;
>>>> +
>>>>  	spin_lock_bh(&local->active_txq_lock[ac]);
>>>> +
>>>>  	sta->airtime[ac].tx_airtime += tx_airtime;
>>>>  	sta->airtime[ac].rx_airtime += rx_airtime;
>>>> -	sta->airtime[ac].deficit -= airtime;
>>>> +
>>>> +	weight_sum = local->airtime_weight_sum[ac] ?: sta->airtime_weight;
>>>> +
>>>> +	local->airtime_v_t[ac] += airtime / weight_sum;
>>>
>>> Hi Toke,
>>>
>>> Please ignore the previous two broken emails regarding this new
>>> proposal from me.
>>>
>>> It looks like local->airtime_v_t acts as a Tx criterion. Only the
>>> stations with less airtime than that are eligible for Tx. That means
>>> there are situations, like with 50 clients, where some of the
>>> stations can be used to Tx when next_txq is put in a loop. Am I
>>> right?
>>
>> I'm not sure what you mean here. Are you referring to the case where
>> new stations appear with a very low (zero) airtime_v_t? That is
>> handled when the station is enqueued.

> Hi Toke,
>
> Sorry for the confusion. I am not referring to the case that you
> mentioned, though it can be solved by your subtle design, max(local
> vt, sta vt). :-)
>
> Actually, my concern is the situation where next_txq is put in a
> loop. Let me explain a little more; see below.
>
>> @@ -3640,126 +3638,191 @@ EXPORT_SYMBOL(ieee80211_tx_dequeue);
>>  struct ieee80211_txq *ieee80211_next_txq(struct ieee80211_hw *hw, u8 ac)
>>  {
>>  	struct ieee80211_local *local = hw_to_local(hw);
>> +	struct rb_node *node = local->schedule_pos[ac];
>>  	struct txq_info *txqi = NULL;
>> +	bool first = false;
>>
>>  	lockdep_assert_held(&local->active_txq_lock[ac]);
>>
>> - begin:
>> -	txqi = list_first_entry_or_null(&local->active_txqs[ac],
>> -					struct txq_info,
>> -					schedule_order);
>> -	if (!txqi)
>> +	if (!node) {
>> +		node = rb_first_cached(&local->active_txqs[ac]);
>> +		first = true;
>> +	} else
>> +		node = rb_next(node);
>
> Consider the below piece of code from ath10k_mac_schedule_txq():
>
> 	ieee80211_txq_schedule_start(hw, ac);
> 	while ((txq = ieee80211_next_txq(hw, ac))) {
> 		while (ath10k_mac_tx_can_push(hw, txq)) {
> 			ret = ath10k_mac_tx_push_txq(hw, txq);
> 			if (ret < 0)
> 				break;
> 		}
> 		ieee80211_return_txq(hw, txq);
> 		ath10k_htt_tx_txq_update(hw, txq);
> 		if (ret == -EBUSY)
> 			break;
> 	}
> 	ieee80211_txq_schedule_end(hw, ac);
>
> If my understanding is right, local->schedule_pos is used to record
> the last scheduled node and to traverse the rbtree for valid txqs.
> There is a chance that an empty txq is fed to return_txq and gets
> removed from the rbtree. The empty txq will always be the rb_first
> node. Then in the following next_txq, local->schedule_pos becomes
> meaningless, since rb_next on it will return NULL and the loop
> breaks. Only rb_first gets dequeued during this loop.
>
> 	if (!node || RB_EMPTY_NODE(node)) {
> 		node = rb_first_cached(&local->active_txqs[ac]);
> 		first = true;
> 	} else
> 		node = rb_next(node);

Ah, I see what you mean. Yes, that would indeed be a problem - nice
catch! :)

> How about this? The nodes on the rbtree will be dequeued and removed
> from the rbtree one by one until the HW is busy. Please note that the
> local vt and sta vt will not be updated, since the txq lock is held
> during this time.

Insertion and removal from the rbtree are relatively expensive, so I'd
rather not do that for every txq. I think a better way to solve this is
to just defer the actual removal from the tree until
ieee80211_txq_schedule_end()... Will fix that when I submit this again.
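Roughly, something along these lines is what I have in mind (completely
untested sketch, just to illustrate the idea; the
IEEE80211_TXQ_DEFER_REMOVE flag name is made up, the field names are
the ones from this RFC, and a real version would probably track the
marked TXQs in a separate list instead of sweeping the whole tree):

	/* Sketch: ieee80211_return_txq() only *marks* an empty TXQ instead
	 * of erasing it, so the rbtree is left untouched while the driver
	 * is still walking it via schedule_pos. */
	void ieee80211_return_txq(struct ieee80211_hw *hw,
				  struct ieee80211_txq *txq)
	{
		struct ieee80211_local *local = hw_to_local(hw);
		struct txq_info *txqi = to_txq_info(txq);

		lockdep_assert_held(&local->active_txq_lock[txq->ac]);

		if (!RB_EMPTY_NODE(&txqi->schedule_order) &&
		    skb_queue_empty(&txqi->frags) &&
		    !txqi->tin.backlog_packets)
			/* TXQ is empty; remember to drop it from the tree
			 * once the scheduling round is over. */
			set_bit(IEEE80211_TXQ_DEFER_REMOVE, &txqi->flags);
	}

	/* The actual removal then happens here, after the driver loop (as
	 * in ath10k_mac_schedule_txq() above) has finished iterating. */
	void ieee80211_txq_schedule_end(struct ieee80211_hw *hw, u8 ac)
		__releases(txq_lock)
	{
		struct ieee80211_local *local = hw_to_local(hw);
		struct rb_node *node, *next;

		for (node = rb_first_cached(&local->active_txqs[ac]);
		     node; node = next) {
			struct txq_info *txqi = rb_entry(node, struct txq_info,
							 schedule_order);

			/* Grab the successor before a potential erase. */
			next = rb_next(node);

			if (test_and_clear_bit(IEEE80211_TXQ_DEFER_REMOVE,
					       &txqi->flags)) {
				rb_erase_cached(&txqi->schedule_order,
						&local->active_txqs[ac]);
				RB_CLEAR_NODE(&txqi->schedule_order);
			}
		}

		local->schedule_pos[ac] = NULL;
		spin_unlock_bh(&local->active_txq_lock[ac]);
	}

-Toke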