[Codel] [RFC] ath10k: implement dql for htt tx
Michal Kazior
michal.kazior at tieto.com
Tue Mar 29 03:49:13 EDT 2016
On 26 March 2016 at 17:44, Dave Taht <dave.taht at gmail.com> wrote:
> Dear Michal:
[...]
> I am running behind on this patch set, but a couple quick comments.
[...]
>> - no rrul tests, sorry Dave! :)
>
> rrul would be a good baseline to have, but no need to waste your time
> on running it every time as yet. It stresses out both sides of the
> link so whenever you get two devices with these driver changes on them
> it would be "interesting". It's the meanest, nastiest test we have...
> if you can get past the rrul, you've truly won.
>
> Consistently using tcp_fair_up with 1,2,4 flows and 1-4 stations as
> you are now is good enough.
>
> doing a more voip-like test with slamming d-itg into your test would be good...
>
>>
>> Observations / conclusions:
>> - DQL builds up throughput slowly on "veryfast"; in some tests it
>> doesn't get to reach peak (roughly 210mbps average) because the test
>> is too short
>
> It looks like having access to the rate control info here for the
> initial and ongoing estimates will react faster and better than dql
> can. I loved the potential here in getting full rate for web traffic
> in the usual 2second burst you get it in (see above blog entries)
On one hand - yes, rate control should in theory be "faster".
On the other hand, DQL also reacts to host interrupt service time. On
slow CPUs (typically found on routers and such) you can end up
grinding the CPU so much that you need deeper tx queues to keep the hw
busy (and therefore keep performance maxed out). DQL should adjust to
that automatically, while a "txop limit" might not.
>> - DQL shows better latency results in almost all cases compared to
>> the txop based scheduling from my mac80211 RFC (but i haven't
>> thoroughly looked at *all* the data; I might've missed a case where it
>> performs worse)
>
> Well, if you are not saturating the link, latency will be better.
> Showing how much less latency is possible, is good too, but....
If you wait long enough, DQL does scale the tx queue size up for max
performance (you can see TCP throughput ramping up towards the end of
the test). It does take a few seconds to reach the peak, though.
Perhaps this could be tweaked.
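If the ramp-up turns out to be too slow in practice there are a couple
of knobs in struct dql that could be played with - purely speculative,
not something I've tested:

static void htt_tx_dql_tune(struct dql *dql)
{
        /* never starve the hw right after (re)association */
        dql->min_limit = 4 * 1518;
        /* keep the limit well below what 1424 tx slots could hold */
        dql->max_limit = 256 * 1518;
        /* default slack hold time is HZ; shrink the limit back faster */
        dql->slack_hold_time = HZ / 10;
}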
>> - latency improvement seen on sw/ath10k_dql @ rate6m,fast compared to
>> sw/base (1800ms -> 160ms) can be explained by the fact that txq AC
>> limit is 256 and since all TCP streams run on BE (and fq_codel as the
>> qdisc) the induced txq latency is 256 * (1500 / (6*1024*1024/8.)) / 4
>> = ~122ms which is pretty close to the test data (the formula ignores
>> MAC overhead, so the latency in practice is larger). Once you consider
>> the overhead and in-flight packets on driver-firmware tx queue 160ms
>> doesn't seem strange. Moreover when you compare the same case with
>> sw/fq+ath10k_dql you can clearly see the advantage of having fq_codel
>> in mac80211 software queuing - the latency drops by (another) order of
>> magnitude because now incomming ICMPs are treated as new, bursty flows
>> and get fed to the device quickly.
>
>
> It is always good to test codel and fq_codel separately, particularly
> on a new codel implementation. There are so many ways to get codel
> wrong or add an optimization that doesn't work (speaking as someone
> that has got it wrong often)
>
> If you are getting a fq result of 12 ms, that means you are getting
> data into the device with a ~12ms standing queue there. On a good day
> you'd see perhaps 17-22ms for "codel target 5ms" in that case, on the
> rtt_fair_up series of tests.
This will obviously depend on the number of stations you have data
queued to. Estimating codel target time requires smarter tx
scheduling. My earlier (RFC) patch tried doing that.
> if you are getting a pure codel result of 160ms, that means the
> implementation is broken. But I think (after having read your
> description twice), the baseline result today of 160ms of queuing was
> with a fq_codel *qdisc* doing the work on top of huge buffers,
Yes. The 160ms is with the fq_codel qdisc on top of ath10k doing DQL
at 6mbps. Without DQL ath10k would clog up all of its tx slots (1424
of them) with frames. At 6mbps you typically want/need only a handful
(5-10) of frames queued.
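For scale, using the same back-of-the-envelope math as above (and
again ignoring MAC overhead):

  1424 * (1500 / (6*1024*1024/8.)) = ~2.7s of standing queue
    10 * (1500 / (6*1024*1024/8.)) = ~19ms

so without DQL the driver alone can buffer up seconds worth of data at
6mbps.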
> the
> results a few days ago were with a fq_codel 802.11 layer, and the
> results today you are comparing, are pure fq (no codel) in the 802.11e
> stack, with fixed (and dql) buffering?
Yes. The codel target in fq_codel-in-mac80211 is hardcoded at 20ms for
now because there's no tx scheduling yet and hence no data from which
to derive the target dynamically.
> if so. Yea! Science!
>
> ...
>
> One of the flaws of the flent tests is that conceptually they were
> developed before the fq stuff won so big, and looking hard at the
> per-queue latency for the fat flows requires either looking hard at
> the packet captures or sampling the actual queue length. There is that
> sampling capability in various flent tests, but at the moment it only
> samples what tc provides (Drops, marks, and length) and it does not
> look like there is a snapshot queue length exported from that ath10k
> driver?
Exporting a tx queue length snapshot should be fairly easy: two
debugfs entries for ar->htt.max_num_pending_tx and ar->htt.num_pending_tx.
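Something like this should do (untested sketch that would live in
debug.c next to the other fops; the file name is picked arbitrarily):

static ssize_t ath10k_read_htt_pending_tx(struct file *file,
                                          char __user *user_buf,
                                          size_t count, loff_t *ppos)
{
        struct ath10k *ar = file->private_data;
        char buf[64];
        int len;

        spin_lock_bh(&ar->htt.tx_lock);
        len = scnprintf(buf, sizeof(buf), "%d / %d\n",
                        ar->htt.num_pending_tx, ar->htt.max_num_pending_tx);
        spin_unlock_bh(&ar->htt.tx_lock);

        return simple_read_from_buffer(user_buf, count, ppos, buf, len);
}

static const struct file_operations fops_htt_pending_tx = {
        .read = ath10k_read_htt_pending_tx,
        .open = simple_open,
        .owner = THIS_MODULE,
        .llseek = default_llseek,
};

/* in ath10k_debug_register(): */
debugfs_create_file("htt_pending_tx", 0400, ar->debug.debugfs_phy, ar,
                    &fops_htt_pending_tx);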
>
> ...
>
> As for a standing queue of 12ms at all in wifi... and making the fq
> portion work better, it would be quite nice to get that down a bit
> more. One thought (for testing purposes) would be to fix a txop at
> 1024,2048,3xxxus for some test runs. I really don't have a a feel for
> framing overhead on the latest standards. (I loathe the idea of
> holding the media for more than 2-3ms when you have other stuff coming
> in behind it...)
>
> Another is to hold off preparing and submitting a new batch of
> packets; when you know the existing TID will take 4ms to transmit,
> defer grabbing the next batch for 3ms. Etc.
I don't think hardcoding timings for tx scheduling is a good idea. I
believe we just need a deficit-based round robin with time slices. The
problem I see is that the right time slice may change with host CPU
load. That's why I'm leaning towards more experiments with the DQL
approach.
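Something along these lines is what I have in mind (illustrative
pseudo-C, not a patch; txq_has_frames(), estimated_airtime_us() and
txq_dequeue_and_push_to_fw() are hand-waved placeholders - getting the
airtime estimate is the open question):

#include <linux/list.h>

#define TXQ_QUANTUM_US 2000     /* arbitrary ~2ms slice per station */

struct txq_sched_entry {
        struct list_head list;
        int deficit_us;
        /* the driver/mac80211 txq pointer would live here */
};

/* placeholders, not real ath10k/mac80211 API */
static bool txq_has_frames(struct txq_sched_entry *e);
static int estimated_airtime_us(struct txq_sched_entry *e);
static void txq_dequeue_and_push_to_fw(struct txq_sched_entry *e);

static void txq_sched_round(struct list_head *active)
{
        struct txq_sched_entry *e, *tmp;

        list_for_each_entry_safe(e, tmp, active, list) {
                e->deficit_us += TXQ_QUANTUM_US;

                while (e->deficit_us > 0 && txq_has_frames(e)) {
                        e->deficit_us -= estimated_airtime_us(e);
                        txq_dequeue_and_push_to_fw(e);
                }

                if (!txq_has_frames(e)) {
                        e->deficit_us = 0;
                        list_del_init(&e->list);
                }
        }
}

The nice bit is that the quantum wouldn't have to be hardcoded - it
could be derived from whatever DQL converges to on the host at hand.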
> It would be glorious to see wifi capable of decent twitch gaming again...
>
>> - slow+fast case still sucks but that's expected because DQL hasn't
>> been applied per-station
>>
>> - sw/fq has lower peak throughput ("veryfast") compared to sw/base
>> (this actually proves current - and very young least to say - ath10k
>> wake-tx-queue implementation is deficient; ath10k_dql improves it and
>> sw/fq+ath10k_dql climbs up to the max throughput over time)
>>
>>
>> To sum things up:
>> - DQL might be able to replace the explicit txop queue limiting
>> (which requires rate control info)
>
> I am pessimistic. Perhaps as a fallback?
At first I, too, was considering DQL as just a nice fallback, but the
more I think about it, the more it makes sense to use it as the main
source for deriving time slices for tx scheduling.
>> - mac80211 fair queuing works
>
> :)
>
>>
>> A few plots for quick and easy reference:
>>
>> http://imgur.com/a/TnvbQ
>>
>>
>> Michał
>>
>> PS. I'm not feeling comfortable attaching 1MB attachment to a mailing
>> list. Is this okay or should I use something else next time?
>
> I/you can slam results into the github blogcerowrt repo and then pull
> out stuff selectively....
Good idea, thanks!
Michał