[Make-wifi-fast] [PATCH v8 0/2] Implement Airtime-based Queue Limit (AQL)
Dave Taht
dave.taht at gmail.com
Wed Dec 18 10:04:59 EST 2019
On Fri, Dec 6, 2019 at 2:05 PM Kan Yan via Make-wifi-fast
<make-wifi-fast at lists.bufferbloat.net> wrote:
>
>
>
>
> ---------- Forwarded message ----------
> From: Kan Yan <kyan at google.com>
> To: Dave Taht <dave at taht.net>
> Cc: Johannes Berg <johannes at sipsolutions.net>, Kalle Valo <kvalo at codeaurora.org>, "Toke Høiland-Jørgensen" <toke at redhat.com>, Rajkumar Manoharan <rmanohar at codeaurora.org>, Kevin Hayes <kevinhayes at google.com>, Make-Wifi-fast <make-wifi-fast at lists.bufferbloat.net>, linux-wireless <linux-wireless at vger.kernel.org>, Yibo Zhao <yiboz at codeaurora.org>, John Crispin <john at phrozen.org>, Lorenzo Bianconi <lorenzo at kernel.org>, Felix Fietkau <nbd at nbd.name>
> Bcc:
> Date: Fri, 6 Dec 2019 14:04:44 -0800
> Subject: Re: [Make-wifi-fast] [PATCH v8 0/2] Implement Airtime-based Queue Limit (AQL)
> Dave Taht (taht.net) writes:
>
> > Judging from Kan's (rather noisy) data set, 10ms is a good default on
> > 5ghz. There is zero difference in throughput as near as I can tell.
> > It would be interesting to try 3ms (as there's up to 8ms of
> > buffering in the driver) to add to this dataset; it would also help
> > to measure the actual tcp rtt in addition to the fq behavior.
>
> One large aggregation in 11ac can last 4-5 ms; with bursting,
> firmware/hardware can complete as much as 8-10 ms worth of frames in one
> shot and then try to dequeue more frames, so the jitter in the
> sojourn time can be as high as 8-10 ms. Setting the default target to
> something less than 10ms can cause unnecessary packet drops on some
> occasions.
That depends on what you mean by "unnecessary packet drop". Most flows
are short and don't last long enough to keep the queue above target for
a whole interval. Much traffic is not greedy in the first place. Big fat
flows benefit from shorter queues in multiple ways. The "jitter" here is
smoothed out by the interval. Retransmits go up (without ecn) but the
amount of data in flight goes down, and other flows can grab the link
faster.
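
To make the "smoothed out by the interval" point concrete, here is a
stripped-down sketch of the codel test - illustration only, not the real
include/net/codel_impl.h logic, which also handles the inverse-sqrt drop
schedule, ECN marking and backlog checks:

#include <linux/types.h>

/*
 * Illustration only: a packet becomes eligible for dropping only after
 * the sojourn time has stayed above target continuously for a full
 * interval.  A transient 8-10ms completion burst that drains back under
 * target just resets the clock.
 */
struct codel_sketch {
	bool above_target;
	u64 drop_after;		/* ns timestamp: first_above + interval */
};

static bool codel_should_drop_sketch(struct codel_sketch *st, u64 now_ns,
				     u64 sojourn_ns, u64 target_ns,
				     u64 interval_ns)
{
	if (sojourn_ns < target_ns) {
		st->above_target = false;	/* dipped below target: reset */
		return false;
	}
	if (!st->above_target) {
		st->above_target = true;
		st->drop_after = now_ns + interval_ns;
		return false;
	}
	return now_ns >= st->drop_after;
}

The target only sets where the clock starts; it is the interval that has
to elapse, with the queue above target the whole time, before anything
gets dropped.
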
Really, really, really - at 5ghz, in most circumstances, even a 10ms
target still leads to unnecessary delay, with no loss in throughput,
in every test I've ever tried. I'm gearing up to try and find cases
where it might be bad (on the ath10k and ax200 chips), but I tend to
point to other factors (like unbounded hardware retries) as the source
of that problem. Any suggestions as to where this would lead to
"unnecessary packet loss" that can be confirmed by testing are welcome.
A core thought experiment - not that this happens IRL - is flooding
100 stations with 1 flow each. What "should" happen in this case? No
matter the "target", performance will inevitably decline to 1 packet
per txop. An alternative - reserving 4ms per station so as to preserve
bandwidth - would result in timeouts for a percentage of stations (this
was far, far worse prior to AQL on this chip; see the lwn preso, where
only 5 flows out of 100 could even start, due to delays cracking 2sec).
So my vote would be: cut the default target on 5ghz to 10ms, and see if
anybody notices.
And secondly, make it configurable from userspace, somehow. I'd add an
"autotune" parameter as we try to find a better value for a variety
of use cases (as I described below), but finally exposing the target and
interval would be good.
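
For the debugfs flavor, the knob itself is tiny. A hypothetical sketch -
the entry names, the ms-based struct and how it would get plumbed back
into the codel_params mac80211 actually uses are all made up here,
nothing like this is upstream:

#include <linux/debugfs.h>
#include <linux/types.h>

/* Hypothetical tuning struct, purely for illustration. */
struct codel_tuning_sketch {
	u32 target_ms;		/* 20 today; 10 (or less) proposed for 5ghz */
	u32 interval_ms;	/* 100 */
};

static void sketch_add_codel_knobs(struct codel_tuning_sketch *t,
				   struct dentry *phy_dir)
{
	t->target_ms = 20;
	t->interval_ms = 100;

	/* 0644: world-readable, root-writable */
	debugfs_create_u32("codel_target_ms", 0644, phy_dir, &t->target_ms);
	debugfs_create_u32("codel_interval_ms", 0644, phy_dir, &t->interval_ms);
}

The nl80211 route Kalle suggests would presumably look more like the
existing NL80211_ATTR_TXQ_LIMIT / NL80211_ATTR_TXQ_QUANTUM wiphy
attributes - more work, but it survives production builds.
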
And tackle hw retries. It's a really, really long tail above the 95th
percentile (see the attached plot).
>
>
> On Fri, Dec 6, 2019 at 11:53 AM Dave Taht <dave at taht.net> wrote:
> >
> > Johannes Berg <johannes at sipsolutions.net> writes:
> >
> > > On Wed, 2019-12-04 at 04:47 +0000, Kalle Valo wrote:
> > >>
> > >> > Overall, I think AQL and fq_codel work well, at least with ath10k.
> > >> > The current target value of 20 ms is a reasonable default.
> >
> > >> > It is
> > >> > relatively conservative that helps stations with weak signal to
> > >> > maintain stable throughput.
> >
> > This statement is overbroad and largely incorrect.
> >
> > >> > Although a debugfs entry that allows
> > >> > runtime adjustment of target value could be useful.
> > >>
> > >> Why not make it configurable via nl80211? We should use debugfs only for
> > >> testing and debugging, not in production builds, and to me the use case
> > >> for this value sounds like more than just testing.
> >
> > I certainly lean towards making it configurable AND autotuning it
> > better.
> >
> > > On the other hand, what application/tool or even user would be able to
> > > set this correctly?
> >
> > The guideline from the theory ("Power") is that the target should be
> > 5-10% of the interval, and the interval fairly close to the most
> > commonly observed max RTT. I should try to stress (based on some
> > statements made here) that you have to *consistently* exceed the
> > target for the interval in order for codel to have any effect at all.
> > Please try to internalize that - the smoothing comes from the
> > interval... 100ms is quite a large interval....
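
(Putting numbers on that guideline: with the default 100ms interval, the
5-10% rule lands the target at 5-10ms, and codel doesn't drop anything
until the sojourn time has stayed above that target for the whole 100ms -
so a 10ms target is not "drop whenever queueing exceeds 10ms".)
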
> >
> > Judging from Kan's (rather noisy) data set, 10ms is a good default on
> > 5ghz. There is zero difference in throughput as near as I can tell.
> >
> > It would be interesting to try 3ms (as there's up to 8ms of
> > buffering in the driver) to add to this dataset; it would also help
> > to measure the actual tcp rtt in addition to the fq behavior.
> >
> > I see what looks like channel scan behavior in the data (on the
> > client?). Running tests for 5 minutes will show the impact and frequency
> > of channel scans better.
> >
> > The 20ms figure we used initially was due to a variety of factors:
> >
> > * This was the first ever attempt at applying an AQM technology to wifi!!!
> > ** FIXED: http://blog.cerowrt.org/post/real_results/
> > * We were debugging the FQ component, primarily.
> > ** FIXED: http://blog.cerowrt.org/post/crypto_fq_bug/
> > * We were working on backports and on integrating a zillion other pieces
> > all in motion.
> > ** sorta FIXED. I know dang full well how many darn variables there
> > are, as well as how much the network stack has changed since the initial work.
> > * We were working on 2.4ghz, which has a baseline rate of 1Mbit (13ms target).
> > Our rule of thumb is that the min target needs to be the transmit time
> > of MTU*1.5 at that baseline rate (see the short calculation after this
> > list). There was also a fudge factor to account for half duplex
> > operation and the minimum size of a txop.
> > ** FIXED: 5ghz has a baseline rate of 6Mbits.
> > * We didn't have tools to look at tcp rtts at the time
> > ** FIXED: flent --socket-stats tcp_nup
> > * We had issues with power save
> > ** Everybody has issues with powersave...
> > ** These are still extant on many platforms, notably ones that wake up
> > and dump all their accumulated mcast data into the link. Not our problem.
> > * channel scans: http://blog.cerowrt.org/post/disabling_channel_scans/
> > ** Non-background channel scans are very damaging. I am unsure from this
> > data whether that's what we are seeing from the client, or the ath10k.
> > The ability to do these in the background (or not) might be a factor in
> > autotuning things better.
> > * We had MAJOR issues with TSQ
> > ** FIXED: https://lwn.net/Articles/757643/
> >
> > Honestly, the TSQ interaction was the biggest barrier to figuring out
> > what was going wrong at the time we upstreamed this. A tcp_nup test now,
> > with TSQ closer to "right", AQL in place and the reduced target,
> > should be interesting. I think the data we have on TSQ vs wifi on
> > this chip is now totally obsolete.
> >
> > * We had issues with mcast
> > ** I think we still have many issues with multicast but improving that
> > is a separate problem entirely.
> > * We ran out of time and money, and had hit it so far out of the park
> > ( https://lwn.net/Articles/705884/ )
> > that it seemed like sleeping more and tweaking things less was a win.
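
To put numbers on the MTU*1.5 rule of thumb in the 2.4ghz bullet above,
here is a back-of-the-envelope check (plain userspace C, purely
illustrative):

#include <stdio.h>

int main(void)
{
	const double mtu_bytes = 1500.0;
	const double baseline_bps = 6e6;	/* 5ghz basic rate: 6Mbit/s */
	double min_target_ms = 1.5 * mtu_bytes * 8.0 / baseline_bps * 1000.0;

	printf("min codel target at 6Mbit/s: %.1f ms\n", min_target_ms);
	/* prints 3.0 ms - consistent with the 3ms experiment suggested
	 * earlier, before any half-duplex / minimum-txop fudge factor */
	return 0;
}
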
> >
> > Judging from the results we now get on 5ghz and on ac, it seems good to
> > reduce the target to 10ms (or less!) on 5ghz, especially on ac,
> > which will result in less path inflation and no loss in throughput.
> >
> > I have been running with a 6ms target for several years now on my
> > 802.11n 5ghz devices. (I also advertise a 3ms txop rather than the
> > default txop size.) These are, admittedly, mostly used as backhaul
> > links (so I didn't have tsq, aql, rate changes, etc.), but seeing a path
> > inflation of no more than 30ms under full bidirectional load is
> > nice (and still 22ms worse than it could be in a more perfect world).
> >
> > Another thing I keep trying to stress: TCP's ability to grab more
> > bandwidth is quadratic relative to the delay.
> >
> > >
> > > johannes
>
--
Make Music, Not War
Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-435-0729
-------------- next part --------------
A non-text attachment was scrubbed...
Name: long_tail.png
Type: image/png
Size: 49625 bytes
Desc: not available
URL: <https://lists.bufferbloat.net/pipermail/make-wifi-fast/attachments/20191218/6ac43dc6/attachment-0001.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: tcp_nup-2019-12-18T064519.785631.5ghz_flows_2.flent.gz
Type: application/gzip
Size: 792813 bytes
Desc: not available
URL: <https://lists.bufferbloat.net/pipermail/make-wifi-fast/attachments/20191218/6ac43dc6/attachment-0001.gz>