[Make-wifi-fast] TCP performance regression in mac80211 triggered by the fq code
dave.taht at gmail.com
Wed Jul 13 03:57:47 EDT 2016
On Tue, Jul 12, 2016 at 4:02 PM, Dave Taht <dave.taht at gmail.com> wrote:
> On Tue, Jul 12, 2016 at 3:21 PM, Felix Fietkau <nbd at nbd.name> wrote:
>> On 2016-07-12 14:13, Dave Taht wrote:
>>> On Tue, Jul 12, 2016 at 12:09 PM, Felix Fietkau <nbd at nbd.name> wrote:
>>>> With Toke's ath9k txq patch I've noticed a pretty nasty performance
>>>> regression when running local iperf on an AP (running the txq stuff) to
>>>> a wireless client.
>>> Your kernel? cpu architecture?
>> QCA9558, 720 MHz, running Linux 4.4.14
So this is a single core at the near-bottom end of the range. I guess
we also should find a MIPS 24c derivative that runs at 400 MHz or so.
What HZ? (I no longer know how much higher HZ settings make any
difference, but I'm usually at NOHZ and 250, rather than 100.)
And all the testing to date was on much higher end multi-cores.
>>> What happens when going through the AP to a server from the wireless client?
>> Will test that next.
>>> Which direction?
>> AP->STA, iperf running on the AP. Client is a regular MacBook Pro
> There are always 2 wifi chips in play. Like the Sith.
>>>> Here's some things that I found:
>>>> - when I use only one TCP stream I get around 90-110 Mbit/s
>>> with how much cpu left over?
>>>> - when running multiple TCP streams, I get only 35-40 Mbit/s total
>>> with how much cpu left over?
To me this implies a lock contention issue, too much work in the IRQ
handler, or too-delayed work in the softirq handler....
I thought you were very brave to try and backport this.
> Care to try netperf?
>>> context switch difference between the two tests?
>> What's the easiest way to track that?
> if you have gnu "time" time -v the_process
> perf record -e context-switches -ag
> or: parse /proc/$PID/status for the ctxt_switches counters
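To spell out the three approaches above as a runnable sketch (assuming a
Linux host with GNU time and perf installed; "iperf" stands in for
whatever process is under test):

```shell
# 1. GNU time reports context switches when the process exits.
#    Note: must be /usr/bin/time, not the shell builtin.
/usr/bin/time -v iperf -c 192.168.1.1 2>&1 | grep -i 'context switches'

# 2. perf can sample context-switch events system-wide for 10 seconds:
perf record -e context-switches -ag -- sleep 10
perf report --stdio | head -20

# 3. /proc exposes per-process counters while the test runs
#    (voluntary_ctxt_switches and nonvoluntary_ctxt_switches):
grep ctxt_switches /proc/$(pidof iperf)/status
```

Comparing the counters before and after each test run gives the context
switch difference between the one-flow and multi-flow cases.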
>>> tcp_limit_output_bytes is?
> I keep hoping to be able to reduce this to something saner like 4096
> one day. It got bumped to 64k based on bad wifi performance once, and
> then to its current size to make the Xen folk happier.
> The other param I'd like to see fiddled with is tcp_notsent_lowat.
> In both cases reductions will increase your context switches but
> reduce memory pressure and lead to a more reactive tcp.
> And in neither case do I think this is the real cause of this problem.
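For reference, a sketch of inspecting and fiddling with the two sysctls
discussed above (the 16384 values are illustrative assumptions, not
recommendations; writing them requires root):

```shell
# Current values:
sysctl net.ipv4.tcp_limit_output_bytes
sysctl net.ipv4.tcp_notsent_lowat

# Example: shrink the TSQ budget and the not-sent low-water mark to
# make TCP more reactive, at the cost of more context switches:
sysctl -w net.ipv4.tcp_limit_output_bytes=16384
sysctl -w net.ipv4.tcp_notsent_lowat=16384
```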
>>> got perf?
>> Need to make a new build for that.
>>>> - fairness between TCP streams looks completely fine
>>> A codel will get to long-term fairness pretty fast. Packet captures
>>> from a fq will show much more regular interleaving of packets.
>>>> - there's no big queue buildup, the code never actually drops any packets
>>> A "trick" I have been using to observe codel behavior has been to
>>> enable ECN on server and client, then check in wireshark for
>>> CE-marked (ECN field value 3) packets.
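Concretely, that trick looks something like this (interface name and
filenames are assumptions; run the sysctl on both endpoints):

```shell
# Enable ECN negotiation (1 = use ECN when the peer requests it):
sysctl -w net.ipv4.tcp_ecn=1

# Capture the transfer on the AP's wireless interface:
tcpdump -i wlan0 -w ecn-test.pcap

# CE marks (ECN field value 3) are where codel is marking instead of
# dropping; filter for them in tshark/wireshark:
tshark -r ecn-test.pcap -Y 'ip.dsfield.ecn == 3'
```

If codel is engaging, CE-marked packets should show up under load even
though no drops are observed.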
>> I verified this with printk. The same issue already appears if I have
>> just the fq patch (with the codel patch reverted).
> OK. A four-flow test "should" trigger codel....
> Running out of cpu (or hitting some other bottleneck), without
> loss/marking "should" result in a tcptrace -G and xplot.org of the
> packet capture showing the window continuing to increase....
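The capture-and-plot workflow above can be sketched as follows
(interface and output filenames are assumptions; tcptrace names its
plot files after the connection endpoints):

```shell
# Capture headers during the transfer (128-byte snaplen keeps the
# capture small while preserving TCP headers):
tcpdump -i wlan0 -s 128 -w transfer.pcap

# -G emits all graphs, including the time-sequence graph, one .xpl
# file per connection:
tcptrace -G transfer.pcap

# View the time-sequence plot; the window should keep growing if the
# bottleneck is CPU rather than loss/marking:
xplot.org a2b_tsg.xpl
```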
>>>> - if I put a hack in the fq code to force the hash to a constant value
>>> You could also set "flows" to 1 to keep the hash being generated, but
>>> not actually use it.
>>>> (effectively disabling fq without disabling codel), the problem
>>>> disappears and even multiple streams get proper performance.
>>> Meaning you get 90-110 Mbit/s?
>>> Do you have a "before toke" figure for this platform?
>> It's quite similar.
>>>> Please let me know if you have any ideas.
>>> I am in berlin, packing hardware...
>> - Felix
> Dave Täht
> Let's go make home routers and wifi faster! With better software!
Let's go make home routers and wifi faster! With better software!