[Make-wifi-fast] fq_codel_drop vs a udp flood
leroi.lists at gmail.com
Mon May 2 09:47:40 EDT 2016
On 1 May 2016 at 06:41, Dave Taht <dave.taht at gmail.com> wrote:
> There were a few things on this thread that went by, and I wasn't on
> the ath10k list.
> First up, the udp flood...
>>>> From: ath10k <ath10k-boun... at lists.infradead.org> on behalf of Roman
>>>> Yeryomin <leroi.li... at gmail.com>
>>>> Sent: Friday, April 8, 2016 8:14 PM
>>>> To: ath10k at lists.infradead.org
>>>> Subject: ath10k performance, master branch from 20160407
>>>> I saw that performance patches were committed, so I decided to give
>>>> them a try (using a 4.1 kernel and backports).
>>>> The results are quite disappointing: TCP download (client pov) dropped
>>>> from 750Mbps to ~550Mbps, and UDP shows completely weird behaviour - if
>>>> generating 900Mbps it gives 30Mbps max, if generating 300Mbps it gives
>>>> 250Mbps. Before (with the latest official backports release from
>>>> January) I was able to get 900Mbps.
>>>> Hardware is basically ap152 + qca988x 3x3.
>>>> When running perf top I see that fq_codel_drop eats a lot of cpu.
>>>> Here is the output when running iperf3 UDP test:
>>>> 45.78% [kernel] [k] fq_codel_drop
>>>> 3.05% [kernel] [k] ag71xx_poll
>>>> 2.18% [kernel] [k] skb_release_data
>>>> 2.01% [kernel] [k] r4k_dma_cache_inv
> The udp flood behavior is not "weird". The test is wrong: it fills the
> local queue far faster than the link can drain it, dramatically
> exceeding the bandwidth of the link.
Are you trying to say that generating 250Mbps and getting 250Mbps, while
generating, e.g., 700Mbps gets only 30Mbps, is normal and I should blame
iperf3? Even though before I could get 900Mbps with the same setup?
> The size of the local queue has exceeded anything rational, gentle
> tcp-friendly methods have failed, we're out of configured queue space,
> and as a last ditch move, fq_codel_drop is attempting to reduce the
> backlog via brute force.
So it looks to me like fq_codel is simply broken if it needs half of my
CPU resources.
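To illustrate why that last-ditch path is so expensive under sustained overload, here is a hypothetical sketch (my own illustration, not the actual kernel source) of the "drop from the fattest flow" scan that fq_codel's overlimit path performs each time the packet limit is hit:

```c
/* Hypothetical sketch of fq_codel's overlimit behavior: when the
 * configured packet limit is exceeded, find the flow with the largest
 * byte backlog and drop from it.  The scan is O(number of flows) per
 * drop, so a flood that forces a drop on nearly every enqueue burns
 * CPU in proportion to the overload. */
#include <stddef.h>

struct flow {
    unsigned int backlog;   /* queued bytes in this flow */
};

/* Return the index of the flow with the largest backlog; the caller
 * then drops a packet from that flow. */
static size_t fattest_flow(const struct flow *flows, size_t n)
{
    size_t fat = 0;
    for (size_t i = 1; i < n; i++)
        if (flows[i].backlog > flows[fat].backlog)
            fat = i;
    return fat;
}
```

With the flood forcing this linear scan once per packet, it is consistent with fq_codel_drop dominating the perf profile above.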
> 0) Fix the test
> The udp flood test should seek an operating point roughly equal to
> the bandwidth of the link, to where there is near zero queuing delay,
> and nearly 100% utilization.
> There are several well known methods for an endpoint to seek
> equilibrium - filling the pipe and not the queue - notably the ones
> outlined in this:
> These are a good starting point for further research. :)
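As a minimal sketch of the pacing idea (my own illustration, not from the thread), a sender can hold an operating point near link rate by spacing datagrams at a fixed interval derived from the target rate, instead of sending as fast as the CPU allows:

```c
#include <stdint.h>

/* Hypothetical helper: the inter-packet gap, in nanoseconds, needed to
 * send pkt_bytes-sized datagrams at rate_bps.  Sleeping this long
 * between sends keeps the local queue near-empty while keeping the
 * link near-full. */
static uint64_t pacing_interval_ns(uint64_t rate_bps, uint32_t pkt_bytes)
{
    /* bits per packet, divided by bits per nanosecond */
    return (uint64_t)pkt_bytes * 8ULL * 1000000000ULL / rate_bps;
}
```

At 900Mbps with 1472-byte datagrams this works out to roughly 13 microseconds between packets.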
> Now, a unicast flood test is useful for figuring out how many packets
> can fit in a link (both large and small), for stressing the cpu, and
> for running a box out of memory.
> However -
> I have seen a lot of udp flood tests that are constructed badly.
> Measuring the time to *send* X packets without accounting for the
> queue length is one example. Which iperf3 options were these, exactly?
> Was it running locally, or via a test client connected over ethernet
> (i.e. at local cpu speeds rather than at the network ingress speed)?
iperf3 -c <server_ip> -u -b900M -l1472 -R -t600
server_ip is on the ethernet side, no NAT, minimal system; the client is
a 3x3 MacBook Pro.
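For reference, a back-of-envelope calculation (assuming iperf3's -b targets application-layer bits, which I believe it does) of the packet rate that command offers:

```c
#include <stdint.h>

/* Datagrams per second implied by an iperf3 -b/-l pair: the target
 * bitrate divided by the payload bits per datagram. */
static uint64_t offered_pkts_per_sec(uint64_t rate_bps, uint32_t payload_bytes)
{
    return rate_bps / ((uint64_t)payload_bytes * 8ULL);
}
```

-b900M -l1472 thus asks for roughly 76,000 datagrams per second, every one of which must traverse the qdisc.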