Date: Mon, 2 May 2016 16:47:40 +0300
From:
Roman Yeryomin
To: Dave Taht
Cc: ath10k, "codel@lists.bufferbloat.net", make-wifi-fast@lists.bufferbloat.net
Subject: Re: [Make-wifi-fast] fq_codel_drop vs a udp flood

On 1 May 2016 at 06:41, Dave Taht wrote:
> There were a few things on this thread that went by, and I wasn't on
> the ath10k list
>
> (https://www.mail-archive.com/ath10k@lists.infradead.org/msg04461.html)
>
> first up, udp flood...
>
>>>> From: ath10k on behalf of Roman Yeryomin
>>>> Sent: Friday, April 8, 2016 8:14 PM
>>>> To: ath10k@lists.infradead.org
>>>> Subject: ath10k performance, master branch from 20160407
>>>>
>>>> Hello!
>>>>
>>>> I've seen that performance patches were committed, so I decided to give
>>>> them a try (using a 4.1 kernel and backports).
>>>> The results are quite disappointing: TCP download (client PoV) dropped
>>>> from 750Mbps to ~550Mbps, and UDP shows completely weird behaviour: when
>>>> generating 900Mbps it gives 30Mbps max, when generating 300Mbps it gives
>>>> 250Mbps. Before (with the latest official backports release from January)
>>>> I was able to get 900Mbps.
>>>> Hardware is basically AP152 + QCA988x 3x3.
>>>> When running perf top I see that fq_codel_drop eats a lot of CPU.
>>>> Here is the output when running an iperf3 UDP test:
>>>>
>>>>   45.78%  [kernel]  [k] fq_codel_drop
>>>>    3.05%  [kernel]  [k] ag71xx_poll
>>>>    2.18%  [kernel]  [k] skb_release_data
>>>>    2.01%  [kernel]  [k] r4k_dma_cache_inv
>
> The udp flood behavior is not "weird". The test is wrong. It is so filling
> the local queue as to dramatically exceed the bandwidth on the link.

Are you trying to say that generating 250Mbps and receiving 250Mbps, while
generating, e.g., 700Mbps gives 30Mbps, is normal and I should blame iperf3?
Even if before I could get 900Mbps with the same tools/parameters/hw? Really?

> The size of the local queue has exceeded anything rational, gentle
> tcp-friendly methods have failed, we're out of configured queue space,
> and as a last ditch move, fq_codel_drop is attempting to reduce the
> backlog via brute force.

So it looks to me like fq_codel is just broken if it needs half of my
resources.

> Approaches:
>
> 0) Fix the test
>
> The udp flood test should seek an operating point roughly equal to
> the bandwidth of the link, where there is near zero queuing delay
> and nearly 100% utilization.
>
> There are several well known methods for an endpoint to seek
> equilibrium - filling the pipe and not the queue - notably the ones
> outlined in this:
>
> http://ee.lbl.gov/papers/congavoid.pdf
>
> which are a good starting point for further research. :)
>
> Now, a unicast flood test is useful for figuring out how many packets
> can fit in a link (both large and small), and for tweaking the cpu (or
> running a box out of memory).
>
> However -
>
> I have seen a lot of udp flood tests that are constructed badly.
>
> Measuring time to *send* X packets without counting the queue length
> in the test is one. This was iperf3 with what options, exactly? Running
> locally or via a test client connected via ethernet? (so at local cpu
> speeds, rather than the network ingress speed?)

iperf3 -c -u -b900M -l1472 -R -t600

server_ip is on the ethernet side, no NAT, minimal system; the client is a
3x3 MacBook Pro.

Regards,
Roman