From: Roman Yeryomin
To: Rajkumar Manoharan, Michal Kazior
Cc: Jesper Dangaard Brouer, Felix Fietkau, Jonathan Morton,
 codel@lists.bufferbloat.net, ath10k, make-wifi-fast@lists.bufferbloat.net,
 Rafał Miłecki, netdev@vger.kernel.org, OpenWrt Development List, Dave Taht
Date: Mon, 16 May 2016 11:14:44 +0300
Subject: Re: [Make-wifi-fast] OpenWRT wrong adjustment of fq_codel defaults
 (Was: [Codel] fq_codel_drop vs a udp flood)

On 16 May 2016 at 01:34, Roman Yeryomin wrote:
> On 6 May 2016 at 22:43, Dave Taht wrote:
>> On Fri, May 6, 2016 at 11:56 AM, Roman Yeryomin wrote:
>>> On 6 May 2016 at 21:43, Roman Yeryomin wrote:
>>>> On 6 May 2016 at 15:47, Jesper Dangaard Brouer wrote:
>>>>>
>>>>> I've created an OpenWRT ticket[1] on this issue, as it seems that someone[2]
>>>>> closed Felix's OpenWRT email account (bad choice! emails bouncing).
>>>>> Sounds like OpenWRT and the LEDE project (https://www.lede-project.org/)
>>>>> are in some kind of conflict.
>>>>>
>>>>> OpenWRT ticket [1] https://dev.openwrt.org/ticket/22349
>>>>> [2] http://thread.gmane.org/gmane.comp.embedded.openwrt.devel/40298/focus=40335
>>>>
>>>> OK, so, after porting the patch to the 4.1 OpenWRT kernel and playing a
>>>> bit with fq_codel limits I was able to get 420Mbps UDP like this:
>>>> tc qdisc replace dev wlan0 parent :1 fq_codel flows 16 limit 256
>>>
>>> Forgot to mention, I've reduced drop_batch_size down to 32.
>>
>> 0) Not clear to me if that's the right line; there are 4 wifi queues,
>> and the third one is the BE queue.
>
> That was an example, sorry, should have stated that. I've applied the
> same settings to all 4 queues.
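For reference, "applied the same settings to all 4 queues" amounts to something
like the sketch below (assuming the four mq classes :1 through :4 of wlan0 shown
in the qdisc dumps further down; the exact limit/target values were varied
during testing):

  # sketch: apply the same fq_codel parameters to each of the four wifi queues
  for q in 1 2 3 4; do
      tc qdisc replace dev wlan0 parent :$q fq_codel flows 16 limit 1024 target 80ms
  done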
>
>> That is too low a limit, also, for normal use. And:
>> for the purpose of this particular UDP test, flows 16 is ok, but not
>> ideal.
>
> I played with different combinations; it doesn't make any
> (significant) difference: 20-30Mbps, not more.
> What numbers would you propose?
>
>> 1) What's the tcp number (with a simultaneous ping) with this latest patchset?
>> (I care about tcp performance a lot more than udp floods - surviving a
>> udp flood yes, performance, no)
>
> During the test (both TCP and UDP) it's roughly 5ms on average; when not
> running tests, ~2ms. Actually I'm now wondering if target is working at
> all, because I had the same result with target 80ms.
> So, yes, latency is good, but performance is poor.
>
>> before/after?
>>
>> tc -s qdisc show dev wlan0 during/after results?
>
> during the test:
>
> qdisc mq 0: root
>  Sent 1600496000 bytes 1057194 pkt (dropped 1421568, overlimits 0 requeues 17)
>  backlog 1545794b 1021p requeues 17
> qdisc fq_codel 8001: parent :1 limit 1024p flows 16 quantum 1514
>  target 80.0ms ce_threshold 32us interval 100.0ms ecn
>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
> qdisc fq_codel 8002: parent :2 limit 1024p flows 16 quantum 1514
>  target 80.0ms ce_threshold 32us interval 100.0ms ecn
>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
> qdisc fq_codel 8003: parent :3 limit 1024p flows 16 quantum 1514
>  target 80.0ms ce_threshold 32us interval 100.0ms ecn
>  Sent 1601271168 bytes 1057706 pkt (dropped 1422304, overlimits 0 requeues 17)
>  backlog 1541252b 1018p requeues 17
>  maxpacket 1514 drop_overlimit 1422304 new_flow_count 35 ecn_mark 0
>  new_flows_len 0 old_flows_len 1
> qdisc fq_codel 8004: parent :4 limit 1024p flows 16 quantum 1514
>  target 80.0ms ce_threshold 32us interval 100.0ms ecn
>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
>
> after the test (60sec):
>
> qdisc mq 0: root
>  Sent 3084996052 bytes 2037744 pkt (dropped 2770176, overlimits 0 requeues 28)
>  backlog 0b 0p requeues 28
> qdisc fq_codel 8001: parent :1 limit 1024p flows 16 quantum 1514
>  target 80.0ms ce_threshold 32us interval 100.0ms ecn
>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
> qdisc fq_codel 8002: parent :2 limit 1024p flows 16 quantum 1514
>  target 80.0ms ce_threshold 32us interval 100.0ms ecn
>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
> qdisc fq_codel 8003: parent :3 limit 1024p flows 16 quantum 1514
>  target 80.0ms ce_threshold 32us interval 100.0ms ecn
>  Sent 3084996052 bytes 2037744 pkt (dropped 2770176, overlimits 0 requeues 28)
>  backlog 0b 0p requeues 28
>  maxpacket 1514 drop_overlimit 2770176 new_flow_count 64 ecn_mark 0
>  new_flows_len 0 old_flows_len 1
> qdisc fq_codel 8004: parent :4 limit 1024p flows 16 quantum 1514
>  target 80.0ms ce_threshold 32us interval 100.0ms ecn
>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
>
>> IF you are doing builds for the archer c7v2, I can join in on this... (?)
>
> I'm not, but I have a C7 somewhere, so I can do a build for it and also
> test, so we are on the same page.
>
>> I did do a test of the ath10k "before": fq_codel *never engaged*, and
>> tcp-induced latencies under load cracked 600ms, while staying flat
>> (20ms) at 100mbit (not the same patches you are testing), on x86.
>> I have got tcp 300Mbit out of an osx box, similar latency, and have
>> yet to get anything more on anything I currently have, before/after
>> patchsets.
>>
>> I'll go add flooding to the tests; I just finished a series comparing
>> two different speed stations and life was good on that.
>>
>> "before" - fq_codel never engages, we see seconds of latency under load.
>>
>> root@apu2:~# tc -s qdisc show dev wlp4s0
>> qdisc mq 0: root
>>  Sent 8570563893 bytes 6326983 pkt (dropped 0, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>> qdisc fq_codel 0: parent :1 limit 10240p flows 1024 quantum 1514
>>  target 5.0ms interval 100.0ms ecn
>>  Sent 2262 bytes 17 pkt (dropped 0, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>>  new_flows_len 0 old_flows_len 0
>> qdisc fq_codel 0: parent :2 limit 10240p flows 1024 quantum 1514
>>  target 5.0ms interval 100.0ms ecn
>>  Sent 220486569 bytes 152058 pkt (dropped 0, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>>  maxpacket 18168 drop_overlimit 0 new_flow_count 1 ecn_mark 0
>>  new_flows_len 0 old_flows_len 1
>> qdisc fq_codel 0: parent :3 limit 10240p flows 1024 quantum 1514
>>  target 5.0ms interval 100.0ms ecn
>>  Sent 8340546509 bytes 6163431 pkt (dropped 0, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>>  maxpacket 68130 drop_overlimit 0 new_flow_count 120050 ecn_mark 0
>>  new_flows_len 1 old_flows_len 3
>> qdisc fq_codel 0: parent :4 limit 10240p flows 1024 quantum 1514
>>  target 5.0ms interval 100.0ms ecn
>>  Sent 9528553 bytes 11477 pkt (dropped 0, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>>  maxpacket 66 drop_overlimit 0 new_flow_count 1 ecn_mark 0
>>  new_flows_len 1 old_flows_len 0
>>
>>>> This is certainly better than 30Mbps but still more than two times
>>>> less than before (900).
>>
>> The number that I still am not sure we got is that you were sending
>> 900mbit udp and receiving 900mbit on the prior tests?
>
> 900 was sending, AP POV (wifi client is downloading)
>
>>>> TCP also improved a little (550 to ~590).
>>
>> The limit is probably a bit low, also. You might want to try target
>> 20ms as well.
>
> I've tried limit up to 1024 and target up to 80ms.
>
>>>>
>>>> Felix, others, do you want to see the ported patch, maybe I did something wrong?
>>>> Doesn't look like it will save ath10k from the performance regression.
>>
>> what was tcp "before"? (I'm sorry, such a long thread)
>
> 750Mbps

Michal, after retesting with your patch (sorry, it was late yesterday and I
confused the compat-wireless archives) I saw the difference.

So the progress looks like this (all with fq_codel flows 16 limit 1024 target 20ms):

no patches: 380Mbps UDP, 550 TCP
Eric's (fq_codel drop) patch: 420Mbps UDP, 590 TCP (+40Mbps), latency 5-6ms during test
Michal's (improve tx scheduling) patch: 580Mbps UDP, 660 TCP, latency up to 30-40ms during test
after Rajkumar's proposal to "try without registering wake_tx_queue callback": 820Mbps UDP, 690 TCP

So, very close to "as before": 900Mbps UDP, 750 TCP.
But still, I was expecting performance improvements from the latest ath10k
code, not regressions. I know the hardware is capable of 800Mbps TCP, which
is what I'm targeting.

Regards,
Roman

p.s. sorry for the confusion