From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 1 Apr 2015 08:13:00 -0700
From: Dave Taht
To: Isaac Konikoff
Cc: codel, bloat, cerowrt-devel, Richard Smith
Subject: Re: [Bloat] [Cerowrt-devel] capturing packets and applying qdiscs
In-Reply-To: <551AED92.2030105@candelatech.com>
References: <55147C8A.4030804@candelatech.com> <55157250.6030208@gmail.com>
 <5515A8DF.8050902@candelatech.com> <551AED92.2030105@candelatech.com>
List-Id: General list for discussing Bufferbloat <bloat@lists.bufferbloat.net>

Dear Isaac:

The core part you missed here is that this is for the APs only: you want a
per-dst filter in place on those, to improve aggregation, in addition to
the increased quantum.

I note that you are now - in a few short weeks - ahead of me on testing
wifi! I had put down this line of inquiry a year ago, figuring we'd get
around to native per-station queues in the driver, and focused on fixing
the uplinks (and raising money) instead.

Anyway, I just dug up what looks to be the right filter, but I could use a
sanity test on it, because try as I might I can't seem to get any
statistics back from tc -s filter. Example:

tc -s filter show dev gw10 parent 802:
filter protocol all pref 97 flow
filter protocol all pref 97 flow handle 0x3 hash keys dst divisor 1024 baseclass 802:1

which may mean it is not being installed right (maybe it needs to be
attached to 1:1,2,3,4), and I am away from my lab til thursday, with my
darn test driver box stuck behind nat.....

(if someone could run a few rrul and rtt_fair tests through cerowrt with
this filter in place or not, against a locally fast host, that would be
great)

I stuck the hacky wifi-ap-only script up at:

http://snapon.lab.bufferbloat.net/~d/debloat_ap.sh

(usage: IFACE=whatever ./debloat_ap.sh)

or, more simply:

#!/bin/sh
IFACE=vap1
QDISC=fq_codel
FQ_OPTS="quantum 4542 target 30ms interval 300ms" # 1514*3

wifi() {
	tc qdisc add dev $IFACE handle 1 root mq
	tc qdisc add dev $IFACE parent 1:1 handle 801 $QDISC $FQ_OPTS noecn
	tc qdisc add dev $IFACE parent 1:2 handle 802 $QDISC $FQ_OPTS
	tc qdisc add dev $IFACE parent 1:3 handle 803 $QDISC $FQ_OPTS
	tc qdisc add dev $IFACE parent 1:4 handle 804 $QDISC $FQ_OPTS noecn
	# switch to a per-dst flow hash
	tc filter add dev $IFACE parent 801: handle 3 protocol all prio 97 flow hash keys dst divisor 1024
	tc filter add dev $IFACE parent 802: handle 3 protocol all prio 97 flow hash keys dst divisor 1024
	tc filter add dev $IFACE parent 803: handle 3 protocol all prio 97 flow hash keys dst divisor 1024
	tc filter add dev $IFACE parent 804: handle 3 protocol all prio 97 flow hash keys dst divisor 1024
}

wifi

On Tue, Mar 31, 2015 at 11:55 AM, Isaac Konikoff wrote:
> Thanks for the feedback...I've been trying out the following based on
> debloat.sh:
>
> The ath10k access point has two interfaces for these tests:
>
> 1. virtual access point - vap1
> tc qdisc add dev vap1 handle 1 root mq
> tc qdisc add dev vap1 parent 1:1 fq_codel target 30ms quantum 4500 noecn
> tc qdisc add dev vap1 parent 1:2 fq_codel target 30ms quantum 4500
> tc qdisc add dev vap1 parent 1:3 fq_codel target 30ms quantum 4500
> tc qdisc add dev vap1 parent 1:4 fq_codel target 30ms quantum 4500 noecn
>
> 2. ethernet - eth1
> tc qdisc add dev eth1 root fq_codel
>
> For the netperf-wrapper tests, the 4 stations in use:
> tc qdisc add dev sta101 root fq_codel target 30ms quantum 300
> tc qdisc add dev sta102 root fq_codel target 30ms quantum 300
> tc qdisc add dev sta103 root fq_codel target 30ms quantum 300
> tc qdisc add dev sta104 root fq_codel target 30ms quantum 300
>
> I'm planning to re-run with these settings and then again at a lower mcs.
>
> On 03/27/2015 08:31 PM, Dave Taht wrote:
>
> wonderful dataset isaac! A lot to learn there and quite a bit I can
> explain, which might take me days to do with graphs and the like.
>
> But it's late, and unless you are planning on doing another test run I
> will defer.
>
> It is mildly easier to look at this stuff in bulk, so I did a
> wget -l 1 -m http://candelatech.com/downloads/wifi-reports/trial1/ on
> the data.
>
> Quick top level notes, rather than writing a massive blog entry with
> graphs....
>
> -1) These are totally artificial tests, stressing out queue management.
> There are no winners or losers per se, only data. Someday we can get to
> winners and losers, but we have a zillion interrelated variables to
> isolate and fix first.
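As a quick sanity check on the quantum arithmetic in debloat_ap.sh above: the "# 1514*3" comment sizes the fq_codel quantum to three full-length Ethernet frames, so one dequeue can hand the driver enough packets for a small aggregate. A sketch (variable names are illustrative, not from the script):

```shell
# Sanity-check the "quantum 4542" figure from debloat_ap.sh: three
# maximum-length Ethernet frames (1500-byte MTU + 14-byte header = 1514).
FRAME=1514
FRAMES_PER_DEQUEUE=3
QUANTUM=$((FRAME * FRAMES_PER_DEQUEUE))
echo "$QUANTUM"   # prints 4542
```

The station-side suggestion of quantum 300 goes the other way: with a quantum well below one MTU, a full-size packet has to wait out several DRR rounds, which favors sparse flows over bulk ones.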
> So consider this data a *baseline* for what wifi - at the highest rate
> possible - looks like today - and I'd dearly like some results that are
> below mcs4 on average as a baseline too....
>
> Typical wifi traffic looks nothing like rrul, for example. rrul vs
> rrul_be is useful for showing how badly 802.11e queues actually work
> today, however.
>
> 0) Pretty hard to get close to the underlying capability of the mac,
> isn't it? Plenty of problems besides queue management could exist,
> including running out of cpu....
>
> 1) SFQ has a default packet limit of 128 packets, which does not appear
> to be enough at these speeds. Bump it to 1000 for a more direct
> comparison to the other qdiscs.
>
> You will note a rather big difference in cwnd on your packet captures,
> and bandwidth usage more similar to pfifo_fast. I would expect, anyway.
>
> 2) I have generally felt that txops needed more of a "packing" approach
> to wedging packets into a txop rather than a pure sfq or drr approach,
> as losses tend to be bursty, and maximizing the number of flows in a
> txop is a goodness. SFQ packs better than DRR.
>
> That said, there is so much compensation stuff (like retries) getting
> in the way right now...
>
> 3) The SFQ results being better than the fq_codel results in several
> cases are also due in part to an interaction of the drr quantum and a
> target not high enough to compensate for wifi jitter.
>
> But in looking at SFQ you can't point to a lower latency and say that's
> "better" when you also have a much lower achieved bandwidth.
>
> So I would appreciate a run where the stations had an fq_codel quantum
> of 300 and a target of 30ms. APs, on the other hand, would do better
> with a larger (incalculable, but say 4500) quantum, a similar target,
> and a per-dst filter rather than the full 5-tuple.
>
> On Fri, Mar 27, 2015 at 12:00 PM, Isaac Konikoff wrote:
>>
>> Thanks for pointing out horst.
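Re-running with point 1) above applied - SFQ's limit raised from the default 128 packets to 1000 - could look like this on the four stations. This is a dry-run sketch that only prints the tc commands (the sta101..sta104 device names are taken from Isaac's setup; actually applying them needs root on that box):

```shell
# Dry run: print the commands that would rebuild each station qdisc as
# SFQ with its packet limit raised from the default 128 to 1000.
for STA in sta101 sta102 sta103 sta104; do
    echo tc qdisc replace dev "$STA" root sfq limit 1000
done
```

Dropping the echo (and running as root) applies the changes; tc -s qdisc show dev sta101 should then report the new limit.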
>>
>> I've been trying wireshark io graphs such as:
>> retry comparison: wlan.fc.retry==0 (line) to wlan.fc.retry==1 (impulse)
>> beacon delays: wlan.fc.type_subtype==0x08 AVG frame.time_delta_displayed
>>
>> I've uploaded my pcap files, netperf-wrapper results and lanforge script
>> reports, which have some aggregate graphs below all of the pie charts.
>> The pcap files with 64sta in the name correspond to the script reports.
>>
>> candelatech.com/downloads/wifi-reports/trial1
>>
>> I'll upload more once I try the qdisc suggestions, and I'll generate
>> comparison plots.
>>
>> Isaac
>>
>> On 03/27/2015 10:21 AM, Aaron Wood wrote:
>>
>> On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith wrote:
>>>
>>> Using horst I've discovered that the major reason our WiFi network
>>> sucks is because 90% of the packets are sent at the 6mbit rate. Most
>>> of the rest show up in the 12 and 24mbit zone, with a tiny fraction of
>>> them using the higher MCS rates.
>>>
>>> Trying to couple the radiotap info with the packet decryption to
>>> discover the sources of those low-bit-rate packets is where I've been
>>> running into difficulty. I can see the what, but I haven't had much
>>> luck on the why.
>>>
>>> I totally agree with you that tools other than wireshark for analyzing
>>> this seem to be non-existent.
>>
>> Using the following filter in Wireshark should get you all that 6Mbps
>> traffic:
>>
>> radiotap.datarate == 6
>>
>> Then it's pretty easy to dig into what those are (by wifi frame-type,
>> at least). At my network, that's mostly broadcast traffic (AP beacons
>> and whatnot), as the corporate wifi has been set to use that rate as
>> the broadcast rate.
>>
>> without capturing the WPA exchange, the contents of the data frames
>> can't be seen, of course.
>>
>> -Aaron
>
> --
> Dave Täht
> Let's make wifi fast, less jittery and reliable again!
> https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb

-- 
Dave Täht
Let's make wifi fast, less jittery and reliable again!

https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb