[Bloat] [Cerowrt-devel] capturing packets and applying qdiscs

Dave Taht dave.taht at gmail.com
Wed Apr 1 11:13:00 EDT 2015


Dear Isaac:

The core part you missed here is that this is for the APs only: on those
you want a per-dst filter in place to improve aggregation, in addition to
the increased quantum.

I note that you are now - in a few short weeks - ahead of me on testing
wifi! I had put down this line of inquiry a year ago, figuring that we'd
get around to native per-station queues in the driver, and focused on
fixing the uplinks (and raising money).

Anyway, I just dug up what looks to be the right filter, but I could use
a sanity check on it, because try as I might I can't seem to get any
statistics back from tc -s filter. For example:

tc -s filter show dev gw10 parent 802:
filter protocol all pref 97 flow
filter protocol all pref 97 flow handle 0x3 hash keys dst divisor 1024
baseclass 802:1

which may mean it is not being installed right (maybe it needs to be
attached to 1:1, 1:2, 1:3, 1:4). I am away from my lab till Thursday, with
my darn test driver box stuck behind NAT... If someone could run a few
rrul and rtt_fair tests through cerowrt against a locally fast host, with
and without this filter in place, that would be great.
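
Something along these lines is what I am after (the hostname is a
placeholder and the flags are from memory, so treat this as a sketch):

netperf-wrapper -H <some-fast-local-host> -l 60 rrul

once with the filters in place and once without, for comparison.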

I stuck the hacky wifi-AP-only script up at:

http://snapon.lab.bufferbloat.net/~d/debloat_ap.sh
(usage: IFACE=whatever ./debloat_ap.sh)

Or, more simply:
#!/bin/sh

IFACE=vap1
QDISC=fq_codel
FQ_OPTS="quantum 4542 target 30ms interval 300ms" # 1514*3

wifi() {
        tc qdisc add dev $IFACE handle 1 root mq
        tc qdisc add dev $IFACE parent 1:1 handle 801 $QDISC $FQ_OPTS noecn
        tc qdisc add dev $IFACE parent 1:2 handle 802 $QDISC $FQ_OPTS
        tc qdisc add dev $IFACE parent 1:3 handle 803 $QDISC $FQ_OPTS
        tc qdisc add dev $IFACE parent 1:4 handle 804 $QDISC $FQ_OPTS noecn
        # switch to a per-dst filter
        tc filter add dev $IFACE parent 801: handle 3 protocol all \
                prio 97 flow hash keys dst divisor 1024
        tc filter add dev $IFACE parent 802: handle 3 protocol all \
                prio 97 flow hash keys dst divisor 1024
        tc filter add dev $IFACE parent 803: handle 3 protocol all \
                prio 97 flow hash keys dst divisor 1024
        tc filter add dev $IFACE parent 804: handle 3 protocol all \
                prio 97 flow hash keys dst divisor 1024
}

wifi
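
If anyone does give this a spin, dumping the filter and qdisc stats on all
four leaf handles before and after a run would tell us whether the flow
classifier is attached and actually being hit; a sketch, untested from here:

IFACE=vap1
tc -s qdisc show dev $IFACE
for h in 801 802 803 804; do
        tc -s filter show dev $IFACE parent ${h}:
done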

On Tue, Mar 31, 2015 at 11:55 AM, Isaac Konikoff
<konikofi at candelatech.com> wrote:
> Thanks for the feedback...I've been trying out the following based on
> debloat.sh:
>
> The ath10k access point has two interfaces for these tests:
> 1. virtual access point - vap1
> tc qdisc add dev vap1 handle 1 root mq
> tc qdisc add dev vap1 parent 1:1 fq_codel target 30ms quantum 4500 noecn
> tc qdisc add dev vap1 parent 1:2 fq_codel target 30ms quantum 4500
> tc qdisc add dev vap1 parent 1:3 fq_codel target 30ms quantum 4500
> tc qdisc add dev vap1 parent 1:4 fq_codel target 30ms quantum 4500 noecn
>
> 2. ethernet - eth1
> tc qdisc add dev eth1 root fq_codel
>
> For the netperf-wrapper tests, the 4 stations in use:
> tc qdisc add dev sta101 root fq_codel target 30ms quantum 300
> tc qdisc add dev sta102 root fq_codel target 30ms quantum 300
> tc qdisc add dev sta103 root fq_codel target 30ms quantum 300
> tc qdisc add dev sta104 root fq_codel target 30ms quantum 300
>
> I'm planning to re-run with these settings and then again at a lower MCS.
>
>
>
>
> On 03/27/2015 08:31 PM, Dave Taht wrote:
>
> Wonderful dataset, Isaac! There is a lot to learn there and quite a bit I can
> explain, which might take me days to do with graphs and the like.
>
> But it's late, and unless you are planning on doing another test run I will
> defer.
>
> It is mildly easier to look at this stuff in bulk, so I did a wget -m -l 1
> http://candelatech.com/downloads/wifi-reports/trial1/ on the data.
>
> Quick top level notes rather than write a massive blog with graph entry....
>
> -1) These are totally artificial tests, stressing out queue management. There
> are no winners or losers per se, only data. Someday we can get to winners and
> losers, but we have a zillion interrelated variables to isolate and fix first.
> So consider this data a *baseline* for what wifi - at the highest rate
> possible - looks like today - and I'd dearly like some results that average
> below MCS4 as a baseline as well...
>
> Typical wifi traffic looks nothing like rrul, for example. rrul vs rrul_be
> is useful for showing how badly 802.11e queues actually work today, however.
>
> 0) Pretty hard to get close to the underlying capability of the MAC, isn't
> it? Plenty of problems besides queue management could exist, including
> running out of CPU...
>
> 1) SFQ has a default limit of 127 packets, which does not appear to be
> enough at these speeds. Bump it to 1000 for a more direct comparison to the
> other qdiscs.
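>
> Something like this, roughly (sta101 is just a placeholder interface name):
>
> tc qdisc replace dev sta101 root sfq limit 1000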
>
> You will note a rather big difference in cwnd on your packet captures, and
> bandwidth usage more similar to pfifo_fast. That is what I would expect,
> anyway.
>
> 2) I have generally felt that txops need more of a "packing" approach to
> wedging packets into a txop rather than a pure SFQ or DRR approach, as losses
> tend to be bursty, and maximizing the number of flows in a txop is a goodness.
> SFQ packs better than DRR.
>
> That said, there is so much compensating machinery (like retries) getting in
> the way right now...
>
> 3) The SFQ results being better than the fq_codel results in several cases
> are also due in part to an interaction between the DRR quantum and a target
> that is not high enough to compensate for wifi jitter.
>
> But in looking at SFQ you can't point to a lower latency and say that it is
> "better" when you also have a much lower achieved bandwidth.
>
> So I would appreciate a run where the stations had fq_codel with quantum 300
> and target 30ms. The APs, on the other hand, would do better with a larger
> (incalculable, but say 4500) quantum, a similar target, and a per-dst filter
> rather than the full 5-tuple hash.
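>
> Concretely, for the stations, something along the lines of (sta101 as a
> placeholder interface name):
>
> tc qdisc replace dev sta101 root fq_codel quantum 300 target 30ms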
>
>
>
> On Fri, Mar 27, 2015 at 12:00 PM, Isaac Konikoff <konikofi at candelatech.com>
> wrote:
>>
>> Thanks for pointing out horst.
>>
>> I've been trying wireshark io graphs such as:
>> retry comparison:  wlan.fc.retry==0 (line) to wlan.fc.retry==1 (impulse)
>> beacon delays:  wlan.fc.type_subtype==0x08 AVG frame.time_delta_displayed
>>
>> I've uploaded my pcap files, netperf-wrapper results and lanforge script
>> reports which have some aggregate graphs below all of the pie charts. The
>> pcap files with 64sta in the name correspond to the script reports.
>>
>> candelatech.com/downloads/wifi-reports/trial1
>>
>> I'll upload more once I try the qdisc suggestions and I'll generate
>> comparison plots.
>>
>> Isaac
>>
>>
>> On 03/27/2015 10:21 AM, Aaron Wood wrote:
>>
>>
>>
>> On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith <smithbone at gmail.com>
>> wrote:
>>>
>>> Using horst I've discovered that the major reason our WiFi network sucks
>>> is that 90% of the packets are sent at the 6 Mbit rate.  Most of the rest
>>> show up in the 12 and 24 Mbit zone, with a tiny fraction of them using the
>>> higher MCS rates.
>>>
>>> Trying to couple the radiotap info with the packet decryption to discover
>>> the sources of those low-bit rate packets is where I've been running into
>>> difficulty.  I can see the what but I haven't had much luck on the why.
>>>
>>> I totally agree with you that tools other than wireshark for analyzing
>>> this seem to be non-existent.
>>
>>
>> Using the following filter in Wireshark should get you all of that 6 Mbps
>> traffic:
>>
>> radiotap.datarate == 6
>>
>> Then it's pretty easy to dig into what those frames are (by wifi frame type,
>> at least).  On my network, that's mostly broadcast traffic (AP beacons and
>> whatnot), as the corporate wifi has been set to use that rate as the
>> broadcast rate.
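>>
>> A command-line way to get a quick breakdown of the same thing (a sketch;
>> the capture filename is a placeholder):
>>
>> tshark -r capture.pcap -Y "radiotap.datarate == 6" -T fields \
>>     -e wlan.fc.type_subtype | sort | uniq -c | sort -rn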
>>
>> Without capturing the WPA exchange, the contents of the data frames can't
>> be seen, of course.
>>
>> -Aaron
>>
>>
>>
>
>
>
> --
> Dave Täht
> Let's make wifi fast, less jittery and reliable again!
>
> https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb
>
>
>



-- 
Dave Täht
Let's make wifi fast, less jittery and reliable again!

https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb


