Development issues regarding the cerowrt test router project
From: Isaac Konikoff <konikofi@candelatech.com>
To: Dave Taht <dave.taht@gmail.com>
Cc: codel <codel@lists.bufferbloat.net>,
	bloat <bloat@lists.bufferbloat.net>,
	cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
Date: Tue, 31 Mar 2015 11:55:14 -0700	[thread overview]
Message-ID: <551AED92.2030105@candelatech.com> (raw)
In-Reply-To: <CAA93jw7cUy8L260Ankj9icMFONchKXfB3put2=PWvfSu2YpHrg@mail.gmail.com>


Thanks for the feedback...I've been trying out the following based on 
debloat.sh:

The ath10k access point has two interfaces for these tests:
1. virtual access point - vap1
tc qdisc add dev vap1 handle 1 root mq
tc qdisc add dev vap1 parent 1:1 fq_codel target 30ms quantum 4500 noecn
tc qdisc add dev vap1 parent 1:2 fq_codel target 30ms quantum 4500
tc qdisc add dev vap1 parent 1:3 fq_codel target 30ms quantum 4500
tc qdisc add dev vap1 parent 1:4 fq_codel target 30ms quantum 4500 noecn

2. ethernet - eth1
tc qdisc add dev eth1 root fq_codel
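
I haven't yet folded in the per-dst hashing Dave suggested for the AP side.
A rough sketch of what I think that looks like with the flow classifier is
below - untested here, and the 101:-104: handles are just placeholders I'd
add so the filters have a qdisc to attach to (divisor 1024 to match
fq_codel's default flow count):

tc qdisc add dev vap1 handle 1 root mq
tc qdisc add dev vap1 parent 1:1 handle 101: fq_codel target 30ms quantum 4500 noecn
tc filter add dev vap1 parent 101: protocol all prio 10 handle 3 flow hash keys dst divisor 1024
(and likewise for parents 1:2-1:4 with handles 102:-104:)

I've been checking that the settings took effect with:

tc -s qdisc show dev vap1
tc -s qdisc show dev eth1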

For the netperf-wrapper tests, the 4 stations in use are configured as:
tc qdisc add dev sta101 root fq_codel target 30ms quantum 300
tc qdisc add dev sta102 root fq_codel target 30ms quantum 300
tc qdisc add dev sta103 root fq_codel target 30ms quantum 300
tc qdisc add dev sta104 root fq_codel target 30ms quantum 300
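
The same station setup can be expressed as a loop (equivalent to the four
lines above), for anyone re-creating this:

for dev in sta101 sta102 sta103 sta104; do
    tc qdisc replace dev $dev root fq_codel target 30ms quantum 300
done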

I'm planning to re-run with these settings and then again at a lower MCS.
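
For the lower-MCS pass, one option is to cap the rate mask with iw, roughly
like this (syntax per iw's "set bitrates" help; I still need to confirm that
ath10k rate control honors it, and the interface name and MCS cut-off below
are just placeholders):

iw dev sta101 set bitrates ht-mcs-5 0 1 2 3 4

or the vht-mcs-5 form (e.g. 1:0-4) if the link is running VHT rates.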



On 03/27/2015 08:31 PM, Dave Taht wrote:
> Wonderful dataset, Isaac! A lot to learn there and quite a bit I can 
> explain, which might take me days to do with graphs and the like.
>
> But it's late, and unless you are planning on doing another test run I 
> will defer.
>
> It is mildly easier to look at this stuff in bulk, so I did a
> wget -l 1 -m http://candelatech.com/downloads/wifi-reports/trial1/ on the data.
>
> Quick top-level notes, rather than writing a massive blog entry with
> graphs....
>
> -1) These are totally artificial tests, stressing out queue 
> management. There are no
> winners or losers per se, only data. Someday we can get to winners 
> and losers,
> but we have a zillion interrelated variables to isolate and fix first. 
> So consider this data a *baseline* for what wifi - at the highest rate 
> possible - looks like today. I'd also dearly like some results that 
> average below MCS4 as a baseline....
>
> Typical wifi traffic looks nothing like rrul, for example. rrul vs 
> rrul_be is useful for showing how badly 802.11e queues actually work 
> today, however.
>
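
The rrul / rrul_be runs in these reports were invoked along these lines
(the netserver host below is a placeholder for the actual machine I used):

netperf-wrapper -H <netserver-host> -l 60 rrul
netperf-wrapper -H <netserver-host> -l 60 rrul_be
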
> 0) Pretty hard to get close to the underlying capability of the mac, 
> isn't it? Plenty of problems besides queue management could exist, 
> including running out of cpu....
>
> 1) SFQ has a default limit of 128 packets, which does not appear 
> to be enough at these speeds. Bump it to 1000 for a more direct 
> comparison to the other qdiscs.
>
> You will note a rather big difference in cwnd in your packet captures, 
> and bandwidth usage more similar to pfifo_fast, I would expect, anyway.
>
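
Noted on the SFQ limit - for the next pass I'll bump it on the stations,
something like this (assuming the newer SFQ accepts a limit above its
default):

tc qdisc replace dev sta101 root sfq limit 1000

and the same for sta102-sta104, so the comparison against fq_codel's much
larger default limit (10240 packets) is more direct.
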
> 2) I have generally felt that txops needed more of a "packing" 
> approach to wedging packets into a txop rather than a pure sfq or drr 
> approach, as losses tend to be bursty, and maximizing the number of 
> flows in a txop is a goodness.  SFQ packs better than DRR.
>
> That said, there is so much compensation stuff (like retries) getting 
> in the way right now...
>
> 3) The SFQ results being better than the fq_codel results in several 
> cases are also due in part to an interaction of the DRR quantum and a 
> target that is not high enough to compensate for wifi jitter.
>
> But in looking at SFQ you can't point to a lower latency and say 
> that's "better" when you also have a much lower achieved bandwidth.
>
> So I would appreciate a run where the stations had an fq_codel quantum of 
> 300 and a target of 30ms. APs, on the other hand, would do better with a 
> larger (incalculable, but say 4500) quantum, a similar target, and a 
> per-dst filter rather than the full 5-tuple.
>
>
>
> On Fri, Mar 27, 2015 at 12:00 PM, Isaac Konikoff 
> <konikofi@candelatech.com> wrote:
>
>     Thanks for pointing out horst.
>
>     I've been trying wireshark io graphs such as:
>     retry comparison:  wlan.fc.retry==0 (line) to wlan.fc.retry==1
>     (impulse)
>     beacon delays:  wlan.fc.type_subtype==0x08 AVG
>     frame.time_delta_displayed
>
>     I've uploaded my pcap files, netperf-wrapper results and lanforge
>     script reports which have some aggregate graphs below all of the
>     pie charts. The pcap files with 64sta in the name correspond to
>     the script reports.
>
>     http://candelatech.com/downloads/wifi-reports/trial1
>
>     I'll upload more once I try the qdisc suggestions and I'll
>     generate comparison plots.
>
>     Isaac
>
>
>     On 03/27/2015 10:21 AM, Aaron Wood wrote:
>>
>>
>>     On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith
>>     <smithbone@gmail.com> wrote:
>>
>>         Using horst I've discovered that the major reason our WiFi
>>         network sucks is because 90% of the packets are sent at the
>>         6mbit rate.  Most of the rest show up in the 12 and 24mbit
>>         zone with a tiny fraction of them using the higher MCS rates.
>>
>>         Trying to couple the radiotap info with the packet decryption
>>         to discover the sources of those low-bit rate packets is
>>         where I've been running into difficulty.  I can see the what
>>         but I haven't had much luck on the why.
>>
>>         I totally agree with you that tools other than wireshark for
>>         analyzing this seem to be non-existent.
>>
>>
>>     Using the following filter in Wireshark should get you all that
>>     6Mbps traffic:
>>
>>     radiotap.datarate == 6
>>
>>     Then it's pretty easy to dig into what those are (by wifi
>>     frame-type, at least). On my network, that's mostly broadcast
>>     traffic (AP beacons and whatnot), as the corporate wifi has been
>>     set to use that rate as the broadcast rate.
>>
>>     Without capturing the WPA exchange, the contents of the data
>>     frames can't be seen, of course.
>>
>>     -Aaron
>
>
>
>
>
> -- 
> Dave Täht
> Let's make wifi fast, less jittery and reliable again!
>
> https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb




