<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Thanks for the feedback...I've been trying out the following based
on debloat.sh:<br>
<br>
The ath10k access point has two interfaces for these tests:<br>
1. virtual access point - vap1<br>
tc qdisc add dev vap1 handle 1 root mq<br>
tc qdisc add dev vap1 parent 1:1 fq_codel target 30ms quantum 4500
noecn<br>
tc qdisc add dev vap1 parent 1:2 fq_codel target 30ms quantum 4500<br>
tc qdisc add dev vap1 parent 1:3 fq_codel target 30ms quantum 4500<br>
tc qdisc add dev vap1 parent 1:4 fq_codel target 30ms quantum 4500
noecn<br>
<br>
2. ethernet - eth1<br>
tc qdisc add dev eth1 root fq_codel<br>
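<br>
As a quick sanity check that the fq_codel children above actually attached under the mq classes (and to watch their drop/mark counts during a run), the per-queue stats can be viewed with:<br>
tc -s qdisc show dev vap1<br>
tc -s qdisc show dev eth1<br>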
<br>
For the netperf-wrapper tests, the 4 stations in use:<br>
tc qdisc add dev sta101 root fq_codel target 30ms quantum 300<br>
tc qdisc add dev sta102 root fq_codel target 30ms quantum 300<br>
tc qdisc add dev sta103 root fq_codel target 30ms quantum 300<br>
tc qdisc add dev sta104 root fq_codel target 30ms quantum 300<br>
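<br>
Since the four station commands differ only in the interface name, the same thing can be done as a loop (a sketch, assuming the sta10x naming stays as above; qdisc replace so it is safe to re-run):<br>
# apply identical fq_codel settings to each station interface<br>
for dev in sta101 sta102 sta103 sta104; do<br>
tc qdisc replace dev "$dev" root fq_codel target 30ms quantum 300<br>
done<br>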
<br>
I'm planning to re-run with these settings, and then again at a
lower MCS.<br>
<br>
<br>
<br>
<div class="moz-cite-prefix">On 03/27/2015 08:31 PM, Dave Taht
wrote:<br>
</div>
<blockquote
cite="mid:CAA93jw7cUy8L260Ankj9icMFONchKXfB3put2=PWvfSu2YpHrg@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>
<div>
<div>
<div>
<div>
<div>Wonderful dataset, Isaac! A lot to learn there and
quite a bit I can explain, which might take me days
to do properly with graphs and the like.<br>
<br>
</div>
But it's late, and unless you are planning on doing
another test run I will defer.<br>
<br>
</div>
It is mildly easier to look at this stuff in bulk, so I
did a wget -m -l 1 <a moz-do-not-send="true"
href="http://candelatech.com/downloads/wifi-reports/trial1/"
target="_blank">http://candelatech.com/downloads/wifi-reports/trial1/</a>
on the data.<br>
<br>
</div>
Quick top-level notes, rather than writing a massive blog
entry with graphs....<br>
</div>
<div><br>
</div>
<div>-1) These are totally artificial tests, stressing out
queue management. There are no winners or losers per se,
only data. Someday we can get to winners and losers, but we
have a zillion interrelated variables to isolate and fix
first. So consider this data a *baseline* for what wifi - at
the highest rate possible - looks like today - and I'd
dearly like some results averaging below MCS4, also as a
baseline....<br>
</div>
<div><br>
Typical wifi traffic looks nothing like rrul, for example.
rrul vs rrul_be is useful for showing how badly 802.11e
queues actually work today, however.<br>
<br>
</div>
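<div>For reference, the two tests are invoked roughly like this with
netperf-wrapper; the server host and the 60-second length below are
placeholders rather than the exact settings used in these runs:<br>
netperf-wrapper -H $SERVER -l 60 rrul<br>
netperf-wrapper -H $SERVER -l 60 rrul_be<br>
</div>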
<div>0) Pretty hard to get close to the underlying
capability of the MAC, isn't it? Plenty of problems
besides queue management could exist, including running
out of CPU....<br>
</div>
<br>
1) SFQ has a default packet limit of 128 packets, which does
not appear to be enough at these speeds. Bump it to 1000 for
a more direct comparison to the other qdiscs.<br>
<br>
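Something along these lines, on whichever interface carried SFQ in
these runs (sta101 below is only an illustration):<br>
tc qdisc replace dev sta101 root sfq limit 1000<br>
<br>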
</div>
<div>I would expect you will then see a rather big difference in
cwnd in your packet captures, and bandwidth usage more similar
to pfifo_fast.<br>
<br>
</div>
2) I have generally felt that txops need more of a "packing"
approach to wedging packets into a txop, rather than a pure SFQ
or DRR approach: losses tend to be bursty, and maximizing
the number of flows in a txop is a good thing. SFQ packs better
than DRR.<br>
<br>
That said, there is so much compensating machinery (like retries)
getting in the way right now...<br>
<br>
</div>
3) The SFQ results being better than the fq_codel results in
several cases are also due in part to an interaction between the
DRR quantum and a target that is not high enough to compensate
for wifi jitter.<br>
<br>
But in looking at SFQ, you can't point to a lower latency and call
it "better" when you also have much lower achieved bandwidth.<br>
<br>
So I would appreciate a run where the stations had an fq_codel
quantum of 300 and a target of 30ms. APs, on the other hand, would
be better off with a larger (incalculable, but say 4500) quantum, a
similar target, and a per-dst filter rather than the full 5-tuple.<br>
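<br>
For the per-dst part, one rough, untested sketch: give the per-queue
fq_codel an explicit handle (110: here is arbitrary) so a cls_flow
filter hashing on destination address only can be attached to it:<br>
tc qdisc add dev vap1 parent 1:1 handle 110: fq_codel target 30ms quantum 4500<br>
tc filter add dev vap1 parent 110: protocol ip prio 1 flow hash keys dst divisor 1024<br>
# (repeat for parent 1:2/1:3/1:4 with their own handles)<br>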
<div>
<div>
<div>
<div><br>
<br>
</div>
</div>
</div>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Fri, Mar 27, 2015 at 12:00 PM, Isaac
Konikoff <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:konikofi@candelatech.com" target="_blank">konikofi@candelatech.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"> Thanks for pointing
out horst.<br>
<br>
I've been trying Wireshark IO graphs such as:<br>
retry comparison: wlan.fc.retry==0 (line) vs.
wlan.fc.retry==1 (impulse)<br>
beacon delays: wlan.fc.type_subtype==0x08 AVG
frame.time_delta_displayed<br>
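<br>
Roughly the same numbers can also be pulled from the captures on the
command line with tshark's io,stat (the filename below is just a
placeholder):<br>
tshark -q -r capture-64sta.pcap -z io,stat,1,wlan.fc.retry==0,wlan.fc.retry==1<br>
tshark -q -r capture-64sta.pcap -z "io,stat,1,AVG(frame.time_delta_displayed)wlan.fc.type_subtype==0x08"<br>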
<br>
I've uploaded my pcap files, netperf-wrapper results, and
LANforge script reports, which have some aggregate graphs
below all of the pie charts. The pcap files with 64sta in
the name correspond to the script reports.<br>
<br>
<a moz-do-not-send="true"
href="http://candelatech.com/downloads/wifi-reports/trial1"
target="_blank">candelatech.com/downloads/wifi-reports/trial1</a><br>
<br>
I'll upload more once I try the qdisc suggestions and I'll
generate comparison plots.<span class="HOEnZb"><font
color="#888888"><br>
<br>
Isaac</font></span>
<div>
<div class="h5"><br>
<br>
<div>On 03/27/2015 10:21 AM, Aaron Wood wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Fri, Mar 27, 2015 at
8:08 AM, Richard Smith <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:smithbone@gmail.com"
target="_blank">smithbone@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">Using horst
I've discovered that the major reason our
WiFi network sucks is that 90% of the
packets are sent at the 6Mbit rate. Most of
the rest show up in the 12 and 24Mbit zone,
with a tiny fraction of them using the
higher MCS rates.<br>
<br>
Trying to couple the radiotap info with the
packet decryption to discover the sources of
those low-bitrate packets is where I've
been running into difficulty. I can see the
what, but I haven't had much luck on the why.<br>
<br>
I totally agree with you that tools other
than wireshark for analyzing this seem to be
non-existent.</blockquote>
<div><br>
</div>
<div>Using the following filter in Wireshark
should get you all that 6Mbps traffic: </div>
<div><br>
</div>
<div>radiotap.datarate == 6</div>
<div><br>
</div>
<div>Then it's pretty easy to dig into what
those are (by wifi frame-type, at least).
On my network, that's mostly broadcast
traffic (AP beacons and whatnot), as the
corporate wifi has been set to use that rate
as the broadcast rate.</div>
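<div><br>
</div>
<div>A quick command-line version of that breakdown, if clicking
through Wireshark gets tedious (the capture filename is a
placeholder):</div>
<div><br>
</div>
<div>tshark -r capture.pcap -Y "radiotap.datarate == 6" -T fields -e
wlan.fc.type_subtype | sort | uniq -c | sort -rn</div>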
<div><br>
</div>
<div>Without capturing the WPA exchange, the
contents of the data frames can't be seen,
of course.</div>
<div><br>
</div>
<div>-Aaron</div>
</div>
</div>
</div>
</blockquote>
<br>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
<div class="gmail_signature">Dave Täht<br>
Let's make wifi fast, less jittery and reliable again!<br>
<br>
<a moz-do-not-send="true"
href="https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb"
target="_blank">https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb</a><br>
</div>
</div>
</blockquote>
<br>
<br>
</body>
</html>