* [Cerowrt-devel] capturing packets and applying qdiscs
From: Isaac Konikoff @ 2015-03-26 21:39 UTC
To: bloat; +Cc: codel, cerowrt-devel
Hi All,
Looking for some feedback in my test setup...
Can you please review my setup and let me know how to improve my
application of the qdiscs? I've been applying manually, but I'm not sure
that is the best method, or if the values really make sense. Sorry if
this has been covered ad nauseam in codel or bloat threads over the past
4+ years...
I've been capturing packets on a dedicated monitor box using the
following method:
tshark -i moni1 -w <file>
where moni1 is ath9k on channel 149 (5745 MHz), width: 40 MHz, center1:
5755 MHz
The system under test is a lanforge ath10k ap being driven by another
lanforge system using ath9k clients to associate and run traffic tests.
The two traffic tests I'm running are:
1. netperf-wrapper batch consisting of: tcp_download, tcp_upload,
tcp_bidirectional, rrul, rrul_be and rtt_fair4be on 4 sta's.
2. lanforge wifi capacity test using tcp-download incrementing 4 sta's
per minute up to 64 sta's with each iteration attempting 500Mbps
download per x number of sta's.
The qdiscs I am using are applied to the virtual ap interface which is
the egress interface for download tests. I also applied the same qdisc
to the ap's eth1 for the few upload tests. Is this sane?
qdiscs used, deleting each before trying the next:
1. default pfifo_fast
2. tc qdisc add dev vap1 root fq_codel
3. tc qdisc add dev vap1 root fq_codel target 5ms interval 100ms noecn
4. tc qdisc add dev vap1 root fq_codel limit 2000 target 3ms interval
40ms noecn
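In script form, the sweep amounts to roughly this (untested sketch; same
interface names as above, with the actual test run where the comment is):
#!/bin/sh
# apply each candidate qdisc in turn, running one full test per setting
DEV=${DEV:-vap1}
run_with() {
    tc qdisc del dev $DEV root 2>/dev/null    # revert to the default qdisc
    [ -n "$1" ] && tc qdisc add dev $DEV root $1
    tc -s qdisc show dev $DEV                 # log what actually got installed
    # ... kick off the netperf-wrapper batch / capacity test here ...
}
run_with ""                                             # default pfifo_fast
run_with "fq_codel"
run_with "fq_codel target 5ms interval 100ms noecn"
run_with "fq_codel limit 2000 target 3ms interval 40ms noecn"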
Any suggestions you have would be helpful.
Thanks,
Isaac
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: David Lang @ 2015-03-27 0:39 UTC
To: Isaac Konikoff; +Cc: codel, cerowrt-devel, bloat
On Thu, 26 Mar 2015, Isaac Konikoff wrote:
> Hi All,
>
> Looking for some feedback in my test setup...
>
> Can you please review my setup and let me know how to improve my application
> of the qdiscs? I've been applying manually, but I'm not sure that is the best
> method, or if the values really make sense. Sorry if this has been covered ad
> nauseam in codel or bloat threads over the past 4+ years...
>
> I've been capturing packets on a dedicated monitor box using the following
> method:
>
> tshark -i moni1 -w <file>
>
> where moni1 is ath9k on channel 149 (5745 MHz), width: 40 MHz, center1: 5755
> MHz
>
> The system under test is a lanforge ath10k ap being driven by another
> lanforge system using ath9k clients to associate and run traffic tests.
>
> The two traffic tests I'm running are:
>
> 1. netperf-wrapper batch consisting of: tcp_download, tcp_upload,
> tcp_bidirectional, rrul, rrul_be and rtt_fair4be on 4 sta's.
>
> 2. lanforge wifi capacity test using tcp-download incrementing 4 sta's per
> minute up to 64 sta's with each iteration attempting 500Mbps download per x
> number of sta's.
what results are you getting? and what results are you hoping to get to?
David Lang
> The qdiscs I am using are applied to the virtual ap interface which is the
> egress interface for download tests. I also applied the same qdisc to the
> ap's eth1 for the few upload tests. Is this sane?
>
> qdiscs used, deleting each before trying the next:
> 1. default pfifo_fast
> 2. tc qdisc add dev vap1 root fq_codel
> 3. tc qdisc add dev vap1 root fq_codel target 5ms interval 100ms noecn
> 4. tc qdisc add dev vap1 root fq_codel limit 2000 target 3ms interval 40ms
> noecn
>
> Any suggestions you have would be helpful.
>
> Thanks,
> Isaac
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Dave Taht @ 2015-03-27 1:19 UTC
To: Isaac Konikoff; +Cc: codel, cerowrt-devel, bloat
On Thu, Mar 26, 2015 at 2:39 PM, Isaac Konikoff
<konikofi@candelatech.com> wrote:
> Hi All,
>
> Looking for some feedback in my test setup...
>
> Can you please review my setup and let me know how to improve my application
> of the qdiscs? I've been applying manually, but I'm not sure that is the
> best method, or if the values really make sense. Sorry if this has been
> covered ad nauseam in codel or bloat threads over the past 4+ years...
>
> I've been capturing packets on a dedicated monitor box using the following
> method:
>
> tshark -i moni1 -w <file>
>
> where moni1 is ath9k on channel 149 (5745 MHz), width: 40 MHz, center1: 5755
> MHz
For those of you that don't know how to do aircaps, it is pretty easy.
We are going to be doing a lot more of this as make-wifi-fast goes
along, so...
install aircrack-ng via whatever means you have available (works best
on ath9k, seems to work on iwl, don't know about other devices)
run:
airmon-ng start your_wifi_device your_channel
This will create a monX device of some sort, which you can then
capture with tshark or wireshark. There are all sorts of other cool
features here where - for example - you can post-hoc decrypt a wpa
session, etc.
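In full, the recipe is something like (sketch; device name, channel and
key are examples):
airmon-ng start wlan0 149       # creates e.g. mon0 in monitor mode on ch 149
tshark -i mon0 -w aircap.pcap   # raw capture, radiotap headers included
# post-hoc wpa decryption needs the 4-way handshake in the capture:
airdecap-ng -e your_ssid -p your_passphrase aircap.pcap  # writes aircap-dec.pcap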
Note that usually you will have trouble using the device for other
things, so I tend to just run it with the ethernet also connected.
We are in dire need of tools that can analyze aircap'd stuff at
different rates, look at beacons, interpacket gaps, wireless g
fallbacks, etc. If anyone knows of anything good, please post to the
list.
> The system under test is a lanforge ath10k ap being driven by another
> lanforge system using ath9k clients to associate and run traffic tests.
>
> The two traffic tests I'm running are:
>
> 1. netperf-wrapper batch consisting of: tcp_download, tcp_upload,
> tcp_bidirectional, rrul, rrul_be and rtt_fair4be on 4 sta's.
Cool.
> 2. lanforge wifi capacity test using tcp-download incrementing 4 sta's per
> minute up to 64 sta's with each iteration attempting 500Mbps download per x
> number of sta's.
>
> The qdiscs I am using are applied to the virtual ap interface which is the
> egress interface for download tests. I also applied the same qdisc to the
> ap's eth1 for the few upload tests. Is this sane?
>
> qdiscs used, deleting each before trying the next:
> 1. default pfifo_fast
> 2. tc qdisc add dev vap1 root fq_codel
> 3. tc qdisc add dev vap1 root fq_codel target 5ms interval 100ms noecn
1) Test 2 and test 3 are essentially the same, unless you have also
enabled ecn on both sides of the tcp connection with
sysctl -w net.ipv4.tcp_ecn=1 #or the equivalent in sysctl.conf
The ecn vs non-ecn results tend to show smoother results for tcp and
mildly higher packet loss on the measurement flows.
2) I do not have an ath10k in front of me. The ath9k presents 4 queues
controlled by mq (and then some sub-qdisc) when it is in operation, as
does the iwl. Does the ath10k only present one queue?
On the two chipsets mentioned first, the queues are mapped to the 802.11e
VO,VI,BE, and BK queues - very inefficiently. I have long maintained
the VO queue should be obsoleted in favor of the VI queue, and in
general I find wireless-n works better if these queues are entirely
disabled on the AP.
This extremely old piece of code does more of the right thing for the
mq'd style of wifi interface, although it is pretty wrong for everything
else (notably, we typically only use a reduced quantum of 300 on some low
speed devices, we never got around to making the tg3 work right, and
the tc filter is not the right thing for wifi, either):
https://github.com/dtaht/deBloat/blob/master/src/debloat.sh
> 4. tc qdisc add dev vap1 root fq_codel limit 2000 target 3ms interval 40ms
> noecn
There are about 3 wrong assumptions here.
1) 1000 packets is still quite enough for even 802.11ac wifi (or so I think).
2) although fiddling with the target and interval is done here, there
is so much underlying buffering that these numbers are not going to
help much in the face of them on wifi. I typically actually run with a
much larger target (30ms) to cope with wifi's mac access jitter - with
the default interval when trying to improve per-station performance
along with...
3) The real parameters that will help wifi on an AP somewhat are to use
a tc dst filter (rather than the default 5 tuple filter) on fq_codel
to sort stuff into per station queues, and to use a quantum in the
4500 range, which accounts for either the max number of packets that
can be put in a txop (42 on wireless-n) or 3 big packets -
neither solution being a good one when wifi can handle 64k in a single
burst, and ac, more.
Even then, the results are far less than pleasing. What is needed, and
what we are going to do, is add real per-station queuing at the lowest
layer and then put something fq_codel like on top of each... and that
work hasn't started yet. The tc filter method I just described will
not work on station ids and thus will treat ipv4 and ipv6 traffic for
the same destination differently.
Now I do have wifi results for this stuff - somewhere - and the right
tc filter for dst filtering on a per mq basis, but it turns out I
think I left all that behind a natted box that I can't get back to til
thursday next week.
and as always I appreciate every scrap of data, every experiment,
every result obtained via every method, in order to more fully bracket
the real problems and demonstrate progress against wifi's problems, if
and when we start making it. a tarball of what you got would be nice
to have around.
You will see absolutely terrible per-sta download performance on the
rrul and rrul_be tests in particular with any of the qdiscs.
>
> Any suggestions you have would be helpful.
>
> Thanks,
> Isaac
--
Dave Täht
Let's make wifi fast, less jittery and reliable again!
https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Dave Taht @ 2015-03-27 1:37 UTC
To: Isaac Konikoff; +Cc: codel, cerowrt-devel, bloat
On Thu, Mar 26, 2015 at 6:19 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Thu, Mar 26, 2015 at 2:39 PM, Isaac Konikoff
> <konikofi@candelatech.com> wrote:
>> Hi All,
>>
>> Looking for some feedback in my test setup...
>>
>> Can you please review my setup and let me know how to improve my application
>> of the qdiscs? I've been applying manually, but I'm not sure that is the
>> best method, or if the values really make sense. Sorry if this has been
> covered ad nauseam in codel or bloat threads over the past 4+ years...
>>
>> I've been capturing packets on a dedicated monitor box using the following
>> method:
>>
>> tshark -i moni1 -w <file>
>>
>> where moni1 is ath9k on channel 149 (5745 MHz), width: 40 MHz, center1: 5755
>> MHz
>
> For those of you that don't know how to do aircaps, it is pretty easy.
> We are going to be doing a lot more of this as make-wifi-fast goes
> along, so...
>
> install aircrack-ng via whatever means you have available (works best
> on ath9k, seems to work on iwl, don't know about other devices)
>
> run:
>
> airmon-ng start your_wifi_device your_channel
>
> This will create a monX device of some sort, which you can then
> capture with tshark or wireshark. There are all sorts of other cool
> features here where - for example - you can post-hoc decrypt a wpa
> session, etc.
>
> Note that usually you will have trouble using the device for other
> things, so I tend to just run it with the ethernet also connected.
>
> We are in dire need of tools that can analyze aircap'd stuff at
> different rates, look at beacons, interpacket gaps, wireless g
> fallbacks, etc. If anyone knows of anything good, please post to the
> list.
>
>> The system under test is a lanforge ath10k ap being driven by another
>> lanforge system using ath9k clients to associate and run traffic tests.
>>
>> The two traffic tests I'm running are:
>>
>> 1. netperf-wrapper batch consisting of: tcp_download, tcp_upload,
>> tcp_bidirectional, rrul, rrul_be and rtt_fair4be on 4 sta's.
>
> Cool.
>
>> 2. lanforge wifi capacity test using tcp-download incrementing 4 sta's per
>> minute up to 64 sta's with each iteration attempting 500Mbps download per x
>> number of sta's.
>>
>> The qdiscs I am using are applied to the virtual ap interface which is the
>> egress interface for download tests. I also applied the same qdisc to the
>> ap's eth1 for the few upload tests. Is this sane?
By all means apply fq_codel on the ethernet devices. At these speeds you
will see the fq part engage at least occasionally, which you can see by
tc -s qdisc show dev eth1
showing a few overlimits or new flows. If it does not engage at these
speeds, you will see maxpacket not crack 256, as that statistic is not
collected unless stuff engages.
If maxpacket is greater than 1514 you have offloads turned on somewhere.
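A quick way to check for, and (while testing) disable, those offloads -
a sketch using standard ethtool flags:
ethtool -k eth1 | egrep -i 'segmentation|offload'   # look for tso/gso/gro: on
ethtool -K eth1 tso off gso off gro off             # disable them for testing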
>> qdiscs used, deleting each before trying the next:
>> 1. default pfifo_fast
>> 2. tc qdisc add dev vap1 root fq_codel
>> 3. tc qdisc add dev vap1 root fq_codel target 5ms interval 100ms noecn
>
> 1) Test 2 and test 3 are essentially the same, unless you have also
> enabled ecn on both sides of the tcp connection with
>
> sysctl -w net.ipv4.tcp_ecn=1 #or the equivalent in sysctl.conf
>
> The ecn vs non-ecn results tend to show smoother results for tcp and mildly
> higher packet loss on the measurement flows.
>
> 2) I do not have an ath10k in front of me. The ath9k presents 4 queues
> controlled by mq (and then some sub-qdisc) when it is in operation, as
> does the iwl. Does the ath10k only present one queue?
>
> On the two chipsets mentioned first, the queues are mapped to the 802.11e
> VO,VI,BE, and BK queues - very inefficiently. I have long maintained
> the VO queue should be obsoleted in favor of the VI queue, and in
> general I find wireless-n works better if these queues are entirely
> disabled on the AP.
>
> This extremely old piece of code does more of the right thing for the
> mq'd style of wifi interface, although it is pretty wrong for everything else
> (notably, we typically only use a reduced quantum of 300 on some low
> speed devices, we never got around to making the tg3 work right, and
> the tc filter is not the right thing for wifi, either)
>
> https://github.com/dtaht/deBloat/blob/master/src/debloat.sh
>
>> 4. tc qdisc add dev vap1 root fq_codel limit 2000 target 3ms interval 40ms
>> noecn
>
> There are about 3 wrong assumptions here.
>
> 1) 1000 packets is still quite enough for even 802.11ac wifi (or so I think).
> 2) although fiddling with the target and interval is done here, there
> is so much underlying buffering that these numbers are not going to
> help much in the face of them on wifi. I typically actually run with a
> much larger target (30ms) to cope with wifi's mac access jitter - with
> the default interval when trying to improve per-station performance
> along with...
>
> 3) The real parameters that will help wifi on an AP somewhat are to use
> a tc dst filter (rather than the default 5 tuple filter) on fq_codel
> to sort stuff into per station queues, and to use a quantum in the
> 4500 range, which accounts for either the max number of packets that
> can be put in a txop (42 on wireless-n) or 3 big packets -
> neither solution being a good one when wifi can handle 64k in a single
> burst, and ac, more.
>
>
> Even then, the results are far less than pleasing. What is needed, and
> what we are going to do, is add real per-station queuing at the lowest
> layer and then put something fq_codel like on top of each... and that
> work hasn't started yet. The tc filter method I just described will
> not work on station ids and thus will treat ipv4 and ipv6 traffic for
> the same destination differently.
>
> Now I do have wifi results for this stuff - somewhere - and the right
> tc filter for dst filtering on a per mq basis, but it turns out I
> think I left all that behind a natted box that I can't get back to til
> thursday next week.
>
> and as always I appreciate every scrap of data, every experiment,
> every result obtained via every method, in order to more fully bracket
> the real problems and demonstrate progress against wifi's problems, if
> and when we start making it. a tarball of what you got would be nice
> to have around.
>
> You will see absolutely terrible per-sta download performance on the
> rrul and rrul_be tests in particular with any of the qdiscs.
>>
>> Any suggestions you have would be helpful.
>>
>> Thanks,
>> Isaac
>
>
>
> --
> Dave Täht
> Let's make wifi fast, less jittery and reliable again!
>
> https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb
--
Dave Täht
Let's make wifi fast, less jittery and reliable again!
https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Richard Smith @ 2015-03-27 15:08 UTC
To: Dave Taht, Isaac Konikoff; +Cc: codel, cerowrt-devel, bloat
On 03/26/2015 09:19 PM, Dave Taht wrote:
> For those of you that don't know how to do aircaps, it is pretty easy.
> We are going to be doing a lot more of this as make-wifi-fast goes
> along, so...
>
> install aircrack-ng via whatever means you have available (works best
> on ath9k, seems to work on iwl, don't know about other devices)
>
> run:
>
> airmon-ng start your_wifi_device your_channel
I've been doing a lot of this lately... I would love to create a
resource page (and I volunteer to help compile and organize) for best
practices and recipes on sniffing/processing/understanding WiFi traffic.
In my experience it's fraught with conflicting and confusing
instructions that have a lot of context never described.
Installing airmon-ng isn't always an option. I've also had airmon-ng
fail a lot of times on iwl. I haven't used it much on the wndr because
I use 'iw' instead.
What is working well for me on most of the devices I've tried (including
iwl) is just to use 'iw' natively.
iw <wlandevice> interface add <monitordevice> type monitor
So for example on a wndr box I use for sniffing I do:
iw wlan1 interface add mon1 type monitor
Then you can set the channel with:
iw wlan1 set channel 6
Generally, to set the channel you need the interface to be down, and
sometimes you have to just reboot the box to get the device back into a
known state where it will accept commands.
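Putting that together, the whole sequence is roughly (sketch - the
ordering is a bit device-dependent):
iw wlan1 interface add mon1 type monitor
ip link set wlan1 down      # channel changes generally want the iface down
iw wlan1 set channel 6
ip link set mon1 up
tshark -i mon1 -w capture.pcap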
> This will create a monX device of some sort, which you can then
> capture with tshark or wireshark. There are all sorts of other cool
> features here where - for example - you can post-hoc decrypt a wpa
> session, etc.
Decrypting traffic has taken me quite a while to get working and I've
only had partial success. One forehead slapper is that you have to
capture the key exchange when the station connects to the network. You
can't just randomly start sniffing and then decrypt later with the WPA
pass phrase. Even then I have sessions I can't decrypt and I don't know
why. I'd love to hear recipes used by others that are working.
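For reference, the two recipes I've been trying (both assume the full
4-way handshake is in the capture; neither is 100% for me):
# wireshark/tshark built-in decryption:
tshark -r capture.pcap -o wlan.enable_decryption:TRUE \
    -o 'uat:80211_keys:"wpa-pwd","passphrase:ssid"'
# or decrypt first with airdecap-ng, then open the result:
airdecap-ng -e ssid -p passphrase capture.pcap    # writes capture-dec.pcap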
> We are in dire need of tools that can analyze aircap'd stuff at
> different rates, look at beacons, interpacket gaps, wireless g
> fallbacks, etc. If anyone knows of anything good, please post to the
> list.
One tool that has been informative for me looking at our work network
has been horst. http://br1.einfach.org/tech/horst/
It's a live diagnostics tool but it would probably not take too much
work to modify it to be able to take a pcap file as input.
The latest git versions have good stuff that's not in the releases. If
anyone wants a git build for wndr3700v2 let me know and I'll pass it along.
Using horst I've discovered that the major reason our WiFi network sucks
is that 90% of the packets are sent at the 6mbit rate. Most of the
rest show up in the 12 and 24mbit zone with a tiny fraction of them
using the higher MCS rates.
Trying to couple the radiotap info with the packet decryption to
discover the sources of those low-bit rate packets is where I've been
running into difficulty. I can see the what but I haven't had much luck
on the why.
I totally agree with you that tools other than wireshark for analyzing
this seem to be non-existent.
--
Richard A. Smith
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Isaac Konikoff @ 2015-03-27 16:38 UTC
To: David Lang; +Cc: codel, cerowrt-devel, bloat
On 03/26/2015 05:39 PM, David Lang wrote:
> On Thu, 26 Mar 2015, Isaac Konikoff wrote:
>
>> Hi All,
>>
>> Looking for some feedback in my test setup...
>>
>> Can you please review my setup and let me know how to improve my
>> application of the qdiscs? I've been applying manually, but I'm not
>> sure that is the best method, or if the values really make sense.
>> Sorry if this has been covered ad nauseam in codel or bloat threads
>> over the past 4+ years...
>>
>> I've been capturing packets on a dedicated monitor box using the
>> following method:
>>
>> tshark -i moni1 -w <file>
>>
>> where moni1 is ath9k on channel 149 (5745 MHz), width: 40 MHz,
>> center1: 5755 MHz
>>
>> The system under test is a lanforge ath10k ap being driven by another
>> lanforge system using ath9k clients to associate and run traffic tests.
>>
>> The two traffic tests I'm running are:
>>
>> 1. netperf-wrapper batch consisting of: tcp_download, tcp_upload,
>> tcp_bidirectional, rrul, rrul_be and rtt_fair4be on 4 sta's.
>>
>> 2. lanforge wifi capacity test using tcp-download incrementing 4
>> sta's per minute up to 64 sta's with each iteration attempting
>> 500Mbps download per x number of sta's.
>
> what results are you getting? and what results are you hoping to get to?
>
> David Lang
I'll share my results shortly, but the main idea is that I'm doing the
captures as part of our effort to improve the ath10k driver. Just one
comparison is that with many clients the ath10k AP throughput tails off
whereas some non-ath10k APs are able to sustain high throughput for
many clients, but even that depends on manufacturer and firmware combos.
I'll be able to point this behaviour out better once I get the files
uploaded...
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Aaron Wood @ 2015-03-27 17:15 UTC
To: Isaac Konikoff; +Cc: codel, cerowrt-devel, bloat
I do this often at work, using a separate machine to capture traffic using
wireshark. Wireshark makes a lot of the analysis fairly straightforward
(especially with it's excellent packet dissectors). By capturing in
radiotap mode, you get RSSI/noise levels on the 802.11n packet, the rates
involved, everything. It's very nice for digging into issues.
Unfortunately, the next problem is a "can't see the forest for the trees",
as there are no good high-level analysis tools for captured traffic that
I've found. Most of the commercial packages seem to offer summary stats,
but not much more (nothing like airtime utilization over time, negotiated
rates over time, aggregate/per-station throughput over time, etc.)
-Aaron
On Fri, Mar 27, 2015 at 9:38 AM, Isaac Konikoff <konikofi@candelatech.com>
wrote:
>
>
> On 03/26/2015 05:39 PM, David Lang wrote:
>
>> On Thu, 26 Mar 2015, Isaac Konikoff wrote:
>>
>> Hi All,
>>>
>>> Looking for some feedback in my test setup...
>>>
>>> Can you please review my setup and let me know how to improve my
>>> application of the qdiscs? I've been applying manually, but I'm not sure
>>> that is the best method, or if the values really make sense. Sorry if this
>>> has been covered ad nauseum in codel or bloat threads over the past 4+
>>> years...
>>>
>>> I've been capturing packets on a dedicated monitor box using the
>>> following method:
>>>
>>> tshark -i moni1 -w <file>
>>>
>>> where moni1 is ath9k on channel 149 (5745 MHz), width: 40 MHz, center1:
>>> 5755 MHz
>>>
>>> The system under test is a lanforge ath10k ap being driven by another
>>> lanforge system using ath9k clients to associate and run traffic tests.
>>>
>>> The two traffic tests I'm running are:
>>>
>>> 1. netperf-wrapper batch consisting of: tcp_download, tcp_upload,
>>> tcp_bidirectional, rrul, rrul_be and rtt_fair4be on 4 sta's.
>>>
>>> 2. lanforge wifi capacity test using tcp-download incrementing 4 sta's
>>> per minute up to 64 sta's with each iteration attempting 500Mbps download
>>> per x number of sta's.
>>>
>>
>> what results are you getting? and what results are you hoping to get to?
>>
>> David Lang
>>
> I'll share my results shortly, but the main idea is that I'm doing the
> captures as part of our effort to improve the ath10k driver. Just one
> comparison is that with many clients the ath10k ap throughput tails off
> whereas some non-ath10k ap's are able to sustain high throughput for many
> clients, but even that depends on manufacturer and firmware combos.
>
> I'll be able to point this behaviour out better once I get the files
> uploaded...
>
>
>
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Aaron Wood @ 2015-03-27 17:21 UTC
To: Richard Smith; +Cc: bloat, codel, cerowrt-devel
On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith <smithbone@gmail.com> wrote:
> Using horst I've discovered that the major reason our WiFi network sucks
> is because 90% of the packets are sent at the 6mbit rate. Most of the rest
> show up in the 12 and 24mbit zone with a tiny fraction of them using the
> higher MCS rates.
>
> Trying to couple the radiotap info with the packet decryption to discover
> the sources of those low-bit rate packets is where I've been running into
> difficulty. I can see the what but I haven't had much luck on the why.
>
> I totally agree with you that tools other than wireshark for analyzing
> this seem to be non-existent.
Using the following filter in Wireshark should get you all that 6Mbps
traffic:
radiotap.datarate == 6
Then it's pretty easy to dig into what those are (by wifi frame-type, at
least). At my network, that's mostly broadcast traffic (AP beacons and
whatnot), as the corporate wifi has been set to use that rate as the
broadcast rate.
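tshark can also quantify that rather than eyeballing it - a sketch:
# packets/bytes per 10s at 6Mbps, and the data-frame subset of those:
tshark -r capture.pcap -q -z "io,stat,10,radiotap.datarate==6,radiotap.datarate==6&&wlan.fc.type==2"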
without capturing the WPA exchange, the contents of the data frames can't
be seen, of course.
-Aaron
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Richard Smith @ 2015-03-27 18:13 UTC
To: Aaron Wood, Isaac Konikoff; +Cc: codel, cerowrt-devel, bloat
On 03/27/2015 01:15 PM, Aaron Wood wrote:
> capturing in radiotap mode, you get RSSI/noise levels on the 802.11n
> packet, the rates involved, everything. It's very nice for digging into
> issues.
Nod. Done some of that.
> Unfortunately, the next problem is a "can't see the forest for the
> trees", as there are no good high-level analysis tools for captured
> traffic that I've found. Most of the commercial packages seem to offer
> summary stats, but not much more (nothing like airtime utilization over
> time, negotiated rates over time, aggregate/per-station throughput over
> time, etc.)
More nod, and that's the sort of thing I'm specifically looking to find:
out of all of the stations associated, which station (or stations) is
chewing up the most RF time?
horst can show you some of this but it's per packet and modulation,
not per station. I've been in contact with the author to see about
adding stats per station but he says it's a pretty big effort.
What I'm most likely going to do is use its output feature to generate
a log and then post-process that in python, where I can easily aggregate
things per MAC address.
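Even a quick shell aggregation would get partway there - sketch only,
with hypothetical column numbers that need checking against horst's
actual log format:
# $2 = MAC, $5 = per-packet airtime/bytes -- hypothetical columns
awk -F, '{ t[$2] += $5 } END { for (m in t) print m, t[m] }' horst.log | sort -k2 -rn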
--
Richard A. Smith
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Richard Smith @ 2015-03-27 18:13 UTC
To: Aaron Wood; +Cc: bloat, codel, cerowrt-devel
On 03/27/2015 01:21 PM, Aaron Wood wrote:
> Using the following filter in Wireshark should get you all that 6Mbps
> traffic:
>
> radiotap.datarate == 6
Thanks. I'd not discovered that yet although I have so much of it that
finding 6mbit packets is not much of a problem.
> Then it's pretty easy to dig into what those are (by wifi frame-type, at
> least). At my network, that's mostly broadcast traffic (AP beacons and
> whatnot), as the corporate wifi has been set to use that rate as the
> broadcast rate.
Yeah. Beacons are supposed to be that low but that's only every 100ms.
On my network there are loads of data packets that are sent at 6mbit.
> without capturing the WPA exchange, the contents of the data frames
> can't be seen, of course.
And this is where I seem to stall out. Even when I capture the full WPA
exchange I only have limited success at getting wireshark to decode all
my traffic.
I have more success using airdecap-ng to decrypt and then feeding
that to wireshark, but there are still times when it can't decode things
and I can't see why. The full WPA exchange is clearly visible in the
packet capture.
--
Richard A. Smith
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Isaac Konikoff @ 2015-03-27 19:00 UTC
To: Aaron Wood, Richard Smith; +Cc: codel, cerowrt-devel, bloat
Thanks for pointing out horst.
I've been trying wireshark io graphs such as:
retry comparison: wlan.fc.retry==0 (line) to wlan.fc.retry==1 (impulse)
beacon delays: wlan.fc.type_subtype==0x08 AVG frame.time_delta_displayed
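The tshark equivalents, for pulling the same numbers out of a capture
non-interactively (untested sketch):
# retry vs non-retry frames per second:
tshark -r file.pcap -q -z "io,stat,1,wlan.fc.retry==0,wlan.fc.retry==1"
# beacon inter-arrival times (subtype 0x08):
tshark -r file.pcap -Y "wlan.fc.type_subtype==0x08" -T fields -e frame.time_delta_displayed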
I've uploaded my pcap files, netperf-wrapper results and lanforge script
reports which have some aggregate graphs below all of the pie charts.
The pcap files with 64sta in the name correspond to the script reports.
candelatech.com/downloads/wifi-reports/trial1
I'll upload more once I try the qdisc suggestions and I'll generate
comparison plots.
Isaac
On 03/27/2015 10:21 AM, Aaron Wood wrote:
>
>
> On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith <smithbone@gmail.com> wrote:
>
> Using horst I've discovered that the major reason our WiFi network
> sucks is because 90% of the packets are sent at the 6mbit rate.
> Most of the rest show up in the 12 and 24mbit zone with a tiny
> fraction of them using the higher MCS rates.
>
> Trying to couple the radiotap info with the packet decryption to
> discover the sources of those low-bit rate packets is where I've
> been running into difficulty. I can see the what but I haven't
> had much luck on the why.
>
> I totally agree with you that tools other than wireshark for
> analyzing this seem to be non-existent.
>
>
> Using the following filter in Wireshark should get you all that 6Mbps
> traffic:
>
> radiotap.datarate == 6
>
> Then it's pretty easy to dig into what those are (by wifi frame-type,
> at least). At my network, that's mostly broadcast traffic (AP beacons
> and whatnot), as the corporate wifi has been set to use that rate as
> the broadcast rate.
>
> without capturing the WPA exchange, the contents of the data frames
> can't be seen, of course.
>
> -Aaron
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: David Lang @ 2015-03-27 19:23 UTC
To: Isaac Konikoff; +Cc: codel, bloat, cerowrt-devel
I gathered a bunch of stats from the Scale conference this year
http://lang.hm/scale/2015/stats/
This includes very frequent dumps of transmission speed data per MAC
address per AP.
David Lang
On Fri, 27 Mar 2015, Isaac Konikoff wrote:
> Thanks for pointing out horst.
>
> I've been trying wireshark io graphs such as:
> retry comparison: wlan.fc.retry==0 (line) to wlan.fc.retry==1 (impulse)
> beacon delays: wlan.fc.type_subtype==0x08 AVG frame.time_delta_displayed
>
> I've uploaded my pcap files, netperf-wrapper results and lanforge script
> reports which have some aggregate graphs below all of the pie charts. The
> pcap files with 64sta in the name correspond to the script reports.
>
> candelatech.com/downloads/wifi-reports/trial1
>
> I'll upload more once I try the qdisc suggestions and I'll generate
> comparison plots.
>
> Isaac
>
> On 03/27/2015 10:21 AM, Aaron Wood wrote:
>>
>>
>> On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith <smithbone@gmail.com> wrote:
>>
>> Using horst I've discovered that the major reason our WiFi network
>> sucks is because 90% of the packets are sent at the 6mbit rate.
>> Most of the rest show up in the 12 and 24mbit zone with a tiny
>> fraction of them using the higher MCS rates.
>>
>> Trying to couple the radiotap info with the packet decryption to
>> discover the sources of those low-bit rate packets is where I've
>> been running into difficulty. I can see the what but I haven't
>> had much luck on the why.
>>
>> I totally agree with you that tools other than wireshark for
>> analyzing this seem to be non-existent.
>>
>>
>> Using the following filter in Wireshark should get you all that 6Mbps
>> traffic:
>>
>> radiotap.datarate == 6
>>
>> Then it's pretty easy to dig into what those are (by wifi frame-type, at
>> least). At my network, that's mostly broadcast traffic (AP beacons and
>> whatnot), as the corporate wifi has been set to use that rate as the
>> broadcast rate.
>>
>> without capturing the WPA exchange, the contents of the data frames can't
>> be seen, of course.
>>
>> -Aaron
>
>
>
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Richard Smith @ 2015-03-27 19:52 UTC
To: Isaac Konikoff, Aaron Wood; +Cc: codel, cerowrt-devel, bloat
On 03/27/2015 03:00 PM, Isaac Konikoff wrote:
> Thanks for pointing out horst.
I should point out that the author has responded to me on some of my bug
reports that it's not been completely updated to correctly compute the
channel utilization when 802.11n is in use. Sometimes you will get
percentages that don't add up to 100%. But it's still great for getting
a good feel of what's going on.
The author is busy and email responses sometimes have a few days of lag,
but he has always responded and been very helpful.
--
Richard A. Smith
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Dave Taht @ 2015-03-28 3:31 UTC
To: Isaac Konikoff; +Cc: codel, bloat, cerowrt-devel
wonderful dataset isaac! A lot to learn there and quite a bit I can
explain, which might take me days to do with graphs and the like.
But it's late, and unless you are planning on doing another test run I will
defer.
It is mildly easier to look at this stuff in bulk, so I did a wget -l 1 -m
http://candelatech.com/downloads/wifi-reports/trial1/ on the data.
Quick top level notes, rather than writing a massive blog entry with graphs....
-1) These are totally artificial tests, stressing out queue management.
There are no winners or losers per se, only data. Someday we can get to
winners and losers, but we have a zillion interrelated variables to
isolate and fix first. So consider this data a *baseline* for what wifi -
at the highest rate possible - looks like today - and I'd dearly like
some results that are below mcs4 on average also as a baseline....
Typical wifi traffic looks nothing like rrul, for example. rrul vs rrul_be
is useful for showing how badly 802.11e queues actually work today, however.
0) Pretty hard to get close to the underlying capability of the mac, isn't
it? Plenty of problems besides queue management could exist, including
running out of cpu....
1) SFQ has a default packet limit of 128 packets which does not appear to
be enough at these speeds. Bump it to 1000 for a more direct comparison to
the other qdiscs.
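i.e. something like this, assuming a 3.3-or-later sfq that accepts a
larger limit (interface name as an example):
tc qdisc add dev vap1 root sfq limit 1000 perturb 10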
You will note a rather big difference in cwnd on your packet captures, and
bandwidth usage more similar to pfifo_fast. I would expect, anyway.
2) I have generally felt that txops needed more of a "packing" approach to
wedging packets into a txop rather than a pure sfq or drr approach, as
losses tend to be bursty, and maximizing the number of flows in a txop is a
goodness. SFQ packs better than DRR.
That said, there is so much compensating stuff (like retries) getting in
the way right now...
3) The SFQ results being better than the fq_codel results in several cases
are also due in part to an interaction of the drr quantum and a not high
enough target to compensate for wifi jitter.
But in looking at SFQ you can't point to a lower latency and say that's
"better" when you also have a much lower achieved bandwidth.
So I would appreciate a run where the stations had a fq_codel quantum 300
and target 30ms. APs, on the other hand, would do better with a larger
(incalculable, but say 4500) quantum, a similar target, and a per dst
filter rather than the full 5 tuple.
On Fri, Mar 27, 2015 at 12:00 PM, Isaac Konikoff <konikofi@candelatech.com>
wrote:
> Thanks for pointing out horst.
>
> I've been trying wireshark io graphs such as:
> retry comparison: wlan.fc.retry==0 (line) to wlan.fc.retry==1 (impulse)
> beacon delays: wlan.fc.type_subtype==0x08 AVG frame.time_delta_displayed
>
> I've uploaded my pcap files, netperf-wrapper results and lanforge script
> reports which have some aggregate graphs below all of the pie charts. The
> pcap files with 64sta in the name correspond to the script reports.
>
> candelatech.com/downloads/wifi-reports/trial1
>
> I'll upload more once I try the qdisc suggestions and I'll generate
> comparison plots.
>
> Isaac
>
>
> On 03/27/2015 10:21 AM, Aaron Wood wrote:
>
>
>
> On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith <smithbone@gmail.com>
> wrote:
>
>> Using horst I've discovered that the major reason our WiFi network sucks
>> is because 90% of the packets are sent at the 6mbit rate. Most of the rest
>> show up in the 12 and 24mbit zone with a tiny fraction of them using the
>> higher MCS rates.
>>
>> Trying to couple the radiotap info with the packet decryption to discover
>> the sources of those low-bit rate packets is where I've been running into
>> difficulty. I can see the what but I haven't had much luck on the why.
>>
>> I totally agree with you that tools other than wireshark for analyzing
>> this seem to be non-existent.
>
>
> Using the following filter in Wireshark should get you all that 6Mbps
> traffic:
>
> radiotap.datarate == 6
>
> Then it's pretty easy to dig into what those are (by wifi frame-type, at
> least). At my network, that's mostly broadcast traffic (AP beacons and
> whatnot), as the corporate wifi has been set to use that rate as the
> broadcast rate.
>
> without capturing the WPA exchange, the contents of the data frames
> can't be seen, of course.
>
> -Aaron
>
>
>
>
--
Dave Täht
Let's make wifi fast, less jittery and reliable again!
https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Isaac Konikoff @ 2015-03-31 18:55 UTC
To: Dave Taht; +Cc: codel, bloat, cerowrt-devel
Thanks for the feedback...I've been trying out the following based on
debloat.sh:
The ath10k access point has two interfaces for these tests:
1. virtual access point - vap1
tc qdisc add dev vap1 handle 1 root mq
tc qdisc add dev vap1 parent 1:1 fq_codel target 30ms quantum 4500 noecn
tc qdisc add dev vap1 parent 1:2 fq_codel target 30ms quantum 4500
tc qdisc add dev vap1 parent 1:3 fq_codel target 30ms quantum 4500
tc qdisc add dev vap1 parent 1:4 fq_codel target 30ms quantum 4500 noecn
2. ethernet - eth1
tc qdisc add dev eth1 root fq_codel
For the netperf-wrapper tests, the 4 stations in use:
tc qdisc add dev sta101 root fq_codel target 30ms quantum 300
tc qdisc add dev sta102 root fq_codel target 30ms quantum 300
tc qdisc add dev sta103 root fq_codel target 30ms quantum 300
tc qdisc add dev sta104 root fq_codel target 30ms quantum 300
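To confirm these actually engage during a run, I'm checking the counters
afterwards, e.g.:
tc -s qdisc show dev vap1      # per-leaf drops / overlimits / maxpacket
tc -s qdisc show dev sta101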
I'm planning to re-run with these settings and then again at a lower mcs.
On 03/27/2015 08:31 PM, Dave Taht wrote:
> wonderful dataset isaac! A lot to learn there and quite a bit I can
> explain, which might take me days to do with graphs and the like.
>
> But it's late, and unless you are planning on doing another test run I
> will defer.
>
> It is mildly easier to look at this stuff in bulk, so I did a wget -l 1 -m
> http://candelatech.com/downloads/wifi-reports/trial1/ on the data.
>
> Quick top level notes, rather than writing a massive blog entry with
> graphs....
>
> -1) These are totally artificial tests, stressing out queue
> management. There are no
> winners or losers per se, only data. Someday we can get to winners
> and losers,
> but we have a zillion interrelated variables to isolate and fix first.
> So consider this data a *baseline* for what wifi - at the highest rate
> possible - looks like today - and I'd dearly like some results that
> are below mcs4 on average also as a baseline....
>
> Typical wifi traffic looks nothing like rrul, for example. rrul vs
> rrul_be is useful for showing how badly 802.11e queues actually work
> today, however.
>
> 0) Pretty hard to get close to the underlying capability of the mac,
> isn't it? Plenty of problems besides queue management could exist,
> including running out of cpu....
>
> 1) SFQ has a default packet limit of 128 packets which does not appear
> to be enough at these speeds. Bump it to 1000 for a more direct
> comparison to the other qdiscs.
>
> You will note a rather big difference in cwnd on your packet captures,
> and bandwidth usage more similar to pfifo_fast. I would expect, anyway.
>
> 2) I have generally felt that txops needed more of a "packing"
> approach to wedging packets into a txop rather than a pure sfq or drr
> approach, as losses tend to be bursty, and maximizing the number of
> flows in a txop is a goodness. SFQ packs better than DRR.
>
> That said, there is so much compensating stuff (like retries) getting
> in the way right now...
>
> 3) The SFQ results being better than the fq_codel results in several
> cases are also due in part to an interaction of the drr quantum and a
> not high enough target to compensate for wifi jitter.
>
> But in looking at SFQ you can't point to a lower latency and say
> that's "better" when you also have a much lower achieved bandwidth.
>
> So I would appreciate a run where the stations had a fq_codel quantum
> 300 and target 30ms. APs, on the other hand, would do better with a larger
> (incalculable, but say 4500) quantum, a similar target, and a per dst
> filter rather than the full 5 tuple.
>
>
>
> On Fri, Mar 27, 2015 at 12:00 PM, Isaac Konikoff
> <konikofi@candelatech.com> wrote:
>
> Thanks for pointing out horst.
>
> I've been trying wireshark io graphs such as:
> retry comparison: wlan.fc.retry==0 (line) to wlan.fc.retry==1
> (impulse)
> beacon delays: wlan.fc.type_subtype==0x08 AVG
> frame.time_delta_displayed
>
> I've uploaded my pcap files, netperf-wrapper results and lanforge
> script reports which have some aggregate graphs below all of the
> pie charts. The pcap files with 64sta in the name correspond to
> the script reports.
>
> candelatech.com/downloads/wifi-reports/trial1
>
> I'll upload more once I try the qdisc suggestions and I'll
> generate comparison plots.
>
> Isaac
>
>
> On 03/27/2015 10:21 AM, Aaron Wood wrote:
>>
>>
>> On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith
>> <smithbone@gmail.com> wrote:
>>
>> Using horst I've discovered that the major reason our WiFi
>> network sucks is because 90% of the packets are sent at the
>> 6mbit rate. Most of the rest show up in the 12 and 24mbit
>> zone with a tiny fraction of them using the higher MCS rates.
>>
>> Trying to couple the radiotap info with the packet decryption
>> to discover the sources of those low-bit rate packets is
>> where I've been running into difficulty. I can see the what
>> but I haven't had much luck on the why.
>>
>> I totally agree with you that tools other than wireshark for
>> analyzing this seem to be non-existent.
>>
>>
>> Using the following filter in Wireshark should get you all that
>> 6Mbps traffic:
>>
>> radiotap.datarate == 6
>>
>> Then it's pretty easy to dig into what those are (by wifi
>> frame-type, at least). At my network, that's mostly broadcast
>> traffic (AP beacons and whatnot), as the corporate wifi has been
>> set to use that rate as the broadcast rate.
>>
>> without capturing the WPA exchange, the contents of the data
>> frames can't be seen, of course.
>>
>> -Aaron
>
>
>
>
>
> --
> Dave Täht
> Let's make wifi fast, less jittery and reliable again!
>
> https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb
* Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
From: Dave Taht @ 2015-04-01 15:13 UTC
To: Isaac Konikoff; +Cc: codel, bloat, cerowrt-devel
Dear Isaac:
The core part you missed here is for the APs only: you want a
per-dst filter in place for those to improve aggregation, in addition
to the increased quantum.
I note that you are now - in a few short weeks - ahead of me on
testing wifi! I had put down this line of inquiry a year ago, figuring
that we'd get around to native per station queues in the driver, and
focused on fixing the uplinks (and raising money).
Anyway, I just dug up what looks to be the right filter, but I could
use a sanity test on it, because try as I might I can't seem to get
any statistics back from tc -s filter (example):
tc -s filter show dev gw10 parent 802:
filter protocol all pref 97 flow
filter protocol all pref 97 flow handle 0x3 hash keys dst divisor 1024
baseclass 802:1
which may mean it is not being installed right (maybe needs to be
attached to 1:1,2,3,4), and I am away from my lab til thursday, with
my darn test driver box stuck behind nat..... (if someone could run a
few rrul and rtt_fair tests through cerowrt with this filter in place
or not against a locally fast host, that would be great)
I stuck the hacky wifi-ap-only-script up at:
http://snapon.lab.bufferbloat.net/~d/debloat_ap.sh
(usage: IFACE=whatever ./debloat_ap.sh)
or, more simply
#!/bin/sh
IFACE=vap1
QDISC=fq_codel
FQ_OPTS="quantum 4542 target 30ms interval 300ms" # 1514*3
wifi() {
tc qdisc add dev $IFACE handle 1 root mq
tc qdisc add dev $IFACE parent 1:1 handle 801 $QDISC $FQ_OPTS noecn
tc qdisc add dev $IFACE parent 1:2 handle 802 $QDISC $FQ_OPTS
tc qdisc add dev $IFACE parent 1:3 handle 803 $QDISC $FQ_OPTS
tc qdisc add dev $IFACE parent 1:4 handle 804 $QDISC $FQ_OPTS noecn
# switch to a per dest filter on each leaf
tc filter add dev $IFACE parent 801: handle 3 protocol all \
        prio 97 flow hash keys dst divisor 1024
tc filter add dev $IFACE parent 802: handle 3 protocol all \
        prio 97 flow hash keys dst divisor 1024
tc filter add dev $IFACE parent 803: handle 3 protocol all \
        prio 97 flow hash keys dst divisor 1024
tc filter add dev $IFACE parent 804: handle 3 protocol all \
        prio 97 flow hash keys dst divisor 1024
}
wifi
On Tue, Mar 31, 2015 at 11:55 AM, Isaac Konikoff
<konikofi@candelatech.com> wrote:
> Thanks for the feedback...I've been trying out the following based on
> debloat.sh:
>
> The ath10k access point has two interfaces for these tests:
> 1. virtual access point - vap1
> tc qdisc add dev vap1 handle 1 root mq
> tc qdisc add dev vap1 parent 1:1 fq_codel target 30ms quantum 4500 noecn
> tc qdisc add dev vap1 parent 1:2 fq_codel target 30ms quantum 4500
> tc qdisc add dev vap1 parent 1:3 fq_codel target 30ms quantum 4500
> tc qdisc add dev vap1 parent 1:4 fq_codel target 30ms quantum 4500 noecn
>
> 2. ethernet - eth1
> tc qdisc add dev eth1 root fq_codel
>
> For the netperf-wrapper tests, the 4 stations in use:
> tc qdisc add dev sta101 root fq_codel target 30ms quantum 300
> tc qdisc add dev sta102 root fq_codel target 30ms quantum 300
> tc qdisc add dev sta103 root fq_codel target 30ms quantum 300
> tc qdisc add dev sta104 root fq_codel target 30ms quantum 300
>
> I'm planning to re-run with these settings and then again at a lower mcs.
>
>
>
>
> On 03/27/2015 08:31 PM, Dave Taht wrote:
>
> wonderful dataset isaac! A lot to learn there and quite a bit I can explain,
> which might take me days to do with graphs and the like.
>
> But it's late, and unless you are planning on doing another test run I will
> defer.
>
> It is mildly easier to look at this stuff in bulk, so I did a wget -l 1 -m
> http://candelatech.com/downloads/wifi-reports/trial1/ on the data.
>
> Quick top level notes, rather than writing a massive blog entry with graphs....
>
> -1) These are totally artificial tests, stressing out queue management.
> There are no
> winners or losers per se, only data. Someday we can get to winners and
> losers,
> but we have a zillion interrelated variables to isolate and fix first. So
> consider this data a *baseline* for what wifi - at the highest rate possible
> - looks like today - and I'd dearly like some results that are below mcs4 on
> average also as a baseline....
>
> Typical wifi traffic looks nothing like rrul, for example. rrul vs rrul_be
> is useful for showing how badly 802.11e queues actually work today, however.
>
> 0) Pretty hard to get close to the underlying capability of the mac, isn't
> it? Plenty of problems besides queue management could exist, including
> running out of cpu....
>
> 1) SFQ has a default packet limit of 128 packets which does not appear to be
> enough at these speeds. Bump it to 1000 for a more direct comparison to the
> other qdiscs.
>
> You will note a rather big difference in cwnd on your packet captures, and
> bandwidth usage more similar to pfifo_fast. I would expect, anyway.
>
> 2) I have generally felt that txops needed more of a "packing" approach to
> wedging packets into a txop rather than a pure sfq or drr approach, as
> losses tend to be bursty, and maximizing the number of flows in a txop is a
> goodness. SFQ packs better than DRR.
>
> That said, there is so much compensating stuff (like retries) getting in the
> way right now...
>
> 3) The SFQ results being better than the fq_codel results in several cases
> are also due in part to an interaction of the drr quantum and a not high
> enough target to compensate for wifi jitter.
>
> But in looking at SFQ you can't point to a lower latency and say that's
> "better" when you also have a much lower achieved bandwidth.
>
> So I would appreciate a run where the stations had a fq_codel quantum 300
> and target 30ms. APs, on the other hand, would do better with a larger
> (incalculable, but say 4500) quantum, a similar target, and a per dst filter
> rather than the full 5 tuple.
>
>
>
> On Fri, Mar 27, 2015 at 12:00 PM, Isaac Konikoff <konikofi@candelatech.com>
> wrote:
>>
>> Thanks for pointing out horst.
>>
>> I've been trying wireshark io graphs such as:
>> retry comparison: wlan.fc.retry==0 (line) to wlan.fc.retry==1 (impulse)
>> beacon delays: wlan.fc.type_subtype==0x08 AVG frame.time_delta_displayed
>>
>> I've uploaded my pcap files, netperf-wrapper results and lanforge script
>> reports which have some aggregate graphs below all of the pie charts. The
>> pcap files with 64sta in the name correspond to the script reports.
>>
>> candelatech.com/downloads/wifi-reports/trial1
>>
>> I'll upload more once I try the qdisc suggestions and I'll generate
>> comparison plots.
>>
>> Isaac
>>
>>
>> On 03/27/2015 10:21 AM, Aaron Wood wrote:
>>
>>
>>
>> On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith <smithbone@gmail.com>
>> wrote:
>>>
>>> Using horst I've discovered that the major reason our WiFi network sucks
>>> is because 90% of the packets are sent at the 6mbit rate. Most of the rest
>>> show up in the 12 and 24mbit zone with a tiny fraction of them using the
>>> higher MCS rates.
>>>
>>> Trying to couple the radiotap info with the packet decryption to discover
>>> the sources of those low-bit rate packets is where I've been running into
>>> difficulty. I can see the what but I haven't had much luck on the why.
>>>
>>> I totally agree with you that tools other than wireshark for analyzing
>>> this seem to be non-existent.
>>
>>
>> Using the following filter in Wireshark should get you all that 6Mbps
>> traffic:
>>
>> radiotap.datarate == 6
>>
>> Then it's pretty easy to dig into what those are (by wifi frame-type, at
>> least). At my network, that's mostly broadcast traffic (AP beacons and
>> whatnot), as the corporate wifi has been set to use that rate as the
>> broadcast rate.
>>
>> without capturing the WPA exchange, the contents of the data frames can't
>> be seen, of course.
>>
>> -Aaron
>>
>>
>>
>
>
>
> --
> Dave Täht
> Let's make wifi fast, less jittery and reliable again!
>
> https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb
>
>
>
--
Dave Täht
Let's make wifi fast, less jittery and reliable again!
https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb