From: David Lang <david@lang.hm>
To: Isaac Konikoff <konikofi@candelatech.com>
Cc: codel <codel@lists.bufferbloat.net>,
cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs
Date: Thu, 26 Mar 2015 17:39:23 -0700 (PDT)
Message-ID: <alpine.DEB.2.02.1503261738130.23943@nftneq.ynat.uz>
In-Reply-To: <55147C8A.4030804@candelatech.com>
On Thu, 26 Mar 2015, Isaac Konikoff wrote:
> Hi All,
>
> Looking for some feedback in my test setup...
>
> Can you please review my setup and let me know how to improve my application
> of the qdiscs? I've been applying manually, but I'm not sure that is the best
> method, or if the values really make sense. Sorry if this has been covered ad
> nauseam in codel or bloat threads over the past 4+ years...
>
> I've been capturing packets on a dedicated monitor box using the following
> method:
>
> tshark -i moni1 -w <file>
>
> where moni1 is ath9k on channel 149 (5745 MHz), width: 40 MHz, center1: 5755
> MHz
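For anyone reproducing the capture side, bringing up the monitor interface looks roughly like the below; the phy name and output filename are only placeholder examples for whatever your capture box uses:

   # create a monitor vif on the capture radio and bring it up
   iw phy phy0 interface add moni1 type monitor
   ip link set moni1 up
   # tune to channel 149 as a 40 MHz channel with the secondary above (center1 5755 MHz)
   iw dev moni1 set freq 5745 HT40+
   # capture everything seen on the channel to a file
   tshark -i moni1 -w wifi-capture.pcap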
>
> The system under test is a lanforge ath10k ap being driven by another
> lanforge system using ath9k clients to associate and run traffic tests.
>
> The two traffic tests I'm running are:
>
> 1. netperf-wrapper batch consisting of: tcp_download, tcp_upload,
> tcp_bidirectional, rrul, rrul_be and rtt_fair4be on 4 sta's.
>
> 2. lanforge wifi capacity test using tcp-download incrementing 4 sta's per
> minute up to 64 sta's with each iteration attempting 500Mbps download per x
> number of sta's.
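(For reference, a single one of those netperf-wrapper runs from a station looks roughly like the following; the server hostname, run length and title string are only examples:)

   # one rrul run against a netperf server reachable from the station
   # (netserver must already be running on lf-server; the name is an example)
   netperf-wrapper -H lf-server -l 60 -t ath10k-ap-fq_codel rrul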
What results are you getting, and what results are you hoping to get?
David Lang
> The qdiscs I am using are applied to the virtual ap interface which is the
> egress interface for download tests. I also applied the same qdisc to the
> ap's eth1 for the few upload tests. Is this sane?
>
> qdiscs used, deleting each before trying the next:
> 1. default pfifo_fast
> 2. tc qdisc add dev vap1 root fq_codel
> 3. tc qdisc add dev vap1 root fq_codel target 5ms interval 100ms noecn
> 4. tc qdisc add dev vap1 root fq_codel limit 2000 target 3ms interval 40ms
> noecn
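Whichever of those you apply, it's worth checking the qdisc statistics during a run to see whether it is actually building a backlog and dropping or marking packets, e.g.:

   # per-qdisc statistics (backlog, drops, ECN marks) on the AP interface
   tc -s qdisc show dev vap1
   # and on the wired interface used for the upload tests
   tc -s qdisc show dev eth1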
>
> Any suggestions you have would be helpful.
>
> Thanks,
> Isaac
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>