<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<font size="-1" face="Times New Roman, Times, serif">Thanks for
responding, Bob. Everything helps! :)<br>
</font>
<div class="moz-signature">
<div><font size="-1" face="Times New Roman, Times, serif">===========<br>
Tim</font></div>
</div>
<div class="moz-cite-prefix">On 12/16/2019 4:05 PM, Bob McMahon
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAHb6LvouJ0rYgrz0oNQr_xb8-ehjwaR83vdWxJNc2soyMi2RCA@mail.gmail.com">
<div dir="ltr"><a href="https://sourceforge.net/projects/iperf2/"
moz-do-not-send="true">iperf 2.0.13 and iperf 2.0.14a</a>
(currently in development) have support for end-to-end or write-to-read
latencies in both mean/min/max/stdev and histogram formats.
This does require realtime clock sync, which can be done in a few
ways. There is also support for clock_nanosleep()-based burst
scheduling.<br>
<br>
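A minimal sketch of those two ideas, assuming Linux/POSIX (this is
illustrative only, not the actual iperf 2 code; the burst size,
period, and payload layout are made up): each write is stamped with
CLOCK_REALTIME so a clock-synced receiver can compute write-to-read
latency, and bursts are paced on absolute deadlines with
clock_nanosleep().<br>
<pre>
/* Illustrative sketch only -- not taken from the iperf 2 sources. */
#define _POSIX_C_SOURCE 200809L
#include <stdint.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BURST_PERIOD_NS 100000000LL   /* assumed 100 ms between bursts */
#define BURST_PACKETS   8             /* assumed packets per burst     */

static void timespec_add_ns(struct timespec *t, long long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

/* Placeholder for the real socket write loop: each write carries a
 * CLOCK_REALTIME stamp; a receiver with a synced realtime clock
 * subtracts it from its own CLOCK_REALTIME to get write-to-read latency. */
static void send_burst(int fd)
{
    for (int i = 0; i < BURST_PACKETS; i++) {
        struct timespec now;
        unsigned char payload[1470];
        clock_gettime(CLOCK_REALTIME, &now);
        memcpy(payload, &now, sizeof(now));   /* sketch: ignores endianness */
        write(fd, payload, sizeof(payload));
    }
}

/* Pace bursts on absolute deadlines so scheduling error doesn't accumulate. */
void paced_sender(int fd, int bursts)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);    /* pacing uses the monotonic clock */
    for (int b = 0; b < bursts; b++) {
        send_burst(fd);
        timespec_add_ns(&next, BURST_PERIOD_NS);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
</pre>
<br>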
We use programmable attenuators to vary the effective distances and
programmable phase shifters to vary the channel mixing or MIMO
mixing. <br>
<br>
Don't know if any of this helps or not.<br>
<br>
Bob</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Mon, Dec 16, 2019 at 12:54
PM Tim Higgins <<a href="mailto:tim@smallnetbuilder.com"
moz-do-not-send="true">tim@smallnetbuilder.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF"> <br>
<div>On 12/16/2019 12:59 PM, Toke Høiland-Jørgensen wrote:<br>
</div>
<blockquote type="cite">
<pre>Tim Higgins <a href="mailto:tim@smallnetbuilder.com" target="_blank" moz-do-not-send="true"><tim@smallnetbuilder.com></a> writes:
</pre>
<blockquote type="cite">
<pre>Hi all,
Dave Täht suggested that I post the discussion we've started to this broader
group.
I've been spending the past few months trying to develop methods to verify one
of the key promises of OFDMA: improved efficiency.
The tests have mostly focused on trying to see improvement in total throughput
with various traffic mixes across four OFDMA STAs.
I've been using Samsung S10e's as STAs and primarily iperf3 TCP/IP and UDP
traffic. I did some work with the Intel AX200 as a STA using both Windows 10
and Linux for RvR testing and found the Linux driver basically broken for
uplink. (See the Win10/Linux comparison in the RAX40 section of
<a href="https://www.smallnetbuilder.com/wireless/wireless-reviews/33220-wi-fi-6-performance-roundup-five-routers-tested?start=1" target="_blank" moz-do-not-send="true">https://www.smallnetbuilder.com/wireless/wireless-reviews/33220-wi-fi-6-performance-roundup-five-routers-tested?start=1</a>
</pre>
</blockquote>
<pre>FWIW, Johannes was debugging some TCP issues on Intel 802.11ax the other
day, and was getting ~1.4Gbps of throughput:
<a href="https://lore.kernel.org/linux-wireless/90485ecbfa2a13c4438b840c8a9d37677e833ea5.camel@sipsolutions.net/T/" target="_blank" moz-do-not-send="true">https://lore.kernel.org/linux-wireless/90485ecbfa2a13c4438b840c8a9d37677e833ea5.camel@sipsolutions.net/T/</a>
So I guess maybe there are improvements coming in that space?
</pre>
</blockquote>
TH: Yes, I'm monitoring that thread. I'm about to try a
5.2.14 kernel with this patch<br>
<a href="https://patchwork.kernel.org/patch/11253471/"
target="_blank" moz-do-not-send="true">https://patchwork.kernel.org/patch/11253471/</a><br>
<br>
I think there are other patches in the works. I hope the end product
will be available in a backport.<br>
<blockquote type="cite">
<blockquote type="cite">
<pre>So, for now, I'm limited to using the Samsung S10 as STAs.
ANYWAY, I haven't been having much luck finding total throughput gains, so I
thought I'd bang my head against a different wall for a while, which brings me
to latency.
My initial work was pretty simple: just running pings to four OFDMA STAs with
OFDMA on/off on the AP, which showed no improvement. That was after I realized
the large ping times and variation I was seeing initially were due to
aggressive power-save kicking in on the STAs with no traffic running. So I
also tried various TCP rates starting at 1 Mbps per STA to keep the STAs awake.
Coincidentally, Dave reached out the other day and suggested I look at the
toolsets used for the make-wifi-fast project.
I've spent a few hours looking at the flent and rrul sites, and I'm interested
in exploring the tools and techniques used for the make-wifi-fast work to
date to see if AX adds anything to the latency improvement party. If anyone
is willing to provide some pointers on the proper use of the tools, I'd
appreciate it.
</pre>
</blockquote>
<pre>I think the Flent batch file used to run the tests is part of the data
file at the bottom of this page:
<a href="https://www.cs.kau.se/tohojo/airtime-fairness/" target="_blank" moz-do-not-send="true">https://www.cs.kau.se/tohojo/airtime-fairness/</a>
</pre>
</blockquote>
TH: Thanks for the reference. I'll look into it.<br>
<blockquote type="cite">
<pre>The setup I was using had a server that ran the tests, which was one
Ethernet hop from the AP. The clients were passive, just running
'netserver' so the server could run 'netperf' against each of them. This
flips up/down in the tests but otherwise works fairly well. I used a
separate (wired) control network for telling the clients to
connect/disconnect...
More details about the setup here:
<a href="https://blog.tohojo.dk/2017/11/building-a-wireless-testbed-with-wires.html" target="_blank" moz-do-not-send="true">https://blog.tohojo.dk/2017/11/building-a-wireless-testbed-with-wires.html</a>
</pre>
</blockquote>
TH: Again, thanks.<br>
<blockquote type="cite">
<blockquote type="cite">
<pre>For example, I didn't see mention of the bitrates used for the traffic streams
in the tests. Do I just tell each stream to run full blast (1 Gbps)?
</pre>
</blockquote>
<pre>Well for TCP tests, yeah. The only UDP tests I did were flood tests,
where I just had iperf blasting away at way above the link rate, then
measured how many packets made it through.
</pre>
</blockquote>
TH: Got it. Blast away on both TCP and UDP. Not sure how
that will work with OFDMA trying to split that bandwidth
into multiple smaller-bandwidth RUs, but I guess I'll find
out.<br>
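For my own reference, here's a rough sketch of how I picture the
receive side of such a flood test: the sender numbers each datagram,
and the receiver compares how many arrived against the highest
sequence number seen. This is just my sketch (the port and packet
layout are assumptions), not Toke's scripts or iperf code:<br>
<pre>
/* Rough sketch: count how many packets of a UDP flood made it through.
 * Assumes the sender puts a 32-bit network-order sequence number,
 * counted from 0, at the start of every datagram and uses port 5001 --
 * both are assumptions.  Error checking omitted for brevity. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5001);                    /* assumed test port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    uint64_t received = 0;
    uint32_t highest_seq = 0;
    char buf[2048];

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n < (ssize_t)sizeof(uint32_t))
            continue;
        uint32_t seq;
        memcpy(&seq, buf, sizeof(seq));
        seq = ntohl(seq);
        if (seq > highest_seq)
            highest_seq = seq;
        received++;
        if (received % 100000 == 0)                 /* periodic delivery report */
            printf("delivered %llu of ~%u sent (%.1f%%)\n",
                   (unsigned long long)received, highest_seq + 1,
                   100.0 * received / (highest_seq + 1));
    }
}
</pre>
<br>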
<blockquote type="cite">
<blockquote type="cite">
<pre>Also, since most OFDMA implementations (consumer ones, at least) require
multiple STAs to trigger OFDMA frames, I could use some help understanding
whether multiple streams should be applied per STA, or spread among the four
STAs I'm using in my testing.
</pre>
</blockquote>
<pre>Why not just try both and see what works? :)
</pre>
</blockquote>
TH: Why didn't I think of that? :) OK.<br>
<blockquote type="cite">
<blockquote type="cite">
<pre>Also (2), has anyone used Android STAs for make-wifi-fast testing?
</pre>
</blockquote>
<pre>Nope. But if you can get netperf cross-compiled it should be simple
enough to run 'netserver' on them, I would think?</pre>
</blockquote>
TH: Unfortunately, that requires more skills than I have.
Maybe someone else on this list has already done it?<br>
</div>
</blockquote>
</div>
</blockquote>
<br>
</body>
</html>