Hi all,
I finally have my testbed working the way I want and am starting to run tests to see if OFDMA does anything useful.
This will all be covered in detail in an upcoming SmallNetBuilder article. But I wanted to sanity-check something with this esteemed group.
The tests are basically the flent rtt_fair_var up and down tests ported to the octoScope platform I use for WiFi testing.
The initial work was done on flent, with a lot of hand-holding from Toke. (Thank you, Toke!)
Using 4 Intel AX200 STAs on Win10. iperf3 generates the traffic, TCP with unthrottled bandwidth. I've taken Bjørn's idea and have each STA using a different DSCP priority level, but with TCP traffic, not UDP. I'm sticking to CS0-7 equivalents and have confirmed that the iperf3 --dscp values properly translate to the intended WiFi priority levels. Each STA has a different priority: CS0, CS3, CS5, or CS6 (best effort, excellent effort, video, and voice, respectively).
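For reference, here's the translation I'm relying on, as a minimal Python sketch: the CS class selectors are DSCP = precedence * 8, and the default WMM mapping takes the top three bits of the DSCP as the 802.11 User Priority (so --dscp takes the DSCP value itself, e.g. 48 for CS6, not the raw ToS byte). The access category names are the standard WMM defaults, not anything octoScope reports:

# Sketch of the DSCP -> WiFi priority translation described above.
# CSn class selectors have DSCP = n * 8; the default WMM mapping
# uses the top 3 bits of the DSCP as the 802.11 User Priority (UP).

# 802.1D designation and WMM access category per UP (standard defaults)
UP_INFO = {
    0: ("best effort", "AC_BE"),
    3: ("excellent effort", "AC_BE"),
    5: ("video", "AC_VI"),
    6: ("voice", "AC_VO"),
}

for cs in (0, 3, 5, 6):
    dscp = cs * 8    # e.g. CS6 -> DSCP 48 (iperf3 --dscp 48)
    tos = dscp << 2  # ToS byte on the wire, if a tool wants --tos instead
    up = dscp >> 3   # default DSCP -> UP mapping: top 3 bits
    name, ac = UP_INFO[up]
    print(f"CS{cs}: dscp={dscp} tos=0x{tos:02x} UP={up} ({name}, {ac})")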
Ping is used to measure latency and always runs from AP to STA; only the direction of the TCP traffic is reversed between the downlink and uplink tests.
One thing that jumps out immediately is that uplink latencies are *much* lower than downlink, with OFDMA either on or off. Attached are three examples. Each CDF is of the average latency across the 4 STAs.
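To be concrete about what "average latency across the 4 STAs" means, here's a minimal sketch of the reduction, assuming time-aligned ping samples per STA (the arrays below are illustrative, not real results): average the four per-STA RTTs at each sample time, then take the empirical CDF of those averages.

# Minimal sketch: empirical CDF of the per-sample mean latency
# across 4 STAs. Assumes ping RTTs (ms) are time-aligned.
import numpy as np

# Illustrative data only: rtt_ms[i][j] = RTT of STA j at sample i
rtt_ms = np.array([
    [12.1, 15.3,  9.8, 11.0],
    [14.7, 13.2, 10.5, 12.9],
    [11.3, 16.8,  9.1, 10.4],
])

avg = rtt_ms.mean(axis=1)               # mean across the 4 STAs per sample
x = np.sort(avg)                        # sorted averages = CDF x-axis
y = np.arange(1, len(x) + 1) / len(x)   # cumulative probability

for xi, yi in zip(x, y):
    print(f"{xi:6.2f} ms -> P = {yi:.2f}")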
The NETGEAR R7800 is a 4x4 AC Qualcomm-based router; I'm using it as the baseline product.
The NETGEAR RAX15 is a 2x2 AX Broadcom-based router. You can see what I mean when I say OFDMA doesn't help.
Does this much difference between uplink and downlink latency pass the sniff test?
===
Tim