Let's make wifi fast again!
From: Dave Taht <dave.taht@gmail.com>
To: Ben Greear <greearb@candelatech.com>, rpm@lists.bufferbloat.net
Cc: starlink@lists.bufferbloat.net,
	 Make-Wifi-fast <make-wifi-fast@lists.bufferbloat.net>
Subject: Re: [Make-wifi-fast] [Starlink] RFC: Latency test case text and example report.
Date: Sun, 26 Sep 2021 15:23:46 -0700	[thread overview]
Message-ID: <CAA93jw4CwVHOhow07mCV+rzV2WiUZw4fQdcfY7R81fgmoW8gPA@mail.gmail.com> (raw)
In-Reply-To: <e9d000cf-14de-ed43-f604-72b02d367eb4@candelatech.com>

Thx Ben. Why is it we get the most work done on bloat on the weekends?

Adding in the rpm folk (mostly Apple at this point). Their test
shipped last week as part of iOS 15 and related releases and is documented here:

https://support.apple.com/en-us/HT212313

I am glad to hear more folk are working on extending TR-398. The
numbers you are reporting are at least better than what we were
getting from the ath10k 5+ years ago, before we reworked the stack.
See the 100-station test here:

https://blog.linuxplumbersconf.org/2016/ocw/system/presentations/3963/original/linuxplumbers_wifi_latency-3Nov.pdf

And I'd hoped that a gang scheduler could be applied on top of that
work to take advantage of the new features in wifi 6.

That said, I don't have any reports of ofdma or du working at all,
from anyone, at this point.

On Sun, Sep 26, 2021 at 2:59 PM Ben Greear <greearb@candelatech.com> wrote:
>
> I have been working on a latency test that I hope can be included in the TR398 issue 3
> document.  It is based somewhat on Toke's paper on buffer bloat and latency testing,
> with a notable change that I'm doing this on 32 stations in part of the test.
>
> I implemented this test case, and an example run against an enterprise grade AX AP
> is here.  There could still be bugs in my implementation, but I think it is at least
> close to correct:
>
> http://www.candelatech.com/examples/tr398v3-latency-report.pdf
>
> TLDR:  Runs OK with a single station, but sees 1+ second one-way latency with 32 stations and high load, and UDP often
>    is not able to see any throughput at all, I guess due to too many packets being lost
>    or something.  I hope to run against some cutting-edge OpenWrt APs soon.

packet caps are helpful.

>
> One note on TCP latency:  This is the time to transmit a 64k chunk of data over TCP, not a single
> frame.

This number is dependent on the size of the IW (initial window), which
sets the minimum number of round trips required. It's 10 packets in
Linux, and recently OSX moved from 4 to 10. After that the actual
completion time is governed by loss or marking - and in the case of
truly excessive latencies such as you are experiencing, TCP tends to
send more packets after a timeout.
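For a rough sense of the floor the IW sets: under slow start the cwnd doubles each RTT starting from the IW, so a 64k chunk (~46 segments at an assumed 1448-byte MSS) completes in about 3 RTTs with IW10, 4 with the old IW4. A sketch, ignoring delayed acks and loss:

```python
import math

def slow_start_rtts(chunk_bytes, mss=1448, iw=10):
    """Minimum RTTs to deliver chunk_bytes under idealized slow start
    (cwnd starts at iw segments and doubles each round trip)."""
    segments = math.ceil(chunk_bytes / mss)
    cwnd, sent, rtts = iw, 0, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

print(slow_start_rtts(64 * 1024))        # IW10: 3 RTTs
print(slow_start_rtts(64 * 1024, iw=4))  # IW4:  4 RTTs
```

So with a 1-second one-way delay, even a loss-free 64k transfer has a multi-second floor.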

(packet caps are helpful)

> My testbed used 32 Intel ax210 radios as stations in this test.
>
> I am interested in feedback from this list if anyone has opinions.

So far as I knew, the wifi stack rework and API are now supported by
most of the Intel chipsets. AQL is also needed.

please see if you have any "aqm" files: cat /sys/kernel/debug/ieee80211/phy*/aqm
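Something like this loop will tell you per radio (a sketch, assuming debugfs is mounted at the usual /sys/kernel/debug, which typically needs root to read):

```shell
# Check each wifi radio for the mac80211 AQM debugfs file;
# its presence indicates the fq_codel-based queueing (and AQL) rework.
found=0
for f in /sys/kernel/debug/ieee80211/phy*/aqm; do
    if [ -e "$f" ]; then
        echo "AQM present: $f"
        found=1
    fi
done
if [ "$found" -eq 0 ]; then
    echo "no aqm files found (driver may predate the fq_codel/AQL rework)"
fi
```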

>
> Here is text of the test case:
>
> The Latency test intends to verify latency under low, high, and maximum AP traffic load, with
> 1 and 32 stations. Traffic load is 4 bi-directional TCP streams for each station, plus a
> low speed UDP connection to probe latency.
>
> Test Procedure
>
> DUT should be configured for 20 MHz channel width on 2.4 GHz and 80 MHz on 5 GHz, and stations should use
> two spatial streams.
>
> 1: For each combination of:  2.4 GHz N, 5 GHz AC, 2.4 GHz AX, 5 GHz AX:
>
> 2: Configure attenuators to emulate 2-meter distance between stations and AP.
>
> 3: Create 32 stations and allow one to associate with the DUT.  The other 31 are admin-down.
>
> 4: Create AP to Station (download) TCP stream, and run for 120 seconds; record
>     throughput as 'maximum_load'.  Stop this connection.
>
> 5: Calculate offered_load as 1% of maximum_load.
>
> 6: Create 4 TCP streams on each active station, each configured for an upload and download rate of
>     offered_load / (4 * active_station_count * 2).
>
> 7: Create 1 UDP stream on each active station, configured for 56 kbps upload and 56 kbps download traffic.
>
> 8: Start all TCP and UDP connections.  Wait 30 seconds to let traffic settle.
>
> 9: Every 10 seconds for 120 seconds, record one-way download latency over the last 10 seconds for each UDP connection.  Depending on test
>     equipment features, this may mean you need to start/stop the UDP connections every 10 seconds or clear the UDP connection
>     counters.
>
> 10: Calculate offered_load as 70% of maximum_load, and repeat steps 6 - 9 inclusive.
>
> 11: Calculate offered_load as 125% of maximum_load, and repeat steps 6 - 9 inclusive.
>
> 12: Allow the other 31 stations to associate, and repeat steps 5 - 11 inclusive with all 32 stations active.
>
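Just to make the per-stream math above concrete (a sketch with a hypothetical 1 Gbps maximum_load, not part of the test spec):

```python
def per_stream_rate_bps(maximum_load_bps, active_stations, load_pct):
    """Per-TCP-stream rate for one load level.

    Each station runs 4 TCP streams, each with both an upload and a
    download rate, so the offered load is split across
    4 * active_stations * 2 directed flows.
    """
    offered_load = maximum_load_bps * load_pct / 100.0
    return offered_load / (4 * active_stations * 2)

# Hypothetical 1 Gbps maximum_load, 32 active stations, three load levels:
for pct in (1, 70, 125):
    print(f"{pct:3d}% load -> {per_stream_rate_bps(1e9, 32, pct):,.1f} bps per stream")
```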
>
> Pass/Fail Criteria
>
> 1: For each test configuration running at 1% of maximum load:  Average of all UDP latency samples must be less than 10ms.
> 2: For each test configuration running at 1% of maximum load:  Maximum of all UDP latency samples must be less than 20ms.
> 3: For each test configuration running at 70% of maximum load:  Average of all UDP latency samples must be less than 20ms.
> 4: For each test configuration running at 70% of maximum load:  Maximum of all UDP latency samples must be less than 40ms.
> 5: For each test configuration running at 125% of maximum load:  Average of all UDP latency samples must be less than 50ms.
> 6: For each test configuration running at 125% of maximum load:  Maximum of all UDP latency samples must be less than 100ms.
> 7: For each test configuration: Each UDP connection upload throughput must be at least 1/2 of requested UDP speed for final 10-second test interval.
> 8: For each test configuration: Each UDP connection download throughput must be at least 1/2 of requested UDP speed for final 10-second test interval.
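The latency thresholds in criteria 1-6 reduce to a small lookup table; a sketch of a checker (limits copied from the text above):

```python
# (avg_limit_ms, max_limit_ms) keyed by load level as % of maximum_load
LATENCY_LIMITS_MS = {1: (10, 20), 70: (20, 40), 125: (50, 100)}

def latency_pass(samples_ms, load_pct):
    """True if a configuration's UDP latency samples meet criteria 1-6."""
    avg_limit, max_limit = LATENCY_LIMITS_MS[load_pct]
    avg = sum(samples_ms) / len(samples_ms)
    return avg < avg_limit and max(samples_ms) < max_limit
```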
>
>
> --
> Ben Greear <greearb@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink



-- 
Fixing Starlink's Latencies: https://www.youtube.com/watch?v=c9gLo6Xrwgw

Dave Täht CEO, TekLibre, LLC

Thread overview: 4+ messages
2021-09-26 22:23 ` Dave Taht [this message]
2022-09-13 18:32         ` Dave Taht
2022-09-13 19:09           ` Ben Greear
2022-09-13 19:25             ` Bob McMahon
