[Starlink] RFC: Latency test case text and example report.
Ben Greear
greearb at candelatech.com
Tue Sep 13 11:58:14 EDT 2022
On 9/13/22 8:39 AM, Dave Taht wrote:
> Hey, Ben, I'm curious if this test made it into TR398? Is it possible
> to set up some of this or parts of TR398 to run over Starlink?
>
> I'm also curious whether any commercial AX APs were testing out
> better than when you tested about this time last year. I've just gone
> through 9 months of pure hell getting openwrt's implementation of the
> mt76 and ath10k to multiplex a lot better, and making some forward
> progress again (
> https://forum.openwrt.org/t/aql-and-the-ath10k-is-lovely/59002/830 )
> and along the way ran into new problems with location scanning and
> Apple's AirDrop....
>
> but I just got a batch of dismal results back from the ax210 and
> mt79... tell me that there's an AP shipping from someone that scales a
> bit better? Lie if you must...
An MT7915-based AP running recent OpenWrt did better than the others.
http://www.candelatech.com/examples/TR-398v2-2022-06-05-08-28-57-6.2.6-latency-virt-sta-new-atf-c/
The test was at least tentatively accepted into TR-398v3, but I don't think anyone other than ourselves has implemented
or tested it. I think the pass/fail limits will need to be adjusted to make the test easier to pass. Some APs were showing
multiple seconds of latency, so maybe a few hundred ms is really OK.
The test should be able to run over a WAN if desired, though it would take a bit
of extra setup to place an upstream LANforge endpoint on a cloud VM.
If someone at SpaceX wants to run this test, please contact me off-list and we can help
make it happen.
Thanks,
Ben
>
> On Sun, Sep 26, 2021 at 2:59 PM Ben Greear <greearb at candelatech.com> wrote:
>>
>> I have been working on a latency test that I hope can be included in the TR398 issue 3
>> document. It is based somewhat on Toke's paper on bufferbloat and latency testing,
>> with the notable change that I run it on 32 stations in part of the test.
>>
>> I implemented this test case, and an example run against an enterprise-grade AX AP
>> is here. There could still be bugs in my implementation, but I think it is at least
>> close to correct:
>>
>> http://www.candelatech.com/examples/tr398v3-latency-report.pdf
>>
>> TL;DR: Runs OK with a single station, but shows 1+ second one-way latency with 32 stations under high load, and the UDP
>> connections often see no throughput at all, I guess due to too many packets being lost.
>> I hope to run against some cutting-edge OpenWrt APs soon.
>>
>> One note on TCP latency: this is the time to transmit a 64 KB chunk of data over TCP, not a single
>> frame.
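>>
>> To illustrate what that measurement means, here is a minimal sketch in plain Python
>> sockets (my own illustration, not the LANforge implementation): time how long it takes
>> to push a 64 KB chunk through a TCP connection and get an application-level ack back.
>> It runs over loopback just to be self-contained; over a real DUT the same timing would
>> also include queueing and air-time delays.
>>
>> # Minimal sketch: time a 64 KB chunk over TCP (loopback stand-in for the DUT path).
>> import socket, threading, time
>>
>> CHUNK = 64 * 1024  # "TCP latency" here is per 64 KB chunk, not per frame
>>
>> def receiver(listener):
>>     conn, _ = listener.accept()
>>     with conn:
>>         got = 0
>>         while got < CHUNK:
>>             data = conn.recv(65536)
>>             if not data:
>>                 return
>>             got += len(data)
>>         conn.sendall(b"A")  # application-level ack once the whole chunk arrived
>>
>> listener = socket.create_server(("127.0.0.1", 0))
>> threading.Thread(target=receiver, args=(listener,), daemon=True).start()
>>
>> with socket.create_connection(listener.getsockname()) as s:
>>     start = time.monotonic()
>>     s.sendall(b"\x00" * CHUNK)  # one 64 KB chunk
>>     s.recv(1)                   # wait for the receiver's ack
>>     print(f"64 KB chunk time: {(time.monotonic() - start) * 1000:.2f} ms")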
>>
>> My testbed used 32 Intel AX210 radios as stations in this test.
>>
>> I am interested in feedback from this list if anyone has opinions.
>>
>> Here is the text of the test case:
>>
>> The Latency test verifies latency under low, high, and maximum AP traffic load, with
>> 1 and 32 stations. The traffic load is 4 bi-directional TCP streams for each station, plus a
>> low-speed UDP connection to probe latency.
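>>
>> As a concrete illustration of the per-station load (names and data structures here are
>> my own, not TR-398 text), each active station would carry four bi-directional TCP
>> streams plus one 56 kbps bi-directional UDP latency probe:
>>
>> # Illustrative per-station traffic plan (my own naming, not TR-398 text).
>> from dataclasses import dataclass
>>
>> @dataclass
>> class Stream:
>>     proto: str       # "tcp" or "udp"
>>     up_bps: float    # requested upload rate, bits/sec
>>     down_bps: float  # requested download rate, bits/sec
>>
>> def station_plan(per_tcp_stream_bps):
>>     tcp = [Stream("tcp", per_tcp_stream_bps, per_tcp_stream_bps) for _ in range(4)]
>>     probe = Stream("udp", 56_000, 56_000)  # low-speed latency probe
>>     return tcp + [probe]
>>
>> # Example: 1 Mbps per TCP stream direction on one station.
>> for s in station_plan(1_000_000):
>>     print(s)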
>>
>> Test Procedure
>>
>> The DUT should be configured for 20 MHz channels on 2.4 GHz and 80 MHz channels on 5 GHz, and stations should use
>> two spatial streams.
>>
>> 1: For each combination of: 2.4 GHz N, 5 GHz AC, 2.4 GHz AX, 5 GHz AX:
>>
>> 2: Configure attenuators to emulate 2-meter distance between stations and AP.
>>
>> 3: Create 32 stations and allow one to associate with the DUT. The other 31 are admin-down.
>>
>> 4: Create an AP-to-station (download) TCP stream and run it for 120 seconds, recording the
>> throughput as 'maximum_load'. Stop this connection.
>>
>> 5: Calculate offered_load as 1% of maximum_load.
>>
>> 6: Create 4 TCP streams on each active station, each configured for an upload and download rate of
>> offered_load / (4 * active_station_count * 2). (See the sketch after this procedure.)
>>
>> 7: Create 1 UDP stream on each active station, configured for 56 kbps upload and 56 kbps download traffic.
>>
>> 8: Start all TCP and UDP connections. Wait 30 seconds to let traffic settle.
>>
>> 9: Every 10 seconds for 120 seconds, record the one-way download latency over the last 10 seconds for each UDP connection. Depending on test
>> equipment features, this may mean you need to restart the UDP connections every 10 seconds or clear the UDP connection
>> counters.
>>
>> 10: Calculate offered_load as 70% of maximum_load, and repeat steps 6 - 9 inclusive.
>>
>> 11: Calculate offered_load as 125% of maximum_load, and repeat steps 6 - 9 inclusive.
>>
>> 12: Allow the other 31 stations to associate, and repeat steps 5 - 11 inclusive with all 32 stations active.
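>>
>> To make the offered-load arithmetic in steps 5, 6, 10, and 11 concrete, here is a small
>> sketch (the helper name and the 600 Mbps example value are mine; maximum_load is assumed
>> to be in bits per second):
>>
>> # Sketch of the per-stream rate arithmetic from steps 5-6 (example values are mine).
>> def per_tcp_stream_rate(maximum_load_bps, load_fraction, active_station_count):
>>     offered_load = maximum_load_bps * load_fraction
>>     # 4 TCP streams per station, each counted once for upload and once for download.
>>     return offered_load / (4 * active_station_count * 2)
>>
>> # Example: a DUT that measured 600 Mbps in step 4.
>> for stations in (1, 32):
>>     for frac in (0.01, 0.70, 1.25):
>>         rate = per_tcp_stream_rate(600e6, frac, stations)
>>         print(f"{stations:2d} stations @ {frac:>4.0%}: {rate / 1e6:.2f} Mbps per TCP stream direction")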
>>
>>
>> Pass/Fail Criteria
>>
>> 1: For each test configuration running at 1% of maximum load: the average of all UDP latency samples must be less than 10 ms.
>> 2: For each test configuration running at 1% of maximum load: the maximum of all UDP latency samples must be less than 20 ms.
>> 3: For each test configuration running at 70% of maximum load: the average of all UDP latency samples must be less than 20 ms.
>> 4: For each test configuration running at 70% of maximum load: the maximum of all UDP latency samples must be less than 40 ms.
>> 5: For each test configuration running at 125% of maximum load: the average of all UDP latency samples must be less than 50 ms.
>> 6: For each test configuration running at 125% of maximum load: the maximum of all UDP latency samples must be less than 100 ms.
>> 7: For each test configuration: each UDP connection's upload throughput must be at least 1/2 of the requested UDP speed for the final 10-second test interval.
>> 8: For each test configuration: each UDP connection's download throughput must be at least 1/2 of the requested UDP speed for the final 10-second test interval.
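>>
>> For clarity, here is a sketch of how these criteria could be checked from the recorded
>> samples (the thresholds come from the list above; the data structures and example
>> values are my own):
>>
>> # Sketch of the pass/fail checks (thresholds from the criteria above).
>> LATENCY_LIMITS_MS = {   # load fraction -> (average limit, maximum limit) in ms
>>     0.01: (10, 20),
>>     0.70: (20, 40),
>>     1.25: (50, 100),
>> }
>>
>> def latency_pass(load_fraction, samples_ms):
>>     avg_limit, max_limit = LATENCY_LIMITS_MS[load_fraction]
>>     avg = sum(samples_ms) / len(samples_ms)
>>     return avg < avg_limit and max(samples_ms) < max_limit
>>
>> def udp_throughput_pass(requested_bps, final_interval_bps):
>>     # Criteria 7-8: the final 10-second interval must carry at least half the requested rate.
>>     return final_interval_bps >= requested_bps / 2
>>
>> # Example: one configuration at 70% load.
>> samples = [12.0, 15.5, 18.9, 14.2]          # one-way UDP latency samples, ms
>> print(latency_pass(0.70, samples))          # True
>> print(udp_throughput_pass(56_000, 31_000))  # True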
>>
>>
>> --
>> Ben Greear <greearb at candelatech.com>
>> Candela Technologies Inc http://www.candelatech.com
>
>
>
--
Ben Greear <greearb at candelatech.com>
Candela Technologies Inc http://www.candelatech.com