<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<font face="Times New Roman, Times, serif">Thanks for the additional
insights, Bob. How do you measure TCP connect times?<br>
<br>
Does Dave or anyone else on the bufferbloat team want to weigh in
on Bob's point that latency testing under "heavy traffic" isn't
ideal?<br>
<br>
My impression is that the rtt_fair_var test I used in the article
and other RRUL-related Flent tests fully load the connection under
test. Am I incorrect?<br>
<br>
</font>
<div class="moz-signature">
<div><font size="-1" face="Times New Roman, Times, serif">===</font></div>
</div>
<div class="moz-cite-prefix">On 5/15/2020 3:36 PM, Bob McMahon
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAHb6LvqZqeQjdmkvUtg=Qd9RhjyL2=3jxJ-oRmBs1KOXz6tNpQ@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">Latency testing under "heavy traffic" isn't ideal.
If the input rate exceeds the service rate of any queue for any
period of time the queue fills up and latency hits a worst case
per that queue depth. I'd say take latency measurements when
the input rates are below the service rates. The measurements
when service rates are less than input rates are less
about latency and more about bloat.<br>
<br>
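A rough back-of-the-envelope sketch of that point, with
illustrative numbers (a hypothetical buffer size and drain rate,
not from any particular device):<br>
<pre>
# Worst-case standing latency once a queue saturates:
# latency ~= queue_depth / service_rate (the drain rate).
# Numbers below are illustrative only.

def worst_case_latency_ms(queue_depth_bytes, service_rate_bps):
    """Time to drain a full queue at the service rate, in ms."""
    return queue_depth_bytes * 8 / service_rate_bps * 1000.0

# e.g. a 256 KB buffer draining at 50 Mbit/s holds ~42 ms of standing delay
print(worst_case_latency_ms(256 * 1024, 50e6))  # ~41.9
</pre>
<br>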
Also, a good paper is <a
href="https://people.csail.mit.edu/alizadeh/papers/hull-nsdi12.pdf"
moz-do-not-send="true">this one on trading bandwidth for ultra
low latency </a>using phantom queues and ECN.<br>
<br>
Another thing to consider is that network engineers tend to have
a myopic view of latency. The queueing delay between the
socket writes/reads and the network stack matters too. Network
engineers focus on packets or TCP RTTs and somewhat overlook a
user's true end-to-end experience. Avoiding bloat by slowing
down the writes, e.g. via ECN or different scheduling, still
contributes to end-to-end latency between the writes() and the
reads(), which too few test for and monitor.<br>
<br>
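A minimal sketch of measuring that write()-to-read() trip time
over TCP (the port is a placeholder, and one-way numbers only
mean something if the two clocks are synced, e.g. via NTP, PTP,
or GPS):<br>
<pre>
# write()-to-read() trip-time sketch over TCP. One-way times are only
# meaningful if sender and receiver clocks are synced (NTP/PTP/GPS).
# The port below is a placeholder.
import socket, struct, time

PORT = 5201  # placeholder

def sender(host, n=100):
    with socket.create_connection((host, PORT)) as s:
        for _ in range(n):
            s.sendall(struct.pack("!d", time.time()))  # 8-byte send timestamp
            time.sleep(0.01)

def recv_exact(conn, n):
    """Read exactly n bytes, or return None if the peer closed."""
    data = b""
    while len(data) != n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            return None
        data += chunk
    return data

def receiver():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while True:
                buf = recv_exact(conn, 8)
                if buf is None:
                    break
                sent = struct.unpack("!d", buf)[0]
                print("write-to-read %.2f ms" % ((time.time() - sent) * 1e3))
</pre>
<br>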
Note: We're moving to write-to-read trip times (or frame trip
times for video) for our testing. We are also
replacing/supplementing pings with TCP connects as another
"latency related" measurement. TCP connects are more important
than ping.<br>
<br>
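For what it's worth, a minimal sketch of timing TCP connects that
way (the host and port are placeholders):<br>
<pre>
# Time the TCP three-way handshake (connect()) as a latency sample.
# Host and port are placeholders.
import socket, time

def tcp_connect_ms(host, port=80):
    t0 = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # close right away; we only wanted the handshake
    return (time.perf_counter() - t0) * 1e3

samples = [tcp_connect_ms("example.com") for _ in range(10)]
print("min %.1f ms, max %.1f ms" % (min(samples), max(samples)))
</pre>
<br>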
Bob</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Fri, May 15, 2020 at 8:20
AM Tim Higgins <<a href="mailto:tim@smallnetbuilder.com"
moz-do-not-send="true">tim@smallnetbuilder.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div> <font size="-1"><font face="Times New Roman, Times,
serif">Hi Bob,<br>
<br>
</font></font>
<div><font size="-1"><font face="Times New Roman, Times,
serif">Thanks for your comments and feedback.
Responses below:<br>
<br>
</font></font> </div>
<div>On 5/14/2020 5:42 PM, Bob McMahon wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Also, forgot to mention, for latency don't
rely on average as most don't care about that. Maybe
use the upper 3 stdev, i.e. the 99.97% point. Our
latency runs will repeat 20 seconds worth of packets and
find that then calculate CDFs of this point in the tail
across hundreds of runs under different conditions. One
"slow packet" is all that it takes to screw up user
experience when it comes to latency. <br>
<br>
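A minimal sketch of those two tail statistics, given a list of
per-packet latency samples (the sample data here is made up):<br>
<pre>
# Tail latency: mean + 3*stdev vs. the empirical 99.97% point.
# The sample list is made up for illustration.
import statistics

def tail_stats(samples_ms):
    mean = statistics.mean(samples_ms)
    stdev = statistics.stdev(samples_ms)
    ordered = sorted(samples_ms)
    # index of the 99.97% point (the upper 3-sigma point for a normal dist)
    idx = min(len(ordered) - 1, int(0.9997 * len(ordered)))
    return mean + 3 * stdev, ordered[idx]

three_sigma, p9997 = tail_stats([1.2, 1.3, 1.1, 9.8] * 250)  # made-up data
print("mean+3*stdev %.2f ms, p99.97 %.2f ms" % (three_sigma, p9997))
</pre>
<br>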
</div>
</blockquote>
<font size="-1"><font face="Times New Roman, Times, serif">Thanks
for the guidance.<br>
</font></font>
<blockquote type="cite"><br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Thu, May 14, 2020
at 2:38 PM Bob McMahon <<a
href="mailto:bob.mcmahon@broadcom.com"
target="_blank" moz-do-not-send="true">bob.mcmahon@broadcom.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div dir="ltr">I haven't looked closely at OFDMA but
these latency numbers seem way too high for it to
matter. Why is the latency so high? It suggests
there may be queueing delay (bloat) unrelated to
media access.<br>
<br>
Also, one aspect is that OFDMA replaces EDCA
with AP scheduling via trigger frames. EDCA kinda
sucks because of listen-before-talk, which costs about 100
microseconds on average and has to be paid even
when there is no energy detect. This limits performance to
about 10K transmits per second (1/0.0001). Also
remember that WiFi aggregates, so transmissions carry
multiple packets, and long transmits will consume
those 10K tx ops. One way to get around aggregation
is to use the voice (VO) access class, which many devices
won't aggregate (mileage will vary). Then take a
packets-per-second measurement with small packets.
This would give an idea of whether frame scheduling
is AP-based or EDCA. <br>
<br>
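The arithmetic behind that 10K ceiling, as a quick check:<br>
<pre>
# EDCA listen-before-talk costs ~100 microseconds per transmit
# opportunity, paid even when no energy is detected.
lbt_s = 100e-6                 # average listen-before-talk cost, seconds
txops_per_sec = 1 / lbt_s      # 1 / 0.0001 = 10,000 tx ops/sec ceiling
pkts_per_txop = 1              # VO access class: little or no aggregation
print(txops_per_sec * pkts_per_txop)  # ~10,000 small packets/sec
</pre>
<br>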
Also, measuring ping time as a proxy for latency
isn't ideal. It's better to measure trip times of the
actual traffic. This requires clock sync to a
common reference. GPS atomic clocks are available,
but they do take some setup work.<br>
<br>
I haven't thought about RU optimizations and that
testing so can't really comment there. <br>
<br>
Also, I'd consider replacing the mechanical turntable
with variable phase shifters set in
the MIMO (or H-Matrix) path. I use <a
href="https://www.apitech.com/globalassets/documents/products/rf-microwave-microelectronics-power-solutions/rf-components/phase-shifter-subsystem/wmod84208421.pdf"
target="_blank" moz-do-not-send="true">model 8421
from Aeroflex</a>. Others make them too.<br>
<br>
</div>
</blockquote>
</div>
</blockquote>
<font size="-1"><font face="Times New Roman, Times, serif">Thanks
again for the suggestions. I agree latency is very high
when I remove the traffic bandwidth caps. I don't know
why. One of the key questions I've had since starting to
mess with OFDMA is whether it helps under light or heavy
traffic load. All I do know is that things go to hell
when you load the channel. And RRUL test methods
essentially break OFDMA.<br>
<br>
I agree using ping isn't ideal. But I'm approaching this
as creating a test that a consumer audience can
understand, and ping is something consumers care about and
understand. The octoScope STApals are all NTP-synced,
and latency measurements have been done with them
using iperf.<br>
<br>
<br>
</font></font> </div>
</blockquote>
</div>
</blockquote>
<br>
</body>
</html>