[Make-wifi-fast] SmallNetBuilder article: Does OFDMA Really Work?

Bob McMahon bob.mcmahon at broadcom.com
Fri May 15 16:05:06 EDT 2020


iperf 2.0.14 supports --connect-only tests and also shows the connect
times.  That feature is currently broken, and I plan to fix it soon.

My brother works for NASA. When designing the shuttle, engineers focused
on weight/mass, because the energy required to reach low Earth orbit is
driven by that. My brother, a PhD in fracture mechanics, said that weight
reduction without considering the structural-integrity trade-offs was a mistake.

In that analogy, latency and bloat, while correlated, aren't the same
thing. I think by separating them one can better understand how a system
will perform.  I suspect your tests with 120 ms of latency are really
measuring bloat.  Little's law says average queue depth = average effective
arrival rate * average service time.  Bloat is mostly about excessive queue
depths, and latency is mostly about excessive service times. Since they
affect one another, it's easy to conflate them.
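
A toy back-of-the-envelope sketch of that split (all numbers invented for
illustration):

# Little's law: avg queue depth L = avg arrival rate (lambda) * avg time in system (W).
pkt_bits = 1500 * 8                     # 1500-byte packets
link_bps = 100e6                        # a 100 Mbps bottleneck
arrival_rate = link_bps / pkt_bits      # ~8333 packets/s when the link is saturated
service_time = pkt_bits / link_bps      # ~120 microseconds to serve one packet
observed_delay = 0.120                  # a measured 120 ms "latency"

print(f"service time per packet: {service_time * 1e3:.3f} ms")
print(f"implied avg queue depth: {arrival_rate * observed_delay:.0f} packets")
# ~1000 packets standing in the queue: the 120 ms is dominated by queue depth
# (bloat), not by the per-packet service time (latency).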

Bob



On Fri, May 15, 2020 at 12:50 PM Tim Higgins <tim at smallnetbuilder.com>
wrote:

> Thanks for the additional insights, Bob. How do you measure TCP connects?
>
> Does Dave or anyone else on the bufferbloat team want to comment on Bob's
> comment that latency testing under "heavy traffic" isn't ideal?
>
> My impression is that the rtt_fair_var test I used in the article and
> other RRUL-related Flent tests fully load the connection under test. Am I
> incorrect?
>
> ===
> On 5/15/2020 3:36 PM, Bob McMahon wrote:
>
> Latency testing under "heavy traffic" isn't ideal.  If the input rate
> exceeds the service rate of any queue for any period of time, that queue
> fills up and latency hits the worst case for that queue depth.  I'd say take
> latency measurements when the input rates are below the service rates.
> Measurements taken when the service rates are less than the input rates are
> less about latency and more about bloat.
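>
> A rough sketch of that effect (toy fluid model, made-up rates):
>
> service_rate = 100e6 / 8              # bytes/s the queue can drain (100 Mbps)
> queue_bytes = 0.0
> dt = 0.010                            # 10 ms steps
> # 0.5 s of input below the service rate, then 0.5 s of input above it.
> for step, input_rate in enumerate([80e6 / 8] * 50 + [120e6 / 8] * 50):
>     queue_bytes = max(0.0, queue_bytes + (input_rate - service_rate) * dt)
>     if (step + 1) % 25 == 0:
>         delay_ms = 1e3 * queue_bytes / service_rate
>         print(f"t={(step + 1) * dt:.2f}s  queue={queue_bytes / 1e3:7.1f} kB  delay={delay_ms:6.1f} ms")
> # While input < service the queueing delay stays ~0; once input > service,
> # the delay keeps growing until the buffer caps it at its worst case.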
>
> Also, a good paper is this one on trading bandwidth for ultra-low latency
> <https://people.csail.mit.edu/alizadeh/papers/hull-nsdi12.pdf> using
> phantom queues and ECN.
>
> Another thing to consider is that network engineers tend to have a myopic
> view of latency.  The queueing or delay between the socket writes/reads and
> the network stack matters too.  Network engineers focus on packets or TCP
> RTTs and somewhat overlook a user's true end-to-end experience.  Avoiding
> bloat by slowing down the writes, e.g. via ECN or different scheduling,
> still contributes to end-to-end latency between the writes() and the
> reads(), which too few people test for and monitor.
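>
> A minimal sketch of measuring that write-to-read delay (hypothetical
> helpers; assumes both hosts already share a clock reference):
>
> import socket, struct, time
>
> # Sender: prepend the wall-clock send time to each application write.
> def send_stamped(sock: socket.socket, payload: bytes) -> None:
>     sock.sendall(struct.pack("!d", time.time()) + payload)
>
> # Receiver: write-to-read delay is the read time minus the embedded stamp.
> # Simplified: assumes the 8-byte stamp arrives in the first recv().
> def recv_stamped(sock: socket.socket, nbytes: int = 4096) -> float:
>     data = sock.recv(nbytes)
>     sent_at, = struct.unpack("!d", data[:8])
>     return time.time() - sent_at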
>
> Note: we're moving to trip times from writes to reads (or frames for video)
> for our testing. We are also replacing/supplementing pings with TCP
> connects as additional "latency related" measurements. TCP connects are
> more important than ping.
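>
> For example, a bare connect-time probe looks something like this
> (hypothetical sketch, not the iperf implementation; host and port are
> placeholders):
>
> import socket, time
>
> # Time a TCP three-way handshake to host:port, then close immediately.
> def tcp_connect_time(host: str, port: int, timeout: float = 2.0) -> float:
>     t0 = time.monotonic()
>     with socket.create_connection((host, port), timeout=timeout):
>         pass
>     return time.monotonic() - t0
>
> # e.g. print(f"{tcp_connect_time('192.168.1.1', 80) * 1e3:.1f} ms")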
>
> Bob
>
> On Fri, May 15, 2020 at 8:20 AM Tim Higgins <tim at smallnetbuilder.com>
> wrote:
>
>> Hi Bob,
>>
>> Thanks for your comments and feedback. Responses below:
>>
>> On 5/14/2020 5:42 PM, Bob McMahon wrote:
>>
>> Also, I forgot to mention: for latency, don't rely on the average, as most
>> users don't care about that.  Maybe use the upper 3 stdev, i.e. the 99.97%
>> point.  Our latency runs repeat 20 seconds' worth of packets, find that
>> tail point, and then calculate CDFs of it across hundreds of runs under
>> different conditions. One "slow packet" is all it takes to screw up the
>> user experience when it comes to latency.
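>>
>> A small sketch of pulling that tail point out of one run's samples (numpy
>> assumed; the names are made up):
>>
>> import numpy as np
>>
>> def tail_point(latencies_ms, pct=99.97):
>>     # One run's per-packet latencies -> the 99.97% tail point.
>>     return float(np.percentile(np.asarray(latencies_ms), pct))
>>
>> # Across hundreds of runs, collect tail_point() per run and plot the CDF
>> # of those values rather than comparing per-run averages.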
>>
>> Thanks for the guidance.
>>
>>
>> On Thu, May 14, 2020 at 2:38 PM Bob McMahon <bob.mcmahon at broadcom.com>
>> wrote:
>>
>>> I haven't looked closely at OFDMA but these latency numbers seem way too
>>> high for it to matter.  Why is the latency so high?  It suggests there may
>>> be queueing delay (bloat) unrelated to media access.
>>>
>>> Also, one aspect is that OFDMA replaces EDCA with AP scheduling per
>>> trigger frame.  EDCA kinda sucks because of listen-before-talk, which
>>> costs about 100 microseconds on average and has to be paid even when
>>> there is no energy detect.  This limits transmits-per-second performance
>>> to 10K (1/0.0001). Also remember that WiFi aggregates, so transmissions
>>> carry multiple packets, and long transmits will consume those 10K tx ops.
>>> One way to get around aggregation is to use the voice (VO) access class,
>>> which many devices won't aggregate (mileage will vary). Then take a
>>> packets-per-second measurement with small packets.  This would give an
>>> idea of whether the frame scheduling is AP-based vs EDCA.
>>>
>>> Also, measuring ping time as a proxy for latency isn't ideal. Better to
>>> measure trip times of the actual traffic.  This requires clock sync to a
>>> common reference. GPS atomic clocks are available but it does take some
>>> setup work.
>>>
>>> I haven't thought about RU optimizations and that testing, so I can't
>>> really comment there.
>>>
>>> Also, I'd consider replacing the mechanical turntable with variable
>>> phase shifters placed in the MIMO (or H-Matrix) path.  I use model
>>> 8421 from Aeroflex
>>> <https://www.apitech.com/globalassets/documents/products/rf-microwave-microelectronics-power-solutions/rf-components/phase-shifter-subsystem/wmod84208421.pdf>.
>>> Others make them too.
>>>
>> Thanks again for the suggestions. I agree latency is very high when I
>> remove the traffic bandwidth caps. I don't know why. One of the key
>> questions I've had since starting to mess with OFDMA is whether it helps
>> under light or heavy traffic load. All I do know is that things go to hell
>> when you load the channel. And RRUL test methods essentially break OFDMA.
>>
>> I agree using ping isn't ideal. But I'm approaching this as creating a
>> test that a consumer audience can understand. Ping is something consumers
>> care about and understand.  The octoScope STApals are all NTP-synced, and
>> latency measurements using iperf have been done with them.
>>
>>
>>
>