<div dir="ltr">Here's a deck on RF topologies. This is not enough. Folks need to understand multivariate analysis and multivariate statistics, covariance matrices, eigenvalues & eigenvectors, etc. Know how to program in python. Have expertise in every OS. And the list goes on. <br><br>Wi-Fi is at 20B devices on the way to 100B and more. It's a huge undertaking and is expensive.<br><br>Bob</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Sep 2, 2024 at 10:28 AM Bob McMahon <<a href="mailto:bob.mcmahon@broadcom.com">bob.mcmahon@broadcom.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">This is David's experience. It doesn't extrapolate to the industry. Our testing as a component supplier is quite extensive. The level of math required likely equals ML. The table stakes for a 2 BSS system with hidden nodes, etc is $80K. That's just equipment. Then test engineers with deep expertise of 802.11 have to be hired. And they have to continuously learn as 802.11 is a living standard. And now they need to learn CCAs and network marking planes. Then this all has to be paid for typically through component sells as there are no software SKUs. <div dir="auto"><br></div><div dir="auto">The cadences for new ASICs is 24 months. The cadences for OSP upgrades is 10 to 20 years. <div dir="auto"><br></div><div dir="auto">Of course testing is under funded. No stock b.s. to pay the bills. It has to come from discounted cash flows.</div><div dir="auto"><br></div><div dir="auto">Everyone wants the experts to work for free. Iperf2 is that already. I don't see any more freebies on the horizon.</div><div dir="auto"><div dir="auto"><br></div><div dir="auto">Bob</div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Sep 1, 2024, 10:05 PM David Lang via Make-wifi-fast <<a href="mailto:make-wifi-fast@lists.bufferbloat.net" target="_blank">make-wifi-fast@lists.bufferbloat.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Sun, 1 Sep 2024, Hal Murray via Make-wifi-fast wrote:<br>
<br>
> David Lang said:
>> It really doesn't help that everyone in the industry is pushing for
>> higher bandwidth for a single host. That's a nice benchmark number, but
>> not really relevant in the real world.
>
>> Even MU-MIMO is of limited use as most routers only handle a handful of
>> clients.
>
>> But the biggest problem is just the push to use wider channels and gain
>> efficiency in long-running bulk transfers by bundling lots of IP packets
>> into a single transmission. This works well when you don't have
>> congestion and have a small number of clients. But when you have lots of
>> clients, spanning many generations of wifi technology, you need to go to
>> narrower channels, but more separate routers to maximize the fairness of
>> available airtime.
>
> What does that say about the minimal collection of gear required in a test
> lab?
>
> If you had a lab with plenty of gear, what tests would you run?

I'll start off by saying that my experience is from practical, in-the-field
use, deploying wifi to support thousands of users in a conference setting.
It's possible that some people are doing the tests I describe below in their
labs, but from the way routers and wifi standards are advertised, and the way
the guides to deploying them are written, it doesn't seem like they are.

My belief is that most of the tests are done in relatively clean RF
environments where only the devices on the test network exist, and they can
always hear everyone on the network. In such environments, everything about
existing wifi standards and the push for higher-bandwidth channels makes a
lot of sense (there are still some latency problems).

But the world outside the lab is far more complex.

You need to simulate a dispersed, congested RF environment. This includes
hidden transmitters (stations A-B-C where B can hear A and C but A and C
cannot hear each other), dealing with weak signals (already covered),
interactions of independent networks on the same channels (a-b and c-d that
cannot talk to each other), legacy equipment on the network (as slow as
802.11g at least, if not 802.11b to simulate old IoT devices), and a mix of
bulk transfers (downloads/uploads), buffered streaming (constant traffic,
but buffered so not super-sensitive to latency), unbuffered streaming (low
bandwidth, but sensitive to latency), and short, latency-sensitive traffic
(things that block other traffic until they are answered, like DNS, http
cache checks, http main pages that then pull in lots of other URLs, etc.)

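A minimal sketch of how such a traffic mix might be driven with iperf2; the
server address is hypothetical and the flags are the common iperf 2.x ones
(-c, -u, -b, -t, -P, -i), so check them against your build. Short
request/response traffic (DNS lookups, small http fetches) would need a
separate generator.

#!/usr/bin/env python3
"""Sketch: launch a mix of traffic classes against an iperf2 server."""
import subprocess

SERVER = "192.168.1.10"   # hypothetical wired-side iperf2 server
DURATION = "300"          # seconds per scenario run

# each entry approximates one of the traffic classes described above
FLOWS = [
    # bulk transfer: several parallel TCP streams, run flat out
    ("bulk",     ["iperf", "-c", SERVER, "-t", DURATION, "-P", "4"]),
    # buffered streaming: steady paced UDP, latency-tolerant
    ("buffered", ["iperf", "-c", SERVER, "-u", "-b", "5M",
                  "-t", DURATION, "-i", "1"]),
    # unbuffered streaming: low rate, latency-sensitive (voice-like)
    ("realtime", ["iperf", "-c", SERVER, "-u", "-b", "100K",
                  "-t", DURATION, "-i", "1"]),
]

def run_mix():
    """Start all flows at once so they contend for airtime together."""
    procs = [(label, subprocess.Popen(args)) for label, args in FLOWS]
    for label, proc in procs:
        proc.wait()
        print(f"{label}: iperf exited with {proc.returncode}")

if __name__ == "__main__":
    run_mix()
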
Test a large number of people in a given area (start with an all-wireless
office, then move on to classroom density). Test not just one room, but
multiple rooms that partially hear each other (the amount of attenuation or
reflection between the rooms needs to vary). The ultimate density test would
be a stadium-type setting where you have rows of chairs but no tables, and
everyone is trying to livestream (or view a livestream) at once.

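As a rough illustration of why airtime fairness matters at classroom or
stadium density, here is a back-of-the-envelope sketch; the station count
and PHY rates are assumptions I made up, and all MAC overhead, retries, and
management traffic are ignored.

"""Toy comparison of per-frame vs airtime fairness on one channel."""

def per_frame_fairness(rates_mbps):
    # every station sends equal-sized frames in turn, so the slow station
    # holds the channel longest and everyone converges to the same rate
    per_sta = 1.0 / sum(1.0 / r for r in rates_mbps)
    return [per_sta] * len(rates_mbps)

def airtime_fairness(rates_mbps):
    # every station gets an equal share of time, so each station's
    # throughput is its own PHY rate divided by the station count
    n = len(rates_mbps)
    return [r / n for r in rates_mbps]

# hypothetical classroom: 29 modern clients plus one 802.11b-era device
rates = [400.0] * 29 + [11.0]

for name, model in (("per-frame", per_frame_fairness),
                    ("airtime", airtime_fairness)):
    shares = model(rates)
    print(f"{name:9s} fast={shares[0]:6.1f} Mb/s  "
          f"legacy={shares[-1]:5.2f} Mb/s  total={sum(shares):6.1f} Mb/s")
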
Test not just ultra-wide channels with a single AP in the rooms, but also
narrower channels with multiple APs distributed around the rooms. Test APs
positioned high and set to high power to give large coverage areas against
APs positioned low (signals get absorbed by people, so channels can be
reused at shorter distances) and set to low power (the microcell approach).
Test APs overhead with directional antennas so they cover a small footprint.

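A toy link budget can make the big-cell vs microcell trade-off concrete.
This sketch assumes free-space path loss plus a single extra-loss fudge
factor standing in for bodies and walls; the dBm figures are invented for
illustration, not vendor specs.

"""Rough coverage-radius comparison for high-power vs low-power APs."""
import math

def coverage_radius_m(tx_dbm, rx_sensitivity_dbm, freq_hz,
                      antenna_gain_dbi=0.0, extra_loss_db=0.0):
    # free-space path loss (dB) = 20*log10(d_m) + 20*log10(f_Hz) - 147.55;
    # solve for the largest d at which the received level meets sensitivity
    allowed_loss = tx_dbm + antenna_gain_dbi - rx_sensitivity_dbm - extra_loss_db
    return 10 ** ((allowed_loss + 147.55 - 20 * math.log10(freq_hz)) / 20)

freq = 5.2e9          # a 5 GHz channel
sensitivity = -65.0   # dBm assumed for a mid-range MCS
crowd_loss = 15.0     # dB assumed for body/wall absorption

big_cell  = coverage_radius_m(20.0, sensitivity, freq, extra_loss_db=crowd_loss)
microcell = coverage_radius_m(5.0, sensitivity, freq, extra_loss_db=crowd_loss)
print(f"high-power AP reaches ~{big_cell:.0f} m, low-power AP ~{microcell:.0f} m")
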
Test with different types of walls around/between the rooms: the metal studs
and sheetrock of a modern office have very little effect on signals, the
stone/brick walls of old buildings (and concrete walls in some areas of new
buildings) absorb the signal, and the metal grid in movable air walls blocks
and reflects signals.

Remember that these are operating in 'unlicensed' spectrum, so you can have
other devices operating there as well, causing periodic interference (which
could show up as short bursts of corruption or just an increased noise
floor). Current wifi standards interpret any failed transmission as a weak
signal, so they drop down to a slower modulation or increase power in the
hope of getting the signal through. If the problem is actually interference
from other devices (especially other APs that it can't decipher), the result
is that all stations end up yelling slowly to try to get through, producing
very high levels of noise and no messages getting through. Somehow, the
systems should detect that the noise floor is high and/or that there is
other stuff happening on the network that they can hear but not necessarily
decipher, switch away from the 'weak signal' mode of operation (which is
appropriate in sparse environments), and instead work to talk faster and at
lower power to try to reduce the overall interference while still getting
their signal through. (It does no good for one station to be transmitting at
3 W while the station it's talking to is transmitting at 50 mW.) As far as I
know, there is currently no way for stations to signal what power they are
using (and the effective power would be modified by the antenna system, both
transmitted and received), so maybe something like "I'm transmitting at 50%
of my max and I hear you at 30% with noise at 10%" <-> "I'm transmitting at
100% of my max and I hear you at 80% with noise at 30%" could cause the
first station to cut down on its power until the two are hearing each other
at similar levels (pure speculation here, a suggestion for research ideas).

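To make that speculation concrete, here is a toy sketch of the kind of
classification such a station might attempt; the input metrics and
thresholds are invented for illustration and do not reflect any real driver
or firmware behaviour.

"""Toy heuristic: do transmit failures look like a weak link or interference?"""
from dataclasses import dataclass

@dataclass
class LinkStats:
    failure_rate: float     # fraction of frames not ACKed
    rssi_dbm: float         # signal level of the frames we do receive
    noise_floor_dbm: float  # estimated noise/interference floor
    cca_busy_frac: float    # fraction of time the channel reads as busy

def suggest_response(s: LinkStats) -> str:
    snr = s.rssi_dbm - s.noise_floor_dbm
    if s.failure_rate < 0.1:
        return "link ok: keep current rate and power"
    # failures despite a strong received signal and a busy/raised floor look
    # like interference or hidden-node collisions, not a weak link
    if snr > 25 and (s.cca_busy_frac > 0.4 or s.noise_floor_dbm > -90):
        return "interference-like: keep a fast MCS, reduce power, shorten TXOPs"
    # failures with genuinely low SNR look like range/attenuation
    return "weak-signal-like: drop to a slower MCS and/or raise power"

print(suggest_response(LinkStats(0.4, -45.0, -82.0, 0.6)))   # congested room
print(suggest_response(LinkStats(0.4, -88.0, -98.0, 0.1)))   # far-away client
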
> How many different tests would it take to give reasonable coverage?

That's hard for me to say, and not every device needs to go through every
test. But when working on a new standard, it needs to go through a lot of
these tests; the most important ones, IMHO, are how devices work with a high
density of users accessing multiple routers that are distributed so there is
overlapping coverage, with a mix of network traffic.

David Lang
_______________________________________________
Make-wifi-fast mailing list
Make-wifi-fast@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/make-wifi-fast