[Bloat] [Make-wifi-fast] [Starlink] bloat on wifi8 and 802.11 wg
David Lang
david at lang.hm
Mon Sep 2 23:20:29 EDT 2024
Bob McMahon wrote:
> This is David's experience. It doesn't extrapolate to the industry.
In case I didn't make it clear: I have no inside information on the industry. I
am operating from the point of view of a consumer using the products and looking
at how they are advertised and how they work in practice.
My frustration is less with the product manufacturers than it is with the
standards people. I don't expect a product manufacturer to test beyond 'does it
meet the standard' (with some interoperability testing to see whether everyone
is interpreting the standard the same way).
But the people drafting the standards (which may include some of the
manufacturers) do need to be doing the more expensive and extensive testing for
real-world conditions, not just the easier-to-test lab conditions. And that's
where I don't see much progress over the years.
If you live in a house on a 1-acre lot or larger, the current standards will
work well for you. In a school or apartment building, there seems to have been a
lack of progress at the standards level (and therefore at the product level)
since the time the OLPC first attempted to do serious work in high-density
environments.
David Lang
> Our
> testing as a component supplier is quite extensive. The level of math
> required likely equals ML. The table stakes for a 2 BSS system with hidden
> nodes, etc. is $80K. That's just equipment. Then test engineers with deep
> expertise in 802.11 have to be hired. And they have to continuously learn
> as 802.11 is a living standard. And now they need to learn CCAs and network
> marking planes. Then this all has to be paid for, typically through
> component sales, as there are no software SKUs.
>
> The cadence for new ASICs is 24 months. The cadence for OSP upgrades is
> 10 to 20 years.
>
> Of course testing is underfunded. No stock b.s. to pay the bills. It has
> to come from discounted cash flows.
>
> Everyone wants the experts to work for free. Iperf2 is that already. I
> don't see any more freebies on the horizon.
>
> Bob
>
> On Sun, Sep 1, 2024, 10:05 PM David Lang via Make-wifi-fast <
> make-wifi-fast at lists.bufferbloat.net> wrote:
>
>> On Sun, 1 Sep 2024, Hal Murray via Make-wifi-fast wrote:
>>
>>> David Lang said:
>>>> It really doesn't help that everyone in the industry is pushing for
>>>> higher bandwidth for a single host. That's a nice benchmark number, but
>>>> not really relevant in the real world.
>>>
>>>> Even mu-mimo is of limited use as most routers only handle a handful of
>>>> clients.
>>>
>>>> But the biggest problem is just the push to use wider channels and gain
>>>> efficiency in long-running bulk transfers by bundling lots of IP packets
>>>> into a single transmission. This works well when you don't have
>>>> congestion and have a small number of clients. But when you have lots
>> of
>>>> clients, spanning many generations of wifi technology, you need to go
>> to
>>>> narrower channels, but more separate routers to maximize the fairness
>> of
>>>> available airtime.
>>>
>>> What does that say about the minimal collection of gear required in a
>> test
>>> lab?
>>>
>>> If you had a lab with plenty of gear, what tests would you run?
>>
>> I'll start off by saying that my experience is from practical in-the-field
>> use, deploying wifi to support thousands of users in a conference setting.
>> It's possible that some people are doing the tests I describe below in
>> their labs, but from the way routers and wifi standards are advertised and
>> the guides to deploying them are written, it doesn't seem like they are.
>>
>> My belief is that most of the tests are done in relatively clean RF
>> environments where only the devices on the test network exist, and they
>> can always hear everyone on the network. In such environments, everything
>> about existing wifi standards and the push for higher-bandwidth channels
>> makes a lot of sense (though there are still some latency problems).
>>
>> But the world outside the lab is far more complex.
>>
>> You need to simulate a dispersed, congested RF environment. This includes
>> hidden transmitters (stations A-B-C where B can hear A and C, but A and C
>> cannot hear each other), dealing with weak signals (already covered),
>> interactions of independent networks on the same channels (a-b and c-d that
>> cannot talk to each other), legacy equipment on the network (as slow as
>> 802.11g at least, if not 802.11b to simulate old IoT devices), and a mix of
>> bulk transfers (downloads/uploads), buffered streaming (constant traffic,
>> but buffered so not super-sensitive to latency), unbuffered streaming (low
>> bandwidth, but sensitive to latency), and short, latency-sensitive traffic
>> (things that block other traffic until they are answered, like DNS, http
>> cache checks, http main pages that pull in lots of other URLs, etc.).
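>>
>> As a rough sketch of how that traffic mix could be driven from a test
>> controller (purely illustrative: the hostnames, rates, and durations are
>> placeholders I made up, and it assumes iperf2 is installed on every station,
>> reachable over ssh, with iperf servers already listening on the wired side):
>>
>> import subprocess
>>
>> # Hypothetical traffic profiles matching the mix described above:
>> #   bulk       - long TCP transfer (download/upload)
>> #   buffered   - steady streaming, buffered, not latency sensitive
>> #   unbuffered - low rate, latency sensitive
>> #   short      - short blocking request, DNS/HTTP-cache-check-like
>> # "SERVER" stands in for the wired test server (running iperf -s / -s -u).
>> PROFILES = {
>>     "bulk":       ["iperf", "-c", "SERVER", "-t", "300"],
>>     "buffered":   ["iperf", "-c", "SERVER", "-u", "-b", "5M", "-t", "300"],
>>     "unbuffered": ["iperf", "-c", "SERVER", "-u", "-b", "200K", "-t", "300"],
>>     "short":      ["iperf", "-c", "SERVER", "-n", "16K"],
>> }
>>
>> def launch(station_profiles):
>>     """station_profiles: list of (ssh_target, profile_name) pairs,
>>     e.g. [("sta01", "bulk"), ("sta02", "short")]."""
>>     return [subprocess.Popen(["ssh", host] + PROFILES[name])
>>             for host, name in station_profiles]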
>>
>> Test large numbers of people in a given area (start with an all-wireless
>> office, then move on to classroom density). Test not just one room, but
>> multiple rooms that partially hear each other (the amount of attenuation or
>> reflection between the rooms needs to vary). The ultimate density test
>> would be a stadium-type setting where you have rows of chairs but no
>> tables, and everyone is trying to livestream (or view a livestream) at
>> once.
>>
>> Test not just ultra-wide channels with a single AP in the rooms, but also
>> narrower channels with multiple APs distributed around the rooms. Test APs
>> positioned high and set to high power to cover large areas against APs
>> positioned low (signals get absorbed by people, so channels can be reused
>> at shorter distances) and set to low power (a microcell approach). Test APs
>> overhead with directional antennas so they cover a small footprint.
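>>
>> A small sketch of how those placement variations could be enumerated into a
>> test matrix (the channel widths, AP counts, and placements below are just
>> illustrative values, not a recommended set):
>>
>> from itertools import product
>>
>> # Hypothetical sweep over the AP-placement variations described above.
>> CHANNEL_WIDTH_MHZ = [20, 40, 80]
>> AP_COUNT = [1, 2, 4]
>> PLACEMENT = ["high, high power", "low, low power (microcell)",
>>              "overhead, directional antenna"]
>>
>> def test_matrix():
>>     """Yield one test configuration per combination of the factors above."""
>>     for width, count, place in product(CHANNEL_WIDTH_MHZ, AP_COUNT, PLACEMENT):
>>         yield {"width_mhz": width, "ap_count": count, "placement": place}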
>>
>> Test with different types of walls around and between the rooms: the metal
>> studs and sheetrock of a modern office have very little effect on signals,
>> the stone/brick walls of old buildings (and concrete walls in some areas of
>> new buildings) absorb the signal, and the metal grid in movable air walls
>> blocks and reflects signals.
>>
>> Remember that these are operating in 'unlicensed' spectrum, so you can have
>> other devices operating there as well, causing periodic interference (which
>> could show up as short segments of corruption or just an increased noise
>> floor). Current wifi standards interpret any failed transmission as a weak
>> signal, so they drop down to a slower modulation or increase power in the
>> hope of getting the signal through. If the problem is actually interference
>> from other devices (especially other APs that they can't decipher), the
>> result is that all stations end up yelling slowly to try to get through,
>> producing very high levels of noise and no messages getting through.
>> Somehow, the systems should detect that the noise floor is high and/or that
>> there is other stuff happening on the network that they can hear but not
>> necessarily decipher, switch away from the 'weak signal' mode of operation
>> (which is appropriate in sparse environments), and instead work to talk
>> faster and at lower power, trying to reduce the overall interference while
>> still getting their signal through. (It does no good for one station to
>> transmit at 3W while the station it's talking to transmits at 50mW.) As far
>> as I know, there is currently no way for stations to signal what power they
>> are using (and the effective power would be modified by the antenna system
>> on both the transmitting and receiving sides), so it may be that an
>> exchange like 'I'm transmitting at 50% of my max and I hear you at 30% with
>> noise at 10%' <-> 'I'm transmitting at 100% of my max and I hear you at 80%
>> with noise at 30%' could cause the first station to cut down on its power
>> until the two are hearing each other at similar levels (pure speculation
>> here, a suggestion for research ideas).
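>>
>> To make that last bit of speculation concrete, here's a toy sketch of the
>> kind of adjustment such an exchange might drive. Everything here is
>> hypothetical (the report fields and the update rule are made up to
>> illustrate the idea, not anything in 802.11 today), and it ignores the
>> reported noise floor for simplicity:
>>
>> def adjust_tx_power(my_tx_frac, i_hear_peer, peer_hears_me):
>>     """All values are fractions of full scale.
>>     my_tx_frac:    my current tx power as a fraction of my max
>>     i_hear_peer:   how loudly I receive the peer
>>     peer_hears_me: how loudly the peer reports receiving me
>>     Scale my power so the peer hears me at roughly the level I hear it.
>>     """
>>     if peer_hears_me <= 0:
>>         return my_tx_frac
>>     return max(0.01, min(1.0, my_tx_frac * i_hear_peer / peer_hears_me))
>>
>> # With the numbers above: a station at 50% power that hears its peer at 30%
>> # while being heard at 80% would cut to roughly 19% of its max power:
>> # adjust_tx_power(0.5, 0.30, 0.80) -> 0.1875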
>>
>>> How many different tests would it take to give reasonable coverage?
>>
>> That's hard for me to say, and not every device needs to go through every
>> test. But when working on a new standard, it needs to go through a lot of
>> these tests; the most important ones, IMHO, are how it works with a high
>> density of users accessing multiple routers distributed for overlapping
>> coverage, with a mix of network traffic.
>>
>> David Lang
>> _______________________________________________
>> Make-wifi-fast mailing list
>> Make-wifi-fast at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>
>