[Bloat] [Make-wifi-fast] [Starlink] bloat on wifi8 and 802.11 wg
David Lang
david at lang.hm
Mon Sep 2 01:05:03 EDT 2024
On Sun, 1 Sep 2024, Hal Murray via Make-wifi-fast wrote:
> David Lang said:
>> It really doesn't help that everyone in the industry is pushing for
>> higher bandwidth for a single host. That's a nice benchmark number, but
>> not really relevant in the real world.
>
>> Even mu-mimo is of limited use as most routers only handle a handful of
>> clients.
>
>> But the biggest problem is just the push to use wider channels and gain
>> efficiency in long-running bulk transfers by bundling lots of IP packets
>> into a single transmission. This works well when you don't have
>> congestion and have a small number of clients. But when you have lots of
>> clients, spanning many generations of wifi technology, you need to go to
>> narrower channels, with more separate routers, to maximize the fairness
>> of available airtime.
>
> What does that say about the minimal collection of gear required in a test
> lab?
>
> If you had a lab with plenty of gear, what tests would you run?
I'll start off by saying that my experience is from practical in-the-field use,
deploying wifi to support thousands of users in a conference setting. It's
possible that some people are doing the tests I describe below in their labs,
but from the way routers and wifi standards are advertised and the guides to
deploy them are written, it doesn't seem like they are.
My belief is that most of the tests are done in relatively clean RF environments
where only the devices on the test network exist, and they can always hear
everyone on the network. In such environments, everything about existing wifi
standards and the push for higher bandwidth channels makes a lot of sense (there
are still some latency problems).
But the world outside the lab is far more complex.
You need to simulate a dispersed, congested RF environment. This includes
hidden transmitters (stations A-B-C where B can hear A and C, but A and C
cannot hear each other), dealing with weak signals (already covered),
interactions of independent networks on the same channels (a-b and c-d that
cannot talk to each other), legacy equipment on the network (as slow as
802.11g at least, if not 802.11b to simulate old IoT devices), and a mix of
bulk transfers (downloads/uploads), buffered streaming (constant traffic,
but buffered, so not super-sensitive to latency), unbuffered streaming (low
bandwidth, but sensitive to latency), and short, latency-sensitive traffic
(things that block other traffic until they are answered, like DNS, http
cache checks, and http main pages that then pull in lots of other URLs).
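To make that concrete, here is a rough Python sketch of what generating
that traffic mix could look like. This isn't from any real test harness;
the host name and port numbers are placeholders for whatever sink/echo
server the lab runs.

# Illustrative only: a minimal traffic-mix generator. "testhost.example"
# and the port numbers below are hypothetical placeholders.
import socket
import threading
import time

TESTHOST = "testhost.example"

def bulk_transfer():
    """Long-running bulk upload: saturates the link and fills buffers."""
    s = socket.create_connection((TESTHOST, 5001))
    chunk = b"\0" * 65536
    while True:
        s.sendall(chunk)

def buffered_stream(rate_bps=5_000_000):
    """Constant-rate traffic, buffered, so tolerant of latency."""
    s = socket.create_connection((TESTHOST, 5002))
    chunk = b"\0" * 8192
    interval = len(chunk) * 8 / rate_bps
    while True:
        s.sendall(chunk)
        time.sleep(interval)

def unbuffered_stream():
    """Low bandwidth but latency-sensitive (VoIP-like), over UDP."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\0" * 160  # roughly 20 ms of G.711-style audio
    while True:
        s.sendto(payload, (TESTHOST, 5003))
        time.sleep(0.02)

def short_blocking_requests():
    """DNS/HTTP-style request-response: time-to-answer is what matters."""
    while True:
        t0 = time.time()
        s = socket.create_connection((TESTHOST, 5004))
        s.sendall(b"GET / HTTP/1.0\r\n\r\n")
        s.recv(4096)
        s.close()
        print("request took %.1f ms" % ((time.time() - t0) * 1000))
        time.sleep(1)

for fn in (bulk_transfer, buffered_stream, unbuffered_stream,
           short_blocking_requests):
    threading.Thread(target=fn, daemon=True).start()
time.sleep(60)  # run the whole mix for one test interval

The point is that all four classes run at once: the bulk flows fill the
buffers, and you watch what happens to the latency-sensitive ones.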
Test large numbers of people in a given area (start with an all-wireless
office, then move on to classroom density). Test not just one room, but
multiple rooms that partially hear each other (the amount of attenuation or
reflection between the rooms needs to vary). The ultimate density test
would be a stadium-type setting where you have rows of chairs, but no
tables, and everyone is trying to livestream (or view a livestream) at once.
Test not just ultra-wide channels with a single AP in the rooms, but
narrower channels with multiple APs distributed around the rooms. Test APs
positioned high and set to high power, for large coverage areas, against
APs positioned low (signals get absorbed by people, so channels can be
reused at shorter distances) and set to low power (the microcell approach).
Test APs overhead with directional antennas so they cover a small footprint.
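To put some rough numbers behind the microcell argument, here's a
back-of-the-envelope calculation using the standard log-distance path-loss
model. The exponent and the per-body absorption figures are assumed values
for illustration, not measurements.

# Log-distance path loss: PL(d) = PL(1 m) + 10 * n * log10(d).
# The exponent n and the per-body absorption are assumed values.
import math

def path_loss_db(d_m, n=3.0, pl_1m=40.0):
    return pl_1m + 10 * n * math.log10(d_m)

def rx_dbm(tx_dbm, d_m, bodies=0, body_loss_db=3.0):
    """Received power after path loss plus absorption by people."""
    return tx_dbm - path_loss_db(d_m) - bodies * body_loss_db

# AP mounted high at 200 mW (23 dBm): still about -61 dBm at 30 m, so it
# competes for airtime with the next room over.
print(rx_dbm(tx_dbm=23, d_m=30))
# AP mounted low at 10 mW (10 dBm), signal passing through a crowd of
# five people: about -89 dBm at 30 m, weak enough that the channel can
# be reused a short distance away.
print(rx_dbm(tx_dbm=10, d_m=30, bodies=5))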
Test with different types of walls around/between the rooms: the metal
studs and sheetrock of a modern office have very little effect on signals;
the stone/brick walls of old buildings (and concrete walls in some areas of
new buildings) absorb the signal; and the metal grid in movable air walls
blocks and reflects signals.
Remember that these are operating in 'unlicensed' spectrum, so you can have
other devices operating there as well, causing periodic interference (which
could show up as short bursts of corruption or just an increased noise
floor). Current wifi standards interpret any failed transmission as a sign
of a weak signal, so they drop down to a slower modulation or increase
power in the hope of getting the signal through. If the problem is actually
interference from other devices (especially other APs that it can't
decipher), the result is that all stations end up yelling slowly to try and
get through, producing very high levels of noise and no messages getting
through. Somehow, the systems should detect that the noise floor is high
and/or that there is other traffic on the network that they can hear but
not necessarily decipher, switch away from the 'weak signal' mode of
operation (which is appropriate in sparse environments), and instead work
to talk faster and at lower power, reducing the overall interference while
still getting their signal through.
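As a toy illustration of the kind of decision logic I mean (all thresholds
and inputs here are invented; real rate control lives in the driver and
firmware and is far more involved):

# Toy decision logic; thresholds invented for illustration.
def react_to_failed_tx(rssi_dbm, noise_floor_dbm):
    snr_db = rssi_dbm - noise_floor_dbm
    if noise_floor_dbm > -85:
        # Busy/noisy channel: the frame probably died to interference.
        # Yelling slower and louder would just add to the noise.
        return "keep modulation fast, reduce tx power, defer and retry"
    if snr_db < 15:
        # Quiet channel but weak signal: the classic sparse-network case.
        return "drop to a more robust modulation, raise tx power"
    return "transient loss: retry at the current rate"

print(react_to_failed_tx(rssi_dbm=-70, noise_floor_dbm=-80))   # interference
print(react_to_failed_tx(rssi_dbm=-88, noise_floor_dbm=-100))  # weak signal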
(It does no good for one station to be transmitting at 3W while the station
it's talking to is transmitting at 50mW.) As far as I know, there is
currently no way for stations to signal what power they are using (and the
effective power would be modified by the antenna system, both transmitting
and receiving), so perhaps an exchange like 'I'm transmitting at 50% of my
max and I hear you at 30% with noise at 10%' <-> 'I'm transmitting at 100%
of my max and I hear you at 80% with noise at 30%' could cause the first
station to cut down on its power until the two are hearing each other at
similar levels (pure speculation here, a suggestion for research ideas).
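To sketch what that speculative exchange might look like (the report fields
and the adjustment rule are invented; nothing like this exists in current
802.11 signaling):

# Invented message fields and adjustment rule, purely speculative.
def adjust_tx_power(my_tx_pct, peer_report):
    """peer_report: how loud the peer hears me and its noise level,
    both as a percent of its receive scale."""
    margin = peer_report["hears_you_at_pct"] - peer_report["noise_at_pct"]
    if margin > 40 and my_tx_pct > 10:
        return my_tx_pct - 10  # plenty of margin: quiet down
    if margin < 15 and my_tx_pct < 100:
        return my_tx_pct + 10  # barely above the peer's noise: speak up
    return my_tx_pct           # roughly balanced: hold

# Station A is at 50% and B hears it 70 points above B's noise floor,
# so A can back off, reducing interference for everyone else.
print(adjust_tx_power(50, {"hears_you_at_pct": 80, "noise_at_pct": 10}))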
> How many different tests would it take to give reasonable coverage?
That's hard for me to say, and not every device needs to go through every
test. But when working on a new standard, it needs to go through a lot of
these tests; the most important ones IMHO are how devices work with a high
density of users accessing multiple routers, distributed so there is
overlapping coverage, with a mix of network traffic.
David Lang