[Make-wifi-fast] mesh deployment with ath9k driver changes

Jannie Hanekom jannie at hanekom.net
Sat Jun 30 16:04:57 EDT 2018


That's awesome :-)  Thanks for the *incredibly* carefully-crafted response - particularly for the references.  Learning more from that than from years of reading mundane forum posts.  Will take a while to digest.

-----Original Message-----
From: bkil.hu at gmail.com [mailto:bkil.hu at gmail.com] On behalf of bkil
Sent: Saturday, 30 June 2018 21:26
To: Jannie Hanekom <jannie at hanekom.co.za>; make-wifi-fast at lists.bufferbloat.net
Subject: Re: [Make-wifi-fast] mesh deployment with ath9k driver changes

Dear Jannie,

Thanks for the words of caution. I've just now noticed that you've only sent your reply to me, but let me forward it to the list as well.

I just made that dtim_period example up. I don't have a hard recommendation other than this: the more you increase it, the more power you save (up to the point where your devices lose sync), but also the more broadcast latency you add, so producing a DTIM at least once a second is probably a good idea. That isn't far from the recommendation you suggest.

Indeed, defective devices sensitive to beacon_int do exist, though I wouldn't worry too much about such a small increment. For example, on a passively scanning device I've noticed the time to discovery increase from 1-2 seconds to up to 10, but all was fine otherwise.
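
To make those numbers concrete, here is a minimal /etc/config/wireless sketch of the kind of values I mean (purely illustrative; also double-check which section beacon_int belongs to on your release):

config wifi-device 'radio0'
        option beacon_int '200'    # 200 TU, i.e. ~205 ms between beacons

config wifi-iface 'default_radio0'
        option dtim_period '4'     # DTIM on every 4th beacon, i.e. ~820 ms apart

Since 1 TU is 1024 us, 200 * 4 * 1.024 ms comes to roughly 819 ms, which still keeps a DTIM going out at least once a second.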

mcast_rate is consumed by wpa_supplicant and only applies to IBSS and 11s configurations:

https://github.com/lede-project/source/blob/96f4792fdb036ecf5c8417fce6503412b0b27e5f/package/kernel/mac80211/files/lib/netifd/wireless/mac80211.sh#L604
https://github.com/lede-project/source/blob/96f4792fdb036ecf5c8417fce6503412b0b27e5f/package/kernel/mac80211/files/lib/netifd/wireless/mac80211.sh#L617

I think multicast-to-unicast is already enabled by default in OpenWrt/LEDE, but do check.
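
For the mesh side, a rough sketch of what I had in mind (the interface and mesh_id names are made up, mcast_rate is in kb/s, and I haven't re-checked the exact name of the multicast-to-unicast switch, so verify it against your build):

config wifi-iface 'mesh0'
        option device 'radio0'
        option mode 'mesh'
        option mesh_id 'backhaul'
        option mcast_rate '24000'          # multicast/broadcast on the mesh link at 24 Mb/s
        # option multicast_to_unicast '1'  # likely already the default, uncomment only if your build disagrees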

Do note that most contention and wasted airtime is probably caused by the clients and not the AP. The air time savings, if any, should come from the following factors (see the sketch after the list):

* beacon, probe response, ACK, power saving and other management frames,
* less general overhead (like preamble and spacing),
* reduced bandwidth causing less interference to neighboring channels and vice versa, also allowing for 4 channels,
* working around poorly behaved rate schedulers that interpret loss as noise instead of interference (faster retries, failing instead of falling back to 1M, smaller probability of further collisions),
* reduced range (against sticky clients), thereby facilitating a higher average rate and lower average air time.
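
To tie those factors to concrete knobs, a hedged wifi-device sketch (values are only illustrative, not tuned for this particular deployment):

config wifi-device 'radio0'
        option htmode 'HT20'       # narrower channel: less adjacent-channel interference, 4 usable channels
        option legacy_rates '0'    # drop the 11b rates and their long preambles
        option txpower '10'        # reduced range to shed sticky far-away clients (dBm, adjust to taste)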

The symbols corresponding to the preambles are transmitted at a fixed low rate, not at the basic rate. I.e., even if you set the mandatory/basic rates of your AP to contain only 54Mb, other stations could still decode the length of the transmission and refrain from medium access for that duration. Thus increasing the basic rate from 6 to 12Mb would not have any negative effect in this respect. Decoding the beacons is in any case only useful for associated (or associating) stations, not for outsiders (except for some cool new IEs).

https://www.revolutionwifi.net/revolutionwifi/2011/03/understanding-wi-fi-carrier-sense.html
https://mrncciew.com/2014/10/14/cwap-802-11-phy-ppdu/
https://flylib.com/books/en/2.519.1/erp_physical_layer_convergence_plcp_.html
http://divdyn.com/so-called-ghost-frames-not-exist/
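
If you do raise the basic rate, it would look something like the following (I believe basic_rate and supported_rates take kb/s values as UCI lists on the wifi-iface; the exact syntax and section may differ between releases, so please double-check):

config wifi-iface 'default_radio0'
        list basic_rate '12000'
        list basic_rate '24000'
        list supported_rates '12000'
        list supported_rates '18000'
        list supported_rates '24000'
        list supported_rates '36000'
        list supported_rates '48000'
        list supported_rates '54000'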

Anyway, even if a station can't decode a frame, it can still fall back to the CCA energy detector, which is about 15-20dB less sensitive. If the exposed node problem also shows up in the deployment in question, this could actually be an advantage.

I think the single-frequency backhaul should be the link that is mostly hidden from many of the clients, so many decode errors and collisions would happen against this link too, not between stations that are very close to each other around a cabin. A wilder guess is that stations between the two cabins could be interfering with each other a bit more.

For maximal wireless power saving, it is best to always use the highest single-stream modulation possible, preferably on a wide channel, and definitely not the lowest rates.

This is because the chipset, radio DSP and other supporting hardware all consume a fairly constant power while active, so you should operate them for the shortest amount of time possible for both transmission and reception.

http://www.ruf.rice.edu/~mobile/elec518/lectures/3-wireless.pdf
http://static.usenix.org/event/hotpower/tech/full_papers/Halperin.pdf
http://eurosys2011.cs.uni-salzburg.at/pdf/eurosys2011-pathak.pdf

The record for backscatter (which sidesteps many power hungry components) is 14.5 uW @ 1Mb/s and 59.2 uW @ 11Mb/s, and even there the cost/benefit ratio favors the faster speed:
https://www.usenix.org/system/files/conference/nsdi16/nsdi16-paper-kellogg.pdf

There is one exception: datasheets usually list a slightly lower power requirement for 802.11b rates and a few of the simpler modulation schemes. However, the overall system energy use will probably still be greater at these slow rates for the same number of bits transferred.

http://cdn.viaembedded.com/eol_products/docs/vnt6656/datasheet/VIA+VNT6656_datasheet_v130306.pdf

https://www.ti.com/pdfs/bcg/80211_wp_lowpower.pdf

There can exist pathological cases involving very short packets that do not fill the constructed symbols efficiently, but I guess these should not skew the statistics a lot.

Also, many wifi chipsets have a calibration table describing the maximal TX power per rate at which the output signal is clean enough.
Higher rates usually allow for a lower maximal TX power, so as you increase rate, you may sometimes need to reduce TX power as well, thus reducing power consumption a bit.

http://www.seeedstudio.com/document/word/WT8266-S1%20DataSheet%20V1.0.pdf

I've also noticed iw station dump indicating that many devices idle at very low rates, but isn't this just because of the power saving packets and management frames? I don't think that they reduce rate on purpose to save power. It is easy to check this with Wireshark, though.

Cheers

On Tue, Jun 12, 2018 at 5:22 PM, Jannie Hanekom <jannie at hanekom.co.za> wrote:
> Disclaimer: what I know about low-level WiFi is perhaps somewhat 
> dangerous, and I'm certainly not a developer.  I have however 
> implemented a few corporate wireless solutions by different vendors, 
> and have mucked about with a number of personal OpenWRT projects over the past decade.
>
>> option dtim_period 5 # cheap power saving
> I'm told Apple suggests 3.  I'm not sure why.  As a corporate wireless 
> guy, I trust Andrew von Nagy on that advice:
> https://twitter.com/revolutionwifi/status/725489216768106496
>
>> option beacon_int 200 # power/bandwidth saving
> Additional suggestion:  Beacon Interval is a per-SSID setting.  
> Consider leaving it at defaults (100) for "client-facing" SSIDs and 
> set it to higher values for your Mesh SSIDs.  Just in case of 
> compatibility issues... (I'm not aware of any, but I've never really 
> tried.)
>
>> legacy_rates=0 seems to be an alias for enabling all g-rates and 
>> disabling b-rates
> Just my 2c to the $1,000,000 already contributed: Absolutely go for it.
> I've never had any issues disabling legacy 11b rates in the corporate 
> and hospitality world, or on my personal projects.  It's one of the 
> first things I disable on any project I undertake.
>
> Also look at mcast_rate and/or multicast_to_unicast.  Multicasts are - 
> by default - supposed to be sent at the lowest basic rate IIRC, just 
> like beacons.  There shouldn't be much multicast on most networks in 
> terms of volume, but things like mDNS do exist and are quite 
> prevalent.  Depending on what you find when you sniff, there may be merit to tinkering with those.
>
> I have no direct experience of this, but I'm told that one should be
> careful not to set the slowest basic_rate too high (i.e. higher than
> 6Mbps). The reason is that a client (or another AP) seeing the signal
> at -80dBm may still be able to decode a 6Mbps beacon and apply the
> normal WiFi co-existence niceties, but may not be able to decode a
> 12Mbps beacon, identifying the signal as non-WiFi and backing off
> more aggressively.
>
> The other reason is that many devices select the lowest basic rate 
> under sleep conditions in order to save battery power.  I'm not sure 
> what the impact would be if one sets a much higher basic rate.
>
> From reading of OpenWRT forum posts over the years, most people who 
> set the basic_rate higher than 6Mbps do so in an attempt to get rid of "sticky"
> clients.  I can't remember the rationale exactly, but setting the 
> basic_rate higher is unlikely to address that problem, and one should 
> rather rely on other mechanisms.
>
> Also, beyond 6Mbps, the airtime gains from reducing the time it takes
> to transmit beacons diminish greatly. Nice calculator:
> http://www.revolutionwifi.net/revolutionwifi/p/ssid-overhead-calculator.html
>
> Jannie
>


