* [Make-wifi-fast] mesh deployment with ath9k driver changes @ 2018-04-24 8:33 Pete Heist 2018-04-24 11:54 ` Toke Høiland-Jørgensen ` (2 more replies) 0 siblings, 3 replies; 56+ messages in thread From: Pete Heist @ 2018-04-24 8:33 UTC (permalink / raw) To: make-wifi-fast [-- Attachment #1: Type: text/plain, Size: 1637 bytes --] I have a 7 node (3 of which are repeaters) mesh deployed at a campground using Open Mesh’s 6.4 beta (with LEDE and the ath9k changes). I set up an internal SmokePing instance with each AP as a target, and the first guests of the season are arriving (school kids, the best testers). The setup for our three repeaters is:
Cabin 12: 110 meters and RSSI -69 to gateway, NLOS through a few leaves
Cabin 20: 65 meters and RSSI -66 to gateway, LOS but maybe some Fresnel zone intrusion from leaves or branches
Cabin 28: 50 meters and RSSI -51 to gateway, clear LOS
Attached are some PDFs of the current SmokePing results. The school arrived Monday morning and are mostly clustered around cabins 12 and 20, with a few around cabin 28, can you tell? :) Mean ping time for cabin 12 is around 200 ms during “active use”, with outliers above 1 second, which is higher than expected. I don’t have data collected on how many active users that is and what they’re doing, but there could be 40-50 students around the cabin 12 AP, with however many active “as is typical for kids”. I wonder how much of this is due to the NLOS situation for Cabin 12. But with no load, ping times don’t fluctuate much above a few ms. This weekend, I should have the first cluster of users around Cabin 28 (with clear LOS) to compare it to. Overall it would be nice to know, in a typical real-world setup, how much WiFi latency is due to bufferbloat, and how much to the physical layer? Lastly, is there any interest in access to SmokePing results, or other diagnostics? Things are bound to get interesting as the season progresses… [-- Attachment #2: cabin_12.pdf --] [-- Type: application/pdf, Size: 241586 bytes --] [-- Attachment #3: cabin_20.pdf --] [-- Type: application/pdf, Size: 240145 bytes --] [-- Attachment #4: cabin_28.pdf --] [-- Type: application/pdf, Size: 256790 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 8:33 [Make-wifi-fast] mesh deployment with ath9k driver changes Pete Heist @ 2018-04-24 11:54 ` Toke Høiland-Jørgensen 2018-04-24 13:37 ` Pete Heist 2018-04-27 11:42 ` Valent Turkovic 2018-04-27 11:47 ` Valent Turkovic 2 siblings, 1 reply; 56+ messages in thread From: Toke Høiland-Jørgensen @ 2018-04-24 11:54 UTC (permalink / raw) To: Pete Heist, make-wifi-fast Pete Heist <pete@eventide.io> writes: > I have a 7 node (3 of which are repeaters) mesh deployed at a campground using Open Mesh’s 6.4 beta (with LEDE and the ath9k changes). I set up an internal SmokePing instance with each AP as a target, and the first guests of the season are arriving (school kids, the best testers). The setup for our three repeaters is: > > Cabin 12: 110 meters and RSSI -69 to gateway, NLOS through a few leaves > Cabin 20: 65 meters and RSSI -66 to gateway, LOS but maybe some Fresnel zone intrusion from leaves or branches > Cabin 28: 50 meters and RSSI -51 to gateway, clear LOS > > Attached are some PDFs of the current SmokePing results. The school > arrived Monday morning and are mostly clustered around cabins 12 and > 20, with a few around cabin 28, can you tell? :) Mean ping time for > cabin 12 is around 200 ms during “active use”, with outliers above 1 > second, which is higher than expected. I don’t have data collected on > how many active users that is and what they’re doing, but there could > be 40-50 students around the cabin 12 AP, with however many active “as > is typical for kids”. Hmm, yeah, 200ms seems quite high. Are there excessive collisions and retransmissions? Is the uplink on the same frequency as the clients? > Overall it would be nice to know, in a typical real-world setup, how > much WiFi latency is due to bufferbloat, and how much to the > physical layer? On ath9k bufferbloat shouldn't be more than 10-20ms or so. > Lastly, is there any interest in access to SmokePing results, or other > diagnostics? Things are bound to get interesting as the season > progresses… Sure, feel free to keep us updated! :) -Toke ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 11:54 ` Toke Høiland-Jørgensen @ 2018-04-24 13:37 ` Pete Heist 2018-04-24 13:51 ` Toke Høiland-Jørgensen 0 siblings, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-04-24 13:37 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: make-wifi-fast [-- Attachment #1: Type: text/plain, Size: 2129 bytes --] > On Apr 24, 2018, at 1:54 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: > > Pete Heist <pete@eventide.io <mailto:pete@eventide.io>> writes: > >> Mean ping time for >> cabin 12 is around 200 ms during “active use”, with outliers above 1 >> second, which is higher than expected. I don’t have data collected on >> how many active users that is and what they’re doing, but there could >> be 40-50 students around the cabin 12 AP, with however many active "as >> is typical for kids”. > > Hmm, yeah, 200ms seems quite high. Are there excessive collisions and > retransmissions? Hrm, how would I know that actually? /proc/net/wireless has all zeroes in it. I don’t see it anywhere in output from ‘iw’... > Is the uplink on the same frequency as the clients? Most definitely, the OM2P-HS is a single channel (2.4 GHz) device, with dual antennas. I was hoping the new driver could make the best of this situation. :) Now, my ping test goes from the gateway straight to the repeater, so there’s only one WiFi hop in my ping results. I don’t know how pings actually look for clients while the AP is under load. I suppose I’ll either have to test that manually when I’m on site, or set up a fixed wireless SmokePing instance to simulate a client. I wish I could cable everything, but it isn’t physically practical. The next possibility is dual channel APs, or separate backhaul links, all costing something... Cabins 12 and 20 hang off the same gateway (which are all on the same channel, obviously). That will mean more collisions between them. Cabin 28 is the only repeater on its gateway, so is likely to be better. It’s an interesting setup from the standpoint that it’s not very large, but tests a few different single channel repeater scenarios. >> Overall it would be nice to know, in a typical real-world setup, how >> much is WiFi latency is due to bufferbloat, and how much to the >> physical layer? > > On ath9k bufferbloat should be more than 10-20ms or so. My pings can definitely be in the ether for longer than that, for some reason… :) [-- Attachment #2: Type: text/html, Size: 8017 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 13:37 ` Pete Heist @ 2018-04-24 13:51 ` Toke Høiland-Jørgensen 2018-04-24 14:09 ` Pete Heist 2018-04-26 0:35 ` David Lang 0 siblings, 2 replies; 56+ messages in thread From: Toke Høiland-Jørgensen @ 2018-04-24 13:51 UTC (permalink / raw) To: Pete Heist; +Cc: make-wifi-fast Pete Heist <pete@eventide.io> writes: >> On Apr 24, 2018, at 1:54 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: >> >> Pete Heist <pete@eventide.io <mailto:pete@eventide.io>> writes: >> >>> Mean ping time for >>> cabin 12 is around 200 ms during “active use”, with outliers above 1 >>> second, which is higher than expected. I don’t have data collected on >>> how many active users that is and what they’re doing, but there could >>> be 40-50 students around the cabin 12 AP, with however many active “as >>> is typical for kids”. >> >> Hmm, yeah, 200ms seems quite high. Are there excessive collisions and >> retransmissions? > > Hrm, how would I know that actually? /proc/net/wireless has all zeroes > in it. I don’t see it anywhere in output from ‘iw’... Assuming you have debugfs enabled you should be able to get aggregate statistics from /sys/kernel/debug/ieee80211/phy0/ath9k/xmit - at least that contains retries, but not backoff data, unfortunately. There's also the per-station rate data in /sys/kernel/debug/ieee80211/phy0/netdev\:*/stations/*/rc_stats >> Is the uplink on the same frequency as the clients? > > Most definitely, the OM2P-HS is a single channel (2.4 GHz) device, > with dual antennas. I was hoping the new driver could make the best of > this situation. :) Well, in that situation 'the best' may not be terribly good ;) > Now, my ping test goes from the gateway straight to the repeater, so > there’s only one WiFi hop in my ping results. I don’t know how pings > actually look for clients while the AP is under load. I suppose I’ll > either have to test that manually when I’m on site, or set up a fixed > wireless SmokePing instance to simulate a client. > > I wish I could cable everything, but it isn’t physically practical. > The next possibility is dual channel APs, or separate backhaul links, > all costing something... Yeah, a separate backhaul on a different channel would cut your contention in half, basically. Right now, each transmission has to occupy the channel twice... -Toke ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 13:51 ` Toke Høiland-Jørgensen @ 2018-04-24 14:09 ` Pete Heist 2018-04-24 14:34 ` Toke Høiland-Jørgensen 2018-04-26 0:35 ` David Lang 1 sibling, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-04-24 14:09 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: make-wifi-fast > On Apr 24, 2018, at 3:51 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: > > Assuming you have debugfs enabled you should be able to get aggregate > statistics from /sys/kernel/debug/ieee80211/phy0/ath9k/xmit - at least > that contains retries, but not backoff data, unfortunately. If it’s the ratio of "AMPDUs Completed” to "AMPDUs Retried” or “AMPDUs XRetried” we’re looking at, there’s a stark difference between Cabin 12 and Cabin 28. That said, they have quite different traffic patterns, so I’ll have to evaluate this in a situation where traffic patterns are more similar... Cabin 12 (worst, two repeaters on gateway, NLOS): # cat /sys/kernel/debug/ieee80211/phy0/ath9k/xmit BE BK VI VO MPDUs Queued: 23330 735 112 304318 MPDUs Completed: 2096742 1468 144 6622260 MPDUs XRetried: 11932 509 74 274653 Aggregates: 1548313 31541 499 0 AMPDUs Queued HW: 0 0 0 0 AMPDUs Completed: 9674195 204610 22667 0 AMPDUs Retried: 1298897 22613 1025 0 AMPDUs XRetried: 45898 1189 120 0 TXERR Filtered: 7893 96 7 4 FIFO Underrun: 3 0 0 60 TXOP Exceeded: 0 0 0 0 TXTIMER Expiry: 0 0 0 0 DESC CFG Error: 0 0 0 0 DATA Underrun: 1 0 0 0 DELIM Underrun: 1 0 0 0 TX-Pkts-All: 11828767 207776 23005 6896913 TX-Bytes-All: 4193674267 197040003 23478961787460234 HW-put-tx-buf: 5318127 133067 23506 6395453 HW-tx-start: 0 0 0 0 HW-tx-proc-desc: 5690166 132905 23507 6891740 TX-Failed: 0 0 0 0 Cabin 28 (best, one repeater on gateway, LOS): # cat /sys/kernel/debug/ieee80211/phy0/ath9k/xmit BE BK VI VO MPDUs Queued: 23164 335 2 6929001 MPDUs Completed: 3218272 782 50 9311052 MPDUs XRetried: 6036 250 2 1821270 Aggregates: 6427093 20892 12 0 AMPDUs Queued HW: 0 0 0 0 AMPDUs Completed: 100430860 189959 375988 0 AMPDUs Retried: 1112896 8011 27 0 AMPDUs XRetried: 33862 449 2 0 TXERR Filtered: 2428 122 0 37 FIFO Underrun: 0 0 0 2 TXOP Exceeded: 0 0 0 0 TXTIMER Expiry: 0 0 0 0 DESC CFG Error: 0 0 0 0 DATA Underrun: 0 0 0 0 DELIM Underrun: 0 0 0 0 TX-Pkts-All: 103689030 191440 376042 11132322 TX-Bytes-All: 2783223251 210905451 601627081919930466 HW-put-tx-buf: 16245494 109264 376059 10911913 HW-tx-start: 0 0 0 0 HW-tx-proc-desc: 16282116 109333 376056 11131555 TX-Failed: 0 0 0 0 >> I wish I could cable everything, but it isn’t physically practical. >> The next possibility is dual channel APs, or separate backhaul links, >> all costing something... > > Yeah, a separate backhaul on a different channel would cut you > contention in half, basically. Right now, each transmission has to > occupy the channel twice... Could that cause mean RTT to go up 10x? :) In combination with having two repeaters on one gateway… ^ permalink raw reply [flat|nested] 56+ messages in thread
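For anyone who wants to pull that ratio without doing the division by hand, a sketch along these lines should work on the AP itself, assuming BusyBox awk and the xmit layout shown in the dumps above (the four trailing columns are BE BK VI VO, so the BE value is the fourth field from the end):

  # Sketch: cumulative AMPDU retry ratio for the BE (best effort) queue since boot.
  XMIT=/sys/kernel/debug/ieee80211/phy0/ath9k/xmit
  comp=$(awk '/AMPDUs Completed/ { print $(NF - 3) }' "$XMIT")   # BE column
  retr=$(awk '/AMPDUs Retried/ { print $(NF - 3) }' "$XMIT")
  awk -v c="$comp" -v r="$retr" 'BEGIN { if (c > 0) printf "BE AMPDU retry ratio: %.3f\n", r / c }'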
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 14:09 ` Pete Heist @ 2018-04-24 14:34 ` Toke Høiland-Jørgensen 2018-04-24 19:10 ` Pete Heist 0 siblings, 1 reply; 56+ messages in thread From: Toke Høiland-Jørgensen @ 2018-04-24 14:34 UTC (permalink / raw) To: Pete Heist; +Cc: make-wifi-fast Pete Heist <pete@eventide.io> writes: >> On Apr 24, 2018, at 3:51 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: >> >> Assuming you have debugfs enabled you should be able to get aggregate >> statistics from /sys/kernel/debug/ieee80211/phy0/ath9k/xmit - at least >> that contains retries, but not backoff data, unfortunately. > > If it’s the ratio of "AMPDUs Completed” to "AMPDUs Retried” or “AMPDUs > XRetried” we’re looking at, there’s a stark difference between Cabin > 12 and Cabin 28. That said, they have quite different traffic > patterns, so I’ll have to evaluate this in a situation where traffic > patterns are more similar... Yeah. I'm not actually sure how exactly those numbers reflect what happens on the medium (nor what the difference between retried and xretried is), but it's an indication at least. >>> I wish I could cable everything, but it isn’t physically practical. >>> The next possibility is dual channel APs, or separate backhaul links, >>> all costing something... >> >> Yeah, a separate backhaul on a different channel would cut you >> contention in half, basically. Right now, each transmission has to >> occupy the channel twice... > > Could that cause mean RTT to go up 10x? :) In combination with having > two repeaters on one gateway… Not sure. You do get HOL blocking while a packet is retried, basically. And ath9k will keep trying way after it should have given up (up to 30 retries per packet). If you combine this with a lot of backoff for each transmission (which is also quite likely in a very congested setting), and I suppose it might be possible... -Toke ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 14:34 ` Toke Høiland-Jørgensen @ 2018-04-24 19:10 ` Pete Heist 2018-04-24 21:32 ` Toke Høiland-Jørgensen 0 siblings, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-04-24 19:10 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: make-wifi-fast > On Apr 24, 2018, at 4:34 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: > > Not sure. You do get HOL blocking while a packet is retried, basically. > And ath9k will keep trying way after it should have given up (up to 30 > retries per packet). If you combine this with a lot of backoff for each > transmission (which is also quite likely in a very congested setting), > and I suppose it might be possible… So that means everyone else waits while a packet is sent and re-sent to a station with a weak signal, for example? I see how that would wreck the latency pretty quick with the number of stations connected, plus channel contention with another repeater. It may be a much bigger factor than bloat. The physical situation is that there’s an AP on the roof of one cabin, and it’s surrounded by about 8 occupied cabins, each with 4 kids, at least one of which is probably curled up inside using Instagram or YouTube. Signals and tx bitrates of the stations are below, for interest. What I’ll try to find out is if it’s the activities of one or two devices that affect things the most, as it appears that mean latency can suddenly go down to < 10ms and up to > 200ms because of “something"... root@Cabin_12:~# iw dev ap0_1 station dump | grep "signal avg" signal avg: -60 [-67, -61] dBm signal avg: -51 [-58, -52] dBm signal avg: -61 [-64, -64] dBm signal avg: -60 [-61, -69] dBm signal avg: -70 [-72, -73] dBm signal avg: -70 [-71, -79] dBm signal avg: -75 [-78, -79] dBm signal avg: -73 [-79, -76] dBm signal avg: -75 [-80, -78] dBm signal avg: -68 [-69, -74] dBm signal avg: -76 [-77, -86] dBm signal avg: -84 [-87, -88] dBm signal avg: -79 [-81, -82] dBm signal avg: -76 [-81, -78] dBm signal avg: -58 [-59, -65] dBm signal avg: -72 [-74, -79] dBm signal avg: -74 [-81, -75] dBm signal avg: -80 [-82, -84] dBm signal avg: -65 [-69, -68] dBm signal avg: -78 [-81, -81] dBm signal avg: -68 [-69, -78] dBm signal avg: -72 [-73, -81] dBm signal avg: -70 [-76, -71] dBm signal avg: -80 [-81, -89] dBm signal avg: -75 [-78, -78] dBm signal avg: -72 [-76, -74] dBm signal avg: -78 [-82, -80] dBm signal avg: -63 [-66, -68] dBm signal avg: -78 [-85, -79] dBm signal avg: -75 [-78, -78] dBm signal avg: -65 [-69, -68] dBm root@Cabin_12:~# iw dev ap0_1 station dump | grep "tx bitrate" tx bitrate: 43.3 MBit/s MCS 4 short GI tx bitrate: 58.5 MBit/s MCS 6 tx bitrate: 57.8 MBit/s MCS 5 short GI tx bitrate: 72.2 MBit/s MCS 7 short GI tx bitrate: 52.0 MBit/s MCS 5 tx bitrate: 26.0 MBit/s MCS 3 tx bitrate: 57.8 MBit/s MCS 5 short GI tx bitrate: 52.0 MBit/s MCS 5 tx bitrate: 39.0 MBit/s MCS 4 tx bitrate: 39.0 MBit/s MCS 4 tx bitrate: 19.5 MBit/s MCS 2 tx bitrate: 39.0 MBit/s MCS 4 tx bitrate: 19.5 MBit/s MCS 2 tx bitrate: 28.9 MBit/s MCS 3 short GI tx bitrate: 28.9 MBit/s MCS 3 short GI tx bitrate: 6.5 MBit/s MCS 0 tx bitrate: 39.0 MBit/s MCS 4 tx bitrate: 58.5 MBit/s MCS 6 tx bitrate: 72.2 MBit/s MCS 7 short GI tx bitrate: 26.0 MBit/s MCS 3 tx bitrate: 39.0 MBit/s MCS 4 tx bitrate: 43.3 MBit/s MCS 4 short GI tx bitrate: 43.3 MBit/s MCS 4 short GI tx bitrate: 6.5 MBit/s MCS 0 tx bitrate: 28.9 MBit/s MCS 3 short GI tx bitrate: 1.0 MBit/s tx bitrate: 6.5 MBit/s MCS 0 tx bitrate: 65.0 MBit/s MCS 7 tx bitrate: 39.0 
MBit/s MCS 4 tx bitrate: 26.0 MBit/s MCS 3 tx bitrate: 1.0 MBit/s ^ permalink raw reply [flat|nested] 56+ messages in thread
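For a quick summary of a station dump like the one above, something along these lines should work directly on the AP with BusyBox awk (a sketch; it assumes the same ap0_1 interface name used above):

  # Sketch: station count, mean signal, and how many stations sit at each MCS index.
  iw dev ap0_1 station dump | awk '
    /signal avg:/ { sum += $3; n++ }
    /tx bitrate:/ { for (i = 1; i <= NF; i++) if ($i == "MCS") mcs[$(i + 1)]++ }
    END {
      if (n) printf "stations: %d, mean signal: %.1f dBm\n", n, sum / n
      for (m in mcs) printf "MCS %s: %d station(s)\n", m, mcs[m]
    }'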
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 19:10 ` Pete Heist @ 2018-04-24 21:32 ` Toke Høiland-Jørgensen 2018-04-25 6:05 ` Pete Heist 2018-04-26 0:38 ` David Lang 0 siblings, 2 replies; 56+ messages in thread From: Toke Høiland-Jørgensen @ 2018-04-24 21:32 UTC (permalink / raw) To: Pete Heist; +Cc: make-wifi-fast Pete Heist <pete@eventide.io> writes: >> On Apr 24, 2018, at 4:34 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: >> >> Not sure. You do get HOL blocking while a packet is retried, basically. >> And ath9k will keep trying way after it should have given up (up to 30 >> retries per packet). If you combine this with a lot of backoff for each >> transmission (which is also quite likely in a very congested setting), >> and I suppose it might be possible… > > So that means everyone else waits while a packet is sent and re-sent > to a station with a weak signal, for example? I see how that would > wreck the latency pretty quick with the number of stations connected, > plus channel contention with another repeater. It may be a much bigger > factor than bloat. Yup, exactly... > The physical situation is that there’s an AP on the roof of one cabin, > and it’s surrounded by about 8 occupied cabins, each with 4 kids, at > least one of which is probably curled up inside using Instagram or > YouTube. Signals and tx bitrates of the stations are below, for > interest. What I’ll try to find out is if it’s the activities of one > or two devices that affect things the most, as it appears that mean > latency can suddenly go down to < 10ms and up to > 200ms because of > “something"... Yeah, with those signal rates, there's going to be quite some pause when the slow stations try to transmit something... :/ -Toke ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 21:32 ` Toke Høiland-Jørgensen @ 2018-04-25 6:05 ` Pete Heist 2018-04-25 6:36 ` Sebastian Moeller 2018-04-26 0:41 ` David Lang 2018-04-26 0:38 ` David Lang 1 sibling, 2 replies; 56+ messages in thread From: Pete Heist @ 2018-04-25 6:05 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: make-wifi-fast > On Apr 24, 2018, at 11:32 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: > > Yeah, with those signal rates, there's going to be quite some pause when > the slow stations try to transmit something... :/ So then, when would bloat actually become a problem on this hardware? I suppose you would need well-connected stations (maybe even just one or two) with multiple competing flows. And for the AP to not be one of two single channel repeaters with poorly connected stations, where contention becomes a problem sooner than bloat. For starters, I’m going to try to hang one of the two repeaters on the same channel off a separate gateway… ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-25 6:05 ` Pete Heist @ 2018-04-25 6:36 ` Sebastian Moeller 2018-04-25 17:17 ` Pete Heist 2018-04-26 0:41 ` David Lang 1 sibling, 1 reply; 56+ messages in thread From: Sebastian Moeller @ 2018-04-25 6:36 UTC (permalink / raw) To: make-wifi-fast, Pete Heist, Toke Høiland-Jørgensen [-- Attachment #1: Type: text/plain, Size: 1158 bytes --] Hi Pete, Silly question, are the cabins by chance behind a common breaker? Then maybe you could switch the 'backhaul' to Powerline tech? On April 25, 2018 8:05:34 AM GMT+02:00, Pete Heist <pete@eventide.io> wrote: > >> On Apr 24, 2018, at 11:32 PM, Toke Høiland-Jørgensen <toke@toke.dk> >wrote: >> >> Yeah, with those signal rates, there's going to be quite some pause >when >> the slow stations try to transmit something... :/ > >So then, when would bloat actually become a problem on this hardware? I >suppose you would need well-connected stations (maybe even just one or >two) with multiple competing flows. And for the AP to not be one of two >single channel repeaters with poorly connected stations, where >contention becomes a problem sooner than bloat. > >For starters, I’m going to try to hang one of the two repeaters on the >same channel off a separate gateway… > >_______________________________________________ >Make-wifi-fast mailing list >Make-wifi-fast@lists.bufferbloat.net >https://lists.bufferbloat.net/listinfo/make-wifi-fast -- Sent from my Android device with K-9 Mail. Please excuse my brevity. [-- Attachment #2: Type: text/html, Size: 1588 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-25 6:36 ` Sebastian Moeller @ 2018-04-25 17:17 ` Pete Heist 0 siblings, 0 replies; 56+ messages in thread From: Pete Heist @ 2018-04-25 17:17 UTC (permalink / raw) To: Sebastian Moeller; +Cc: make-wifi-fast > On Apr 25, 2018, at 8:36 AM, Sebastian Moeller <moeller0@gmx.de> wrote: > > Silly question, are the cabins by chance behind a common breaker? Then maybe you could switch the 'backhaul' to Powerline tech? Not silly at all! I tried it once before from our main router to the closest cabin and couldn’t manage to get any signal light at all on the PLC adapter. I’m glad you brought this up though, because I’m inspired to find a wiring diagram of the camp and try again. There are two buildings with cabled Internet and it’s true, if I can get it to work _anywhere_ it might help. Then the next question is what kind of range/speed I can get out of it. I expect it will be about a 100 meter run. I have about a 30 meter run on my adapter at home, and I think it does around 80Mbit, with an unloaded ping time of around 8ms, actually... ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-25 6:05 ` Pete Heist 2018-04-25 6:36 ` Sebastian Moeller @ 2018-04-26 0:41 ` David Lang 2018-04-26 19:40 ` Pete Heist 1 sibling, 1 reply; 56+ messages in thread From: David Lang @ 2018-04-26 0:41 UTC (permalink / raw) To: Pete Heist; +Cc: Toke Høiland-Jørgensen, make-wifi-fast [-- Attachment #1: Type: TEXT/PLAIN, Size: 685 bytes --] On Wed, 25 Apr 2018, Pete Heist wrote: >> On Apr 24, 2018, at 11:32 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: >> >> Yeah, with those signal rates, there's going to be quite some pause when >> the slow stations try to transmit something... :/ > > So then, when would bloat actually become a problem on this hardware? In this case, it's unlikely to be a real problem. Bufferbloat happens when you have a high bandwidth connection on one side and a low bandwidth connection on the other side of a router (the bigger the difference, the more likely you are to run into trouble) In this case, the wifi issues overwhelm everything else, and bufferbloat just isn't a noticeable factor. ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-26 0:41 ` David Lang @ 2018-04-26 19:40 ` Pete Heist 0 siblings, 0 replies; 56+ messages in thread From: Pete Heist @ 2018-04-26 19:40 UTC (permalink / raw) To: David Lang; +Cc: Toke Høiland-Jørgensen, make-wifi-fast > On Apr 26, 2018, at 2:41 AM, David Lang <david@lang.hm> wrote: > > On Wed, 25 Apr 2018, Pete Heist wrote: > >> So then, when would bloat actually become a problem on this hardware? > > Bufferbloat happens when you have a high bandwidth connection on one side and a low bandwidth connection on the other side of a router (the bigger the difference, the more likely you are to run into trouble) That was more of a rhetorical question- kind of jerky really. :) I was running a number of flent tests on this hardware with two well connected clients showing how the new ath9k driver improves latencies in that case rather dramatically. I’m not familiar yet with the point at which contention becomes more of a problem, but I hope to find out… ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 21:32 ` Toke Høiland-Jørgensen 2018-04-25 6:05 ` Pete Heist @ 2018-04-26 0:38 ` David Lang 2018-04-26 21:41 ` Pete Heist 1 sibling, 1 reply; 56+ messages in thread From: David Lang @ 2018-04-26 0:38 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: Pete Heist, make-wifi-fast [-- Attachment #1: Type: TEXT/PLAIN, Size: 1320 bytes --] On Tue, 24 Apr 2018, Toke Høiland-Jørgensen wrote: > Pete Heist <pete@eventide.io> writes: > >>> On Apr 24, 2018, at 4:34 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: >>> >>> Not sure. You do get HOL blocking while a packet is retried, basically. >>> And ath9k will keep trying way after it should have given up (up to 30 >>> retries per packet). If you combine this with a lot of backoff for each >>> transmission (which is also quite likely in a very congested setting), >>> and I suppose it might be possible… >> >> So that means everyone else waits while a packet is sent and re-sent >> to a station with a weak signal, for example? I see how that would >> wreck the latency pretty quick with the number of stations connected, >> plus channel contention with another repeater. It may be a much bigger >> factor than bloat. > > Yup, exactly... https://www.usenix.org/publications/login/april-2013-volume-38-number-2/wireless-means-radio https://www.usenix.org/conference/lisa12/technical-sessions/presentation/lang_david_wireless http://lang.hm/talks/topics/Wireless/Cascadia_2012/ (apologies for never cleaning this up to one set of good audio/video and slides) can you make a map of how well each AP hears every other AP? (iw scan on each device in turn to report the signal strength of the others) ^ permalink raw reply [flat|nested] 56+ messages in thread
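One rough way to collect that from each AP in turn, assuming the interface is allowed to scan while the AP is up (a sketch with BusyBox awk, untested on the OM2P-HS; adjust the interface name as needed):

  # Sketch: list neighbouring BSSIDs with their signal level and SSID, one per line.
  iw dev wlan0 scan | awk '
    /^BSS /   { bss = $2; sub(/\(.*/, "", bss) }   # strip the "(on wlan0)" suffix
    /signal:/ { sig = $2 }
    /SSID:/   { sub(/^[ \t]*SSID: /, ""); print bss, sig " dBm", $0 }'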
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-26 0:38 ` David Lang @ 2018-04-26 21:41 ` Pete Heist 2018-04-26 21:44 ` Sebastian Moeller 0 siblings, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-04-26 21:41 UTC (permalink / raw) To: David Lang; +Cc: Toke Høiland-Jørgensen, make-wifi-fast [-- Attachment #1: Type: text/plain, Size: 1136 bytes --] > On Apr 26, 2018, at 2:38 AM, David Lang <david@lang.hm> wrote: > > can you make a map of how well each AP hears every other AP? (iw scan on each device in turn to report the signal strength of the others) This should update once per day for a while: http://www.drhleny.cz/smokeping/ A textual map is below. I use channels 1, 6 and 11 to segment things. My biggest problem is on channel 1, where I have two repeaters hanging off of one gateway. First I’ll try to get cabins 12 and 20 on separate gateways / channels, which will be a challenge with the line-of-sight situation and only three non-overlapping channels. If it isn’t enough, I'll see if I can get anything else cabled or backhauled with 5 GHz. Thanks for the ideas!
CH 1:
ServiceEast sees:
- Cabin12 -69
- Cabin20 -62
Cabin12 sees:
- ServiceEast -66 (gateway)
- Cabin20 -72
Cabin20 sees:
- ServiceEast -62 (gateway)
- Cabin12 -72

CH 6:
ServiceWest sees:
- Cabin28 -50
Cabin28 sees:
- ServiceWest -49 (gateway)
Office sees nothing

CH 11:
Reception sees:
- CabinA1 -68
CabinA1 sees:
- Reception -68 (gateway)
[-- Attachment #2: Type: text/html, Size: 3201 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-26 21:41 ` Pete Heist @ 2018-04-26 21:44 ` Sebastian Moeller 2018-04-26 21:56 ` Pete Heist 0 siblings, 1 reply; 56+ messages in thread From: Sebastian Moeller @ 2018-04-26 21:44 UTC (permalink / raw) To: Pete Heist; +Cc: David Lang, make-wifi-fast Hi Pete, > On Apr 26, 2018, at 23:41, Pete Heist <pete@eventide.io> wrote: > > >> On Apr 26, 2018, at 2:38 AM, David Lang <david@lang.hm> wrote: >> >> can you make a map of how well each AP hears every other AP? (iw scan on each device in turn to report the signal strength of the others) > > This should update once per day for a while: http://www.drhleny.cz/smokeping/ > > A textual map is below. I use channels 1, 6 and 11 to segment things. Why? I believe in Europe 1, 5, 9, 13 will allow one more independent channel (see https://en.wikipedia.org/wiki/List_of_WLAN_channels). I believe if your site is remote enough that there is nobody else around you might actually be able to get away with 1, 5, 9, 13 ;) > My biggest problem is on channel 1, where I have two repeaters hanging off of one gateway. > > First I’ll try to get cabins 12 and 20 on separate gateways / channels, which will be a challenge with the line-of-sight situation and only three non-overlapping channels. If it isn’t enough, I'll see if I can get anything else cabled or backhauled with 5 GHz. Thanks for the ideas! > > CH 1: > > ServiceEast sees: > - Cabin12 -69 > - Cabin20 -62 > > Cabin12 sees: > - ServiceEast -66 (gateway) > - Cabin20 -72 > > Cabin20 sees: > - ServiceEast -62 (gateway) > - Cabin12 -72 > > CH 6: > > ServiceWest sees: > - Cabin28 -50 > > Cabin28 sees: > - ServiceWest -49 (gateway) > > Office sees nothing > > CH 11: > > Reception sees: > - CabinA1 -68 > > CabinA1 sees: > - Reception -68 (gateway) > > _______________________________________________ > Make-wifi-fast mailing list > Make-wifi-fast@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/make-wifi-fast ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-26 21:44 ` Sebastian Moeller @ 2018-04-26 21:56 ` Pete Heist 2018-04-26 22:04 ` David Lang 0 siblings, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-04-26 21:56 UTC (permalink / raw) To: Sebastian Moeller; +Cc: David Lang, make-wifi-fast > On Apr 26, 2018, at 11:44 PM, Sebastian Moeller <moeller0@gmx.de> wrote: > > Hi Pete, > >> A textual map is below. I use channels 1, 6 and 11 to segment things. > > Why? I believe in Europe 1, 5, 9, 13 will allow one more independent channel (see https://en.wikipedia.org/wiki/List_of_WLAN_channels). I believe if your site is remote enough that there is nobody else around you might actually be able to get away with 1, 5, 9, 13 ;) It’s remote enough. I could try it, but some of the equipment I have from the US isn't happy above 11, and we have international guests that might run into problems. There are also minor overlaps between this set of channels, but that’s probably a smaller issue than compatibility… :) ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-26 21:56 ` Pete Heist @ 2018-04-26 22:04 ` David Lang 2018-04-26 22:47 ` Pete Heist 0 siblings, 1 reply; 56+ messages in thread From: David Lang @ 2018-04-26 22:04 UTC (permalink / raw) To: Pete Heist; +Cc: Sebastian Moeller, make-wifi-fast [-- Attachment #1: Type: TEXT/PLAIN, Size: 711 bytes --] On Thu, 26 Apr 2018, Pete Heist wrote: >> On Apr 26, 2018, at 11:44 PM, Sebastian Moeller <moeller0@gmx.de> wrote: >> >> Hi Pete, >> >>> A textual map is below. I use channels 1, 6 and 11 to segment things. >> >> Why? I believe in Europe 1, 5, 9, 13 will allow one more independent channel (see https://en.wikipedia.org/wiki/List_of_WLAN_channels). I believe if your site is remote enough that there is nobody else around you might actually be able to get away with 1, 5, 9, 13 ;) > > It’s remote enough. I could try it, but some of the equipment I have from the US isn't happy above 11 If you are running OpenWRT, then you should be able to set the country code and not worry where the equipment was from. ^ permalink raw reply [flat|nested] 56+ messages in thread
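For reference, both the regulatory domain and the channel are per-radio options in /etc/config/wireless on OpenWrt/LEDE, so something like the sketch below should be enough (it assumes the default radio0 section name; CZ is only an example country code):

  # Sketch: set the regulatory domain and move one radio to channel 13.
  uci set wireless.radio0.country='CZ'
  uci set wireless.radio0.channel='13'
  uci commit wireless
  wifi    # reload the wireless configuration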
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-26 22:04 ` David Lang @ 2018-04-26 22:47 ` Pete Heist 2018-04-27 10:15 ` Toke Høiland-Jørgensen 0 siblings, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-04-26 22:47 UTC (permalink / raw) To: David Lang; +Cc: Sebastian Moeller, make-wifi-fast > On Apr 27, 2018, at 12:04 AM, David Lang <david@lang.hm> wrote: >> >> It’s remote enough. I could try it, but some of the equipment I have from the US isn't happy above 11 > > If you are running OpenWRT, then you should be able to set the country code and now worry where the equipment was from. Theoretically, but for starters my older 2007 MacBook Pro doesn’t seem to want to use 13, whereas my 2011 is fine with it. I don’t know the country code selection algorithm for all devices, but some may use the country code from the first AP they see, leading to anecdotes like this: http://isnowhere.com/mac-wifi-county-code-fail-channel-13/ Some people enable personal hotspots on their phone, which if they’re a traveller, could potentially cause someone else’s region to change. It’s still a good idea, and maybe I’ll give it another try now that it’s 2018 and this should work(!), but so far I’ve wanted to avoid the possibility of such complications. But I first have to figure out where to physically put a third gateway to split cabins 12 and 20- without that I won’t be able to get them on independent channels no matter how many I have available… :) ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-26 22:47 ` Pete Heist @ 2018-04-27 10:15 ` Toke Høiland-Jørgensen 2018-04-27 10:32 ` Pete Heist 0 siblings, 1 reply; 56+ messages in thread From: Toke Høiland-Jørgensen @ 2018-04-27 10:15 UTC (permalink / raw) To: Pete Heist, David Lang; +Cc: make-wifi-fast Pete Heist <pete@eventide.io> writes: >> On Apr 27, 2018, at 12:04 AM, David Lang <david@lang.hm> wrote: >>> >>> It’s remote enough. I could try it, but some of the equipment I have from the US isn't happy above 11 >> >> If you are running OpenWRT, then you should be able to set the country code and now worry where the equipment was from. > > Theoretically, but for starters my older 2007 MacBook Pro doesn’t seem > to want to use 13, whereas my 2011 is fine with it. I don’t know the > country code selection algorithm for all devices, but some may use the > country code from the first AP they see, leading to anecdotes like > this: http://isnowhere.com/mac-wifi-county-code-fail-channel-13/ Well, you could also use channel 13 as a backhaul channel only? :) -Toke ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-27 10:15 ` Toke Høiland-Jørgensen @ 2018-04-27 10:32 ` Pete Heist 0 siblings, 0 replies; 56+ messages in thread From: Pete Heist @ 2018-04-27 10:32 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: David Lang, make-wifi-fast > On Apr 27, 2018, at 12:15 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: > > Well, you could also use channel 13 as a backhaul channel only? :) True that, but if I go for backhaul gear it will likely be 5 GHz. It’s true that I might have some older 2.4 GHz devices I could repurpose as backhaul, but I still don’t know the practical effect of 2 MHz of overlap when doing 1 5 9 13, so I'll see if I can manage without having to find out… :)
<Channel> <Start Frequency> <Mid Frequency> <End Frequency>
1          2401              2412             2423
5          2421              2432             2443
9          2441              2452             2463
13         2461              2472             2483
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 13:51 ` Toke Høiland-Jørgensen 2018-04-24 14:09 ` Pete Heist @ 2018-04-26 0:35 ` David Lang 1 sibling, 0 replies; 56+ messages in thread From: David Lang @ 2018-04-26 0:35 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: Pete Heist, make-wifi-fast [-- Attachment #1: Type: TEXT/PLAIN, Size: 783 bytes --] On Tue, 24 Apr 2018, Toke Høiland-Jørgensen wrote: >> I wish I could cable everything, but it isn’t physically practical. >> The next possibility is dual channel APs, or separate backhaul links, >> all costing something... > > Yeah, a separate backhaul on a different channel would cut you > contention in half, basically. Right now, each transmission has to > occupy the channel twice... worse than that, it causes grief on other stations further away that can hear one station and not the other. Splitting things so that user access is on 2.4G and relaying between APs is on 5G will help quite a bit. If you can add some directional antennas on 5G, it will help as well Even without being able to wire everything, if you have any that you can wire together it will help ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 8:33 [Make-wifi-fast] mesh deployment with ath9k driver changes Pete Heist 2018-04-24 11:54 ` Toke Høiland-Jørgensen @ 2018-04-27 11:42 ` Valent Turkovic 2018-04-27 11:50 ` Pete Heist 2018-04-27 11:47 ` Valent Turkovic 2 siblings, 1 reply; 56+ messages in thread From: Valent Turkovic @ 2018-04-27 11:42 UTC (permalink / raw) To: Pete Heist; +Cc: make-wifi-fast Hi Pete, On Tue, Apr 24, 2018 at 10:33 AM, Pete Heist <pete@eventide.io> wrote: > I have a 7 node (3 of which are repeaters) mesh deployed at a campground using Open Mesh’s 6.4 beta (with LEDE and the ath9k changes). I set up an internal SmokePing instance with each AP as a target, and the first guests of the season are arriving (school kids, the best testers). The setup for our three repeaters is: Which OpenMesh devices are you using? I know they have a few different models (https://www.openmesh.com/products/wifi). I would highly suggest to use dual radio devices. Does OpenMesh software allow you to choose which radio the mesh interface runs on and which one clients connect to? Cheers, Valent. ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-27 11:42 ` Valent Turkovic @ 2018-04-27 11:50 ` Pete Heist 2018-04-27 11:59 ` Valent Turkovic 0 siblings, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-04-27 11:50 UTC (permalink / raw) To: Valent Turkovic; +Cc: make-wifi-fast > On Apr 27, 2018, at 1:42 PM, Valent Turkovic <valent@otvorenamreza.org> wrote: > > Which OpenMesh devices are you using? I know they have a few different > models (https://www.openmesh.com/products/wifi). > I would highly suggest to use dual radio devices. Does OpenMesh > software allow you to choose on which radio mesh interface is running > and on which clients can connect to? 8x OM2P-HS, single radio devices, mainly due to cost in this situation. I don’t know for sure if the radio used for mesh can be selected because I don’t have a dual radio device to test, but I don’t see any evidence that you can configure it in the management interface… ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-27 11:50 ` Pete Heist @ 2018-04-27 11:59 ` Valent Turkovic 2018-04-27 12:17 ` Pete Heist 0 siblings, 1 reply; 56+ messages in thread From: Valent Turkovic @ 2018-04-27 11:59 UTC (permalink / raw) To: Pete Heist; +Cc: make-wifi-fast On Fri, Apr 27, 2018 at 1:50 PM, Pete Heist <pete@eventide.io> wrote: > >> On Apr 27, 2018, at 1:42 PM, Valent Turkovic <valent@otvorenamreza.org> wrote: >> >> Which OpenMesh devices are you using? I know they have a few different >> models (https://www.openmesh.com/products/wifi). >> I would highly suggest to use dual radio devices. Does OpenMesh >> software allow you to choose on which radio mesh interface is running >> and on which clients can connect to? > > 8x OM2P-HS, single radio devices, mainly due to cost in this situation. I don’t know for sure if the radio used for mesh can be selected because I don’t have a dual radio device to test, but I don’t see any evidence that you can configure it in the management interface… > Unfortunately you shot yourself in the foot with the wrong choice of under-powered devices :( You need at least two radio devices, and even then it is a question whether that would work. Another thing you could try on a budget is to build your own solution from off-the-shelf components, but you need to invest much more time and effort into this. Some rules you need to follow to get a working network:
- avoid using omni-directional antennas if humanly possible (take a look at TP-LINK CPE210/510 and Ubiquiti Nano M2/M5 devices)
- use dedicated radios for mesh (this means using dual radio devices if possible, or two cheap devices instead of one more expensive one)
- cut tree branches that are obstructing LOS, move devices higher or lower to achieve clear LOS
- run outdoor cable through the ground instead of wifi if the location is close enough
Hope things work out! ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-27 11:59 ` Valent Turkovic @ 2018-04-27 12:17 ` Pete Heist 0 siblings, 0 replies; 56+ messages in thread From: Pete Heist @ 2018-04-27 12:17 UTC (permalink / raw) To: Valent Turkovic; +Cc: make-wifi-fast > On Apr 27, 2018, at 1:59 PM, Valent Turkovic <valent@otvorenamreza.org> wrote: > > Some rules you need to follow to get an working network: > - avoid using using omni-directional antennas if humanly possible > (take a look at TP-LINK CPE210/510 and Ubiquiti Nano M2/M5 devices) > - use dedicated radios for mesh (this means using dual radio devices > if possible or two cheap devices instead of one more expensive) > - cut tree branches that are obstructing LOS, move devices higher or > lower to achieve clear LOS > - run outdoor cable trough ground instead wifi if location is close enough > > Hope things work out! Judging from our Cabin 28 repeater (which so far works “well enough”), I believe that just splitting the channels so there’s only one repeater per gateway will help a lot. Unfortunately cutting tree branches and trenching aren’t practical here, nor is much expense, so I’ll just take it step by step and hopefully enjoy how it improves. Thanks for the ideas… :) ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-24 8:33 [Make-wifi-fast] mesh deployment with ath9k driver changes Pete Heist 2018-04-24 11:54 ` Toke Høiland-Jørgensen 2018-04-27 11:42 ` Valent Turkovic @ 2018-04-27 11:47 ` Valent Turkovic 2018-04-27 12:00 ` Pete Heist 2 siblings, 1 reply; 56+ messages in thread From: Valent Turkovic @ 2018-04-27 11:47 UTC (permalink / raw) To: Pete Heist; +Cc: make-wifi-fast Hi Pete, I'm betting that you are running into a classic issue of not enough airtime to do all you want, and then you get into death-spiral of frame re-transmission (symptom you see are high ping times) and then whole network collapses. Get someone who has wifi analysis tools to take a snapshot of your airtime and watch for number of re-transmitted frames (or get that info from ath9 driver statistics). If you see more than 5% of frame re-transmissions you are having noticeable issues, if you see more than 15% of frame re-transmissions your network is probably not usable. Hope this helps, Valent. ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-04-27 11:47 ` Valent Turkovic @ 2018-04-27 12:00 ` Pete Heist 0 siblings, 0 replies; 56+ messages in thread From: Pete Heist @ 2018-04-27 12:00 UTC (permalink / raw) To: Valent Turkovic; +Cc: make-wifi-fast > On Apr 27, 2018, at 1:47 PM, Valent Turkovic <valent@otvorenamreza.org> wrote: > If you see more than 5% of frame > re-transmissions you are having noticeable issues, if you see more > than 15% of frame re-transmissions your network is probably not > usable. An update from the ath9k stats Toke pointed out earlier in the thread on our worst AP (one of two repeaters off one gateway, NLOS):
root@Cabin_12:~# cat /sys/kernel/debug/ieee80211/phy0/ath9k/xmit
                        BE     BK    VI  VO
AMPDUs Completed:  6801646  68748  3000   0
AMPDUs Retried:    1011622   8053   444   0

1011622 / 6801646 = 0.149

And our best AP (one repeater off one gateway, LOS):

root@Cabin_28:~# cat /sys/kernel/debug/ieee80211/phy0/ath9k/xmit
                        BE      BK     VI  VO
AMPDUs Completed: 23947083  118834  31390   0
AMPDUs Retried:    1166349   11402    130   0

1166349 / 23947083 = 0.049

We weren’t entirely sure of the exact meaning of the stats, but 0.049 on one and 0.149 on the other are an indication, and it’s clear what I need to do… :) ^ permalink raw reply [flat|nested] 56+ messages in thread
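Since those counters are cumulative since boot, busy and idle hours get mixed together; a variation on the same idea (a sketch, with the same assumptions about the xmit file layout as earlier in the thread) samples the BE-queue counters twice and reports the retry percentage over just that window, which compares more directly against the 5%/15% rule of thumb quoted above:

  #!/bin/sh
  # Sketch: BE-queue AMPDU retry percentage over a five-minute window.
  XMIT=/sys/kernel/debug/ieee80211/phy0/ath9k/xmit
  snap() { awk -v lbl="$1" '$0 ~ lbl { print $(NF - 3) }' "$XMIT"; }

  c0=$(snap "AMPDUs Completed"); r0=$(snap "AMPDUs Retried")
  sleep 300
  c1=$(snap "AMPDUs Completed"); r1=$(snap "AMPDUs Retried")
  awk -v c=$((c1 - c0)) -v r=$((r1 - r0)) \
      'BEGIN { if (c > 0) printf "BE AMPDU retries over the window: %.1f%%\n", 100 * r / c }'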
* [Make-wifi-fast] mesh deployment with ath9k driver changes @ 2018-05-19 16:03 bkil 2018-05-20 18:56 ` Pete Heist 2018-05-31 0:52 ` David Lang 0 siblings, 2 replies; 56+ messages in thread From: bkil @ 2018-05-19 16:03 UTC (permalink / raw) To: make-wifi-fast In reply to this thread: https://lists.bufferbloat.net/pipermail/make-wifi-fast/2018-April/001787.html Sorry for the late response, although I can see from yesterday's SmokePing plots that the issue still prevails.
1. You should definitely not allow rates as low as 1Mb/s considering:
* plots of signal vs. rate,
* topology of closely packed cabins;
* mostly static, noise-free camp ground.
Almost all of your clients were able to link with >20Mb/s even at 70-80dBm. Those below were probably just idling. I'd limit the network to 802.11g/n-only, and would even consider disabling all rates below 12Mb/s. This should help both in working around imperfect schedulers and in client roaming. You could double check the coverage afterwards with a simple site survey. You may also test whether disassoc_low_ack makes things more stable around the edge. Despite the recently introduced air fairness patches, most other points are still valid from these earlier articles due to pathological schedulers: http://divdyn.com/disable-lower-legacy-data-rates/ https://blogs.cisco.com/wireless/wi-fi-taxes-digging-into-the-802-11b-penalty https://www.networkworld.com/article/2230601/cisco-subnet/dropping-legacy-802-11-support-from-your-infrastructure--part-2-.html Disabling 802.11b modulation also brings the added benefit of occupying less bandwidth (16.5-20 MHz OFDM vs. 22 MHz Barker/CCK), enabling the previously mentioned channel spacing of 1-5-9-13. https://wifinigel.blogspot.hu/2013/02/adjacent-channel-interference.html
2. Enable client isolation to mitigate broadcast storms.
3. If you still couldn't split the two cells that work on the same channel, at least try to reduce their TX power to reduce their range of interference. This may or may not improve things overall due to hidden nodes, though.
We'd definitely love to hear from you whether any of these worked or made things worse. Happy camping! ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-05-19 16:03 bkil @ 2018-05-20 18:56 ` Pete Heist 2018-05-31 0:52 ` David Lang 1 sibling, 0 replies; 56+ messages in thread From: Pete Heist @ 2018-05-20 18:56 UTC (permalink / raw) To: bkil; +Cc: make-wifi-fast Hi, thanks for the tips…most everything seems to be already set similarly in Open Mesh’s config except: - the disabling of lower rates- interesting idea and maybe the most consequential, I’ll see if I can do this in one of OM’s custom.sh scripts, otherwise it’s easy to do in plain OpenWrt - client isolation isn’t supported when bridging to a VLAN, which I’m doing at the moment It’s taking some convincing, but I have a rough plan to dig a minimal trench for a new cabled gateway to split cabins 12 and 20, which is the primary issue. If ground isn’t broken soon, it might be me at the shovel. :) Most definitely I’ll report back as I want to do some more testing on the ath9k changes, once this physical issue is taken care of... > On May 19, 2018, at 6:03 PM, bkil <bkil.hu+Aq@gmail.com> wrote: > > In reply to this thread: > https://lists.bufferbloat.net/pipermail/make-wifi-fast/2018-April/001787.html > > Sorry for the late response, although I can see from yesterday's > SmokePing plots that the issue still prevails. > > 1. > You should definitely not allow rates as low as 1Mb/s considering: > * plots of signal vs. rate, > * topology of closely packed cabins; > * mostly static, noise-free camp ground. > > Almost all of your clients were able to link with >20Mb/s even at > 70-80dBm. Those below were probably just idling. I'd limit the network > to 802.11g/n-only, and would even consider disabling all rates below > 12Mb/s. > > This should help both in working around imperfect schedulers and > clients roaming. > > You could double check the coverage afterwards with a simple site > survey. You may also test whether disassoc_low_ack makes things more > stable around the edge. > > Despite the recently introduced air fairness patches, most other > points are still valid from these earlier articles due to pathological > schedulers: > http://divdyn.com/disable-lower-legacy-data-rates/ > https://blogs.cisco.com/wireless/wi-fi-taxes-digging-into-the-802-11b-penalty > https://www.networkworld.com/article/2230601/cisco-subnet/dropping-legacy-802-11-support-from-your-infrastructure--part-2-.html > > Disabling 802.11/b modulation also brings the added benefit of > occupying less bandwidth (16.5-20 OFDM vs. 22 Barker/CCK), enabling > the previously mentioned channel spacing of 1-5-9-13. > > https://wifinigel.blogspot.hu/2013/02/adjacent-channel-interference.html > > 2. > Enable client isolation to mitigate broadcast storms. > > 3. > If you still couldn't split the two cells that work on the same > channel, at least try to reduce their TX power to reduce their range > of interference. This may or may not improve things overall due to > hidden nodes, though. > > We'd definitely love to hear from you whether any of these worked or > made things worse. Happy camping! ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-05-19 16:03 bkil 2018-05-20 18:56 ` Pete Heist @ 2018-05-31 0:52 ` David Lang 2018-06-08 9:37 ` Pete Heist 1 sibling, 1 reply; 56+ messages in thread From: David Lang @ 2018-05-31 0:52 UTC (permalink / raw) To: bkil; +Cc: make-wifi-fast On Sat, 19 May 2018, bkil wrote: > In reply to this thread: > https://lists.bufferbloat.net/pipermail/make-wifi-fast/2018-April/001787.html > > Sorry for the late response, although I can see from yesterday's > SmokePing plots that the issue still prevails. > > 1. > You should definitely not allow rates as low as 1Mb/s considering: > * plots of signal vs. rate, > * topology of closely packed cabins; > * mostly static, noise-free camp ground. > > Almost all of your clients were able to link with >20Mb/s even at > 70-80dBm. Those below were probably just idling. I'd limit the network > to 802.11g/n-only, and would even consider disabling all rates below > 12Mb/s. I have been wanting to do this on my APs for Scale (wndr3800 APs with ath9k chipsets) and have been unable to find how to do this on these chipsets. David Lang ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-05-31 0:52 ` David Lang @ 2018-06-08 9:37 ` Pete Heist 2018-06-09 15:32 ` bkil 0 siblings, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-06-08 9:37 UTC (permalink / raw) To: David Lang; +Cc: bkil, make-wifi-fast [-- Attachment #1: Type: text/plain, Size: 1523 bytes --] > On May 31, 2018, at 2:52 AM, David Lang <david@lang.hm> wrote: > > On Sat, 19 May 2018, bkil wrote: > >> In reply to this thread: >> https://lists.bufferbloat.net/pipermail/make-wifi-fast/2018-April/001787.html >> >> Sorry for the late response, although I can see from yesterday's >> SmokePing plots that the issue still prevails. >> >> 1. >> You should definitely not allow rates as low as 1Mb/s considering: >> * plots of signal vs. rate, >> * topology of closely packed cabins; >> * mostly static, noise-free camp ground. >> >> Almost all of your clients were able to link with >20Mb/s even at >> 70-80dBm. Those below were probably just idling. I'd limit the network >> to 802.11g/n-only, and would even consider disabling all rates below >> 12Mb/s. > > I have been wanting to do this on my APs for Scale (wndr3800 APs with ath9k chipsets) and have been unable to find how to do this on these chipsets. I also didn’t manage to disable specific MCS rates:
- supported_rates and basic_rate in /etc/config/wireless seem to only affect legacy rates, not ht-mcs-2.4 rates.
- "iw dev wlan0 set bitrates" only sets the transmit rate for the AP
But what I did do at 10:45am today, June 8, was disable 802.11b rates with:
uci set wireless.radio0.legacy_rates='0'
uci commit
reboot
So at least the minimum rate should be limited to 6Mbit. We’ll see if this helps in the next few days as the camp fills, and if I hear any complaints: https://www.drhleny.cz/smokeping/ [-- Attachment #2: Type: text/html, Size: 4858 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
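For a quick check that the 1/2/5.5/11 Mb/s rates really disappeared from the beacons after that change, scanning from a neighbouring AP or any nearby Linux client should be enough (a sketch; adjust the interface name as needed):

  # Sketch: with legacy_rates disabled, the b rates should no longer appear in the advertised rate sets.
  iw dev wlan0 scan | grep -E 'SSID:|Supported rates:|Extended supported rates:'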
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-06-08 9:37 ` Pete Heist @ 2018-06-09 15:32 ` bkil 2018-06-13 13:07 ` Pete Heist [not found] ` <CADuVhRWL2aVjzjfLHg1nPFa8Ae-hWrGrE7Wga4eUKon3oqoTXA@mail.gmail.com> 0 siblings, 2 replies; 56+ messages in thread From: bkil @ 2018-06-09 15:32 UTC (permalink / raw) To: Pete Heist; +Cc: David Lang, make-wifi-fast Hello, That's nice. You would probably get most of the benefits already if you could manage to prune B/G rates. You're right in that pruning the N rates is a different question. I think it is good enough already to allow G >= 12Mb/s and N >= MCS-0. Using N rates by itself would also enable all the benefits of LDPC, STBC, aggregation, (beam forming) and other goodies your chipsets could offer, thereby freeing up your spectrum some more. legacy_rates=0 seems to be an alias for enabling all g-rates and disabling b-rates: https://github.com/lede-project/source/blob/69f544937f8498e856690f9809a016f0d7f5f68b/package/network/services/hostapd/files/hostapd.sh#L110 So this should be the way to go in your device section:
option hwmode g
option htmode HT20 # probably already set
option distance 450 # minimal coverage
option beacon_int 200 # power/bandwidth saving
option basic_rate '12000 24000'
option supported_rates '12000 18000 24000 36000 48000 54000'
... (some other options automatically generated)
I also add these to the interface section:
option isolate 1 # intra-BSS
option dtim_period 5 # cheap power saving
Although I've seen a site where they mention that pruning N rates (and announcing a strange restricted set in the beacons) could cause issues with some drivers, they didn't say which specific drivers or devices had this issue. Maybe it was just a rule of thumb of theirs. Anyway, wpa_supplicant has a setting for this as well (ht_mcs): https://github.com/helmut-jacob/hostapd/blob/a35e34106723eb0b23df9114364d6fbc46fffab6/wpa_supplicant/wpa_supplicant.conf#L893 https://github.com/helmut-jacob/hostapd/blob/a35e34106723eb0b23df9114364d6fbc46fffab6/wpa_supplicant/config_ssid.h#L551 So some higher-level patching might also make it possible for hostapd to do the same. Although the effect is not entirely the same, it is already possible to instruct the AP TX to use only the higher rates (except for lookaround) by running these after wifi is up (in conjunction with the config above):
iw dev wlan0 set bitrates # clear mask
iw dev wlan0 set bitrates legacy-2.4 12 18 24 36 48 54 ht-mcs-2.4 1 2 3 4 5 6 7 # ... up to max MCS
Regarding isolation, correct me if I'm wrong, but ap_isolate in hostapd operates at another level, and that low-level bridging is allowed by default, so it is a good idea to disable it there as well:
# Client isolation can be used to prevent low-level bridging of frames between
# associated stations in the BSS. By default, this bridging is allowed.
# https://w1.fi/cgit/hostap/plain/hostapd/hostapd.conf
This came up on the forums as well recently: https://forum.lede-project.org/t/how-to-prevent-guest-network-clients-to-communicate-with-each-other/14831 Could you please verify that when you connect two devices to the same AP, they can't reach each other? (firewalls put aside) Even disabling local broadcasts could help a bit if you say that complete isolation is not feasible. Not sure if the said campers have already arrived, but SmokePing stats for today had greatly improved compared to the previous days. We'll see after a few more days, but it's good to know that you still have some more options to tweak.
It would be great if OpenWrt shipped with 802.11b disabled by default and let the user decide whether she wants to enable it for legacy users. I haven't encountered a 802.11b-only device in many years, and they say that >95% of clients are already N/AC capable. Regards On Fri, Jun 8, 2018 at 11:37 AM, Pete Heist <pete@eventide.io> wrote: > > On May 31, 2018, at 2:52 AM, David Lang <david@lang.hm> wrote: > > On Sat, 19 May 2018, bkil wrote: > > In reply to this thread: > https://lists.bufferbloat.net/pipermail/make-wifi-fast/2018-April/001787.html > > Sorry for the late response, although I can see from yesterday's > SmokePing plots that the issue still prevails. > > 1. > You should definitely not allow rates as low as 1Mb/s considering: > * plots of signal vs. rate, > * topology of closely packed cabins; > * mostly static, noise-free camp ground. > > Almost all of your clients were able to link with >20Mb/s even at > 70-80dBm. Those below were probably just idling. I'd limit the network > to 802.11g/n-only, and would even consider disabling all rates below > 12Mb/s. > > > I have been wanting to do this on my APs for Scale (wndr3800 APs with ath9k > chipsets) and have been unable to find how to do this on these chipsets. > > > I also didn’t manage to disable specific MCS rates: > - supported_rates and basic_rate in /etc/config/wireless seem to only affect > legacy rates, not ht-mcs-2.4 rates. > - "iw dev wlan0 set bitrates” only sets the transmit rate for the AP > > But what I did do at 10:45am today, June 8, was disable 802.11b rates with: > > uci set wireless.radio0.legacy_rates=‘0' > uci commit > reboot > > So at least the minimum rate should be limited to 6Mbit. We’ll see if this > helps in the next few days as the camp fills, and if I hear any complaints: > > https://www.drhleny.cz/smokeping/ > -- If you need an encryption key / ha kell titkosító kulcs: http://bkil.blogspot.hu/2014/08/public-key.html
^ permalink raw reply [flat|nested] 56+ messages in thread
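As a rough sketch of how the rate and beacon suggestions above might be applied with uci on a LEDE/OpenWrt box (the radio0/default_radio0 section names are placeholders, whether basic_rate/supported_rates belong in the device or the interface section may vary by release, and none of this is tested against the Open Mesh firmware):

  uci set wireless.radio0.hwmode='11g'
  uci set wireless.radio0.htmode='HT20'
  uci set wireless.radio0.beacon_int='200'
  uci set wireless.radio0.basic_rate='12000 24000'
  uci set wireless.radio0.supported_rates='12000 18000 24000 36000 48000 54000'
  uci set wireless.default_radio0.isolate='1'
  uci set wireless.default_radio0.dtim_period='5'
  uci commit wireless
  wifi

The values simply mirror the option lines quoted in the mail above; the kbit/s encoding of the rate lists and the 200 TU beacon interval would still need to be validated against the clients on site.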
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-06-09 15:32 ` bkil @ 2018-06-13 13:07 ` Pete Heist 2018-06-13 13:24 ` Toke Høiland-Jørgensen [not found] ` <CADuVhRWL2aVjzjfLHg1nPFa8Ae-hWrGrE7Wga4eUKon3oqoTXA@mail.gmail.com> 1 sibling, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-06-13 13:07 UTC (permalink / raw) To: bkil; +Cc: make-wifi-fast Trying one thing at a time, as legacy_rates='0' didn’t do anything noticeable, which is not surprising when we probably don’t have many 'b' devices connecting. We hit 4.9s ping times this morning, good work. :) Even though isolation couldn’t be turned on in the admin interface (because it says it can’t do it when bridging to a VLAN, for some reason), I was able to enable isolation for both SSIDs and it still seems to work, so that’s the change as of now: wireless.ap0_1.isolate='1' wireless.ap0_2.isolate='1' Before I could ping between clients, and now I can’t, so apparently isolation is doing what it should. The next test will be tonight. I’m still waiting for the digging / cabling project to happen, which is what I expect to bring the biggest benefit. Again, that will add a new cabled AP which will serve as a gateway for cabin 12 and split cabins 12 and 20. This should also improve the signal to 12 vastly, which is one of the most loaded APs and currently only has RSSI -71 / MCS 5 or 6 to its parent AP, now that the leaves are full on the trees. > On Jun 9, 2018, at 5:32 PM, bkil <bkil.hu+Aq@gmail.com> wrote: > > Hello, > > That's nice. You would probably get most of the benefits already if > you could manage to prune B/G rates. You're right in that pruning the > N rates is a different question. I think it is good enough already to > allow G >= 12Mb/s and N >= MCS-0. Using N rates by itself would also > enable all the benefits of LDPC, STBC, aggregation, (beam forming) and > other goodies your chipsets could offer, thereby freeing up your > spectrum some more. > > legacy_rates=0 seems to be an alias for enabling all g-rates and > disabling b-rates: > > https://github.com/lede-project/source/blob/69f544937f8498e856690f9809a016f0d7f5f68b/package/network/services/hostapd/files/hostapd.sh#L110 > > So this should be the way to go in your device section: > > option hwmode g > option htmode HT20 # probably already set > option distance 450 # minimal coverage > option beacon_int 200 # power/bandwidth saving > option basic_rate '12000 24000' > option supported_rates '12000 18000 24000 36000 48000 54000' > ... (some other options automatically generated) > > I also add these to the interface section: > > option isolate 1 # intra-BSS > option dtim_period 5 # cheap power saving > > Although I've seen a site where they mention that pruning N rates (and > announcing a strange restricted set in the beacons) could cause issues > with some drivers, they didn't say which specific drivers or devices > had this issue. Maybe it was just a rule of thumb of theirs. Anyway, > wpa_supplicant has a setting for this as well (ht_mcs): > > https://github.com/helmut-jacob/hostapd/blob/a35e34106723eb0b23df9114364d6fbc46fffab6/wpa_supplicant/wpa_supplicant.conf#L893 > > https://github.com/helmut-jacob/hostapd/blob/a35e34106723eb0b23df9114364d6fbc46fffab6/wpa_supplicant/config_ssid.h#L551 > > So some higher level patching might also be possible to enable hostapd > doing the same.
> > Although the effect is not entirely the same, it is already possible > to instruct the AP TX to use only the higher rates (except for > lookaround) by running these after wifi is up (in conjunction with the > config above): > > iw dev wlan0 set bitrates # clear mask > > iw dev wlan0 set bitrates legacy-2.4 12 18 24 36 48 54 ht-mcs-2.4 1 2 > 3 4 5 6 7 # ... up to max MCS > > > Regarding isolation, correct me if I'm wrong, but ap_isolate in > hostapd should be at another level and is enabled default, so it is > also a good idea to disable it: > > # Client isolation can be used to prevent low-level bridging of frames between > # associated stations in the BSS. By default, this bridging is allowed. > # https://w1.fi/cgit/hostap/plain/hostapd/hostapd.conf > > This came up on the forums as well recently: > https://forum.lede-project.org/t/how-to-prevent-guest-network-clients-to-communicate-with-each-other/14831 > > Could you please verify that when you connect two devices to the same > AP, they can't reach each other? (firewalls put aside) Even disabling > local broadcasts could help a bit if you say that complete isolation > is not feasible. > > Not sure if the said campers have already arrived, but SmokePing stats > for today had greatly improved compared to the previous days. We'll > see after a few more days, but it's good to know that you still have > some more options to tweak. > > It would be great if OpenWrt shipped with 802.11b disabled by default > and let the user decide whether she wants to enable it for legacy > users. I haven't encountered a 802.11b-only device in many years, and > they say that >95% of clients are already N/AC capable. > > Regards > > On Fri, Jun 8, 2018 at 11:37 AM, Pete Heist <pete@eventide.io> wrote: >> >> On May 31, 2018, at 2:52 AM, David Lang <david@lang.hm> wrote: >> >> On Sat, 19 May 2018, bkil wrote: >> >> In reply to this thread: >> https://lists.bufferbloat.net/pipermail/make-wifi-fast/2018-April/001787.html >> >> Sorry for the late response, although I can see from yesterday's >> SmokePing plots that the issue still prevails. >> >> 1. >> You should definitely not allow rates as low as 1Mb/s considering: >> * plots of signal vs. rate, >> * topology of closely packed cabins; >> * mostly static, noise-free camp ground. >> >> Almost all of your clients were able to link with >20Mb/s even at >> 70-80dBm. Those below were probably just idling. I'd limit the network >> to 802.11g/n-only, and would even consider disabling all rates below >> 12Mb/s. >> >> >> I have been wanting to do this on my APs for Scale (wndr3800 APs with ath9k >> chipsets) and have been unable to find how to do this on these chipsets. >> >> >> I also didn’t manage to disable specific MCS rates: >> - supported_rates and basic_rate in /etc/config/wireless seem to only affect >> legacy rates, not ht-mcs-2.4 rates. >> - "iw dev wlan0 set bitrates” only sets the transmit rate for the AP >> >> But what I did do at 10:45am today, June 8, was disable 802.11b rates with: >> >> uci set wireless.radio0.legacy_rates=‘0' >> uci commit >> reboot >> >> So at least the minimum rate should be limited to 6Mbit. 
We’ll see if this >> helps in the next few days as the camp fills, and if I hear any complaints: >> >> https://www.drhleny.cz/smokeping/ >> > > -- > If you need an encryption key / ha kell titkosító kulcs: > http://bkil.blogspot.hu/2014/08/public-key.html > > > On Fri, Jun 8, 2018 at 11:37 AM, Pete Heist <pete@eventide.io> wrote: >> >> On May 31, 2018, at 2:52 AM, David Lang <david@lang.hm> wrote: >> >> On Sat, 19 May 2018, bkil wrote: >> >> In reply to this thread: >> https://lists.bufferbloat.net/pipermail/make-wifi-fast/2018-April/001787.html >> >> Sorry for the late response, although I can see from yesterday's >> SmokePing plots that the issue still prevails. >> >> 1. >> You should definitely not allow rates as low as 1Mb/s considering: >> * plots of signal vs. rate, >> * topology of closely packed cabins; >> * mostly static, noise-free camp ground. >> >> Almost all of your clients were able to link with >20Mb/s even at >> 70-80dBm. Those below were probably just idling. I'd limit the network >> to 802.11g/n-only, and would even consider disabling all rates below >> 12Mb/s. >> >> >> I have been wanting to do this on my APs for Scale (wndr3800 APs with ath9k >> chipsets) and have been unable to find how to do this on these chipsets. >> >> >> I also didn’t manage to disable specific MCS rates: >> - supported_rates and basic_rate in /etc/config/wireless seem to only affect >> legacy rates, not ht-mcs-2.4 rates. >> - "iw dev wlan0 set bitrates” only sets the transmit rate for the AP >> >> But what I did do at 10:45am today, June 8, was disable 802.11b rates with: >> >> uci set wireless.radio0.legacy_rates=‘0' >> uci commit >> reboot >> >> So at least the minimum rate should be limited to 6Mbit. We’ll see if this >> helps in the next few days as the camp fills, and if I hear any complaints: >> >> https://www.drhleny.cz/smokeping/ >> ^ permalink raw reply [flat|nested] 56+ messages in thread
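A quick way to double-check that the isolation really is in effect (a sketch only; the client addresses, the interface name and the use of iputils arping are assumptions): from one associated client, probe another associated client at both L3 and L2, and make sure the gateway still answers:

  ping -c 3 192.168.1.101              # other client; should get no replies with isolate='1'
  arping -I wlan0 -c 3 192.168.1.101   # L2 probe; should also stay silent
  ping -c 3 192.168.1.1                # gateway; should still respond

Note that isolate='1' only stops frames being relayed within a single BSS; clients associated to different APs of the mesh can still reach each other over the bridge unless that is filtered elsewhere.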
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-06-13 13:07 ` Pete Heist @ 2018-06-13 13:24 ` Toke Høiland-Jørgensen 2018-06-13 16:01 ` Pete Heist 2018-06-13 16:30 ` Sebastian Moeller 0 siblings, 2 replies; 56+ messages in thread From: Toke Høiland-Jørgensen @ 2018-06-13 13:24 UTC (permalink / raw) To: Pete Heist, bkil; +Cc: make-wifi-fast Pete Heist <pete@eventide.io> writes: > Trying one thing at a time, as legacy_rates=‘1’ didn’t do anything > noticeable, which is not surprising when we probably don’t have many > ‘b' devices connecting. We hit 4.9s ping times this morning, good > work. :) > > Even though isolation couldn’t be turned on in the admin interface > (because it says it can’t do it when bridging to a VLAN, for some > reason), I was able to enable isolation for both SSIDs and it still > seems to work, so that’s the change as of now: > > wireless.ap0_1.isolate='1' > wireless.ap0_2.isolate=‘1’ > > Before I could ping between clients, and now I can’t, so apparently > isolation is doing what it should. The next test will be tonight. > > I’m still waiting for the digging / cabling project to happen, which > is what I expect to bring the biggest benefit. Again, that will add a > new cabled AP which will serve as a gateway for cabin 12 and split > cabins 12 and 20. This should also improve the signal to 12 vastly, > which is one of the most loaded APs and currently only has RSSI -71 / > MCS 5 or 6 to its parent AP, now that the leaves are full on the > trees. How to improve WiFi? Run cables! :D -Toke ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-06-13 13:24 ` Toke Høiland-Jørgensen @ 2018-06-13 16:01 ` Pete Heist 2018-06-30 19:14 ` bkil 2018-06-13 16:30 ` Sebastian Moeller 1 sibling, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-06-13 16:01 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: make-wifi-fast [-- Attachment #1: Type: text/plain, Size: 1196 bytes --] > On Jun 13, 2018, at 3:24 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: > > Pete Heist <pete@eventide.io <mailto:pete@eventide.io>> writes: > >> I’m still waiting for the digging / cabling project to happen, which > > How to improve WiFi? Run cables! :D Yes, and pave the earth right over… :) Seriously, watching this network evolve over the years has been a 6 year lesson in incremental progress, something like: 1) 4Mbit ADSL + 802.11g 2) 4Mbit ADSL/fq_codel + 802.11g 3) 4Mbit ADSL/fq_codel + 802.11n 4) 40Mbit via P2P WiFi + 802.11n 5) 40Mbit via P2P WiFi + 802.11n 2x2 6) 40Mbit via P2P WiFi + 802.11n 2x2 / make-wifi-fast ath9k driver This is not an enterprise, so we’ve never invested in high-end hardware, but perhaps life would have been better along the way if the entire WiFi industry had focused not on maximum single device throughput but on resolving contention, increasing responsiveness and providing fairness between multiple users. But then we can’t put “300 Mbps” on the box! I do appreciate your ath9k driver work for moving beyond that kind of thinking, I just need to see if I can fix the other stuff getting in its way… :) [-- Attachment #2: Type: text/html, Size: 4704 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-06-13 16:01 ` Pete Heist @ 2018-06-30 19:14 ` bkil 2018-07-04 21:47 ` Pete Heist 2018-07-09 23:33 ` Pete Heist 0 siblings, 2 replies; 56+ messages in thread From: bkil @ 2018-06-30 19:14 UTC (permalink / raw) To: Pete Heist; +Cc: Toke Høiland-Jørgensen, make-wifi-fast Dear Pete, We understand that you are reluctant to share full radiotap wlan traces due to privacy reasons and collecting them would strain your infrastructure still a bit more. As a compromise, let me suggest a lightweight alternative. You can set up to only collect trimmed frame metadata on your AP's of interest, ie., cabin 20 & 24 and the gateway running on the same channel. You can aggregate a full day's worth of data or just the interesting hours at a central location using a PC or NAS. Then you can export only a few key data fields to CSV to carefully mask out any remaining ID and data parts. After compressing such an export with xz, it should have a manageable size. I've made a few small scripts to do such an export and to illustrate how to run the central collection & the agents on your AP's in a quick and dirty way. Feel free to improve and share your findings. https://github.com/bkil/lede-pcap-investigation I'm sure many of us would like to have a look into such real life data, because your high density single radio setup sounds really interesting from a contention standpoint. Although I've tested it on both Atheros and Intel traces, you should probably preserve the original pcap's as well until we verify that all needed fields had been successfully exported. Also feel free to adjust the snap length if needed. After you have collected the data, please also attach a recent SmokePing screenshot so we can correlate the two. Although we can probably give more informed advice based on the traces, as a stop gap measure until you finish cabling, you may consider traffic shaping of the clients to improve QoS. For example, you may put a hard bandwidth cap on each client (or only those coming from an AP in question), prioritize HTTP & VoIP traffic and reduce P2P traffic, depending on what is the biggest data hog. Also, could you by any chance set up monitoring of some addition metrics, like CPU usage, I/O wait, load average and memory usage on your nodes? N.b.: It's a pity that networking trace anonymization tools aren't up to the challenge. Simple MAC randomization or hashing with data omission would be just fine for such a use case. https://sharkfestus.wireshark.org/sharkfest.11/presentations/A-11_Bongertz-Trace_File_Anonymization.pdf http://www.caida.org/tools/taxonomy/anontaxonomy.xml https://wiki.wireshark.org/Tools#Capture_file_anonymization https://cseweb.ucsd.edu/~snoeren/papers/slomo-nsdi13.pdf Regards On Wed, Jun 13, 2018 at 6:01 PM, Pete Heist <pete@heistp.net> wrote: > > On Jun 13, 2018, at 3:24 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: > > Pete Heist <pete@eventide.io> writes: > > I’m still waiting for the digging / cabling project to happen, which > > > How to improve WiFi? Run cables! 
:D > > > Yes, and pave the earth right over… :) > > Seriously, watching this network evolve over the years has been a 6 year > lesson in incremental progress, something like: > > 1) 4Mbit ADSL + 802.11g > 2) 4Mbit ADSL/fq_codel + 802.11g > 3) 4Mbit ADSL/fq_codel + 802.11n > 4) 40Mbit via P2P WiFi + 802.11n > 5) 40Mbit via P2P WiFi + 802.11n 2x2 > 6) 40Mbit via P2P WiFi + 802.11n 2x2 / make-wifi-fast ath9k driver > > This is not an enterprise, so we’ve never invested in high-end hardware, but > perhaps life would have been better along the way if the entire WiFi > industry had focused not on maximum single device throughput but on > resolving contention, increasing responsiveness and providing fairness > between multiple users. But then we can’t put “300 Mbps” on the box! > > I do appreciate your ath9k driver work for moving beyond that kind of > thinking, I just need to see if I can fix the other stuff getting in its > way… :) > > _______________________________________________ > Make-wifi-fast mailing list > Make-wifi-fast@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/make-wifi-fast ^ permalink raw reply [flat|nested] 56+ messages in thread
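For reference, the lightweight collection described above can be approximated with stock tools (a sketch only; the monitor interface name, collector address, port and netcat flags are assumptions, and the polished versions live in the linked repository):

  # On the AP: capture trimmed headers only and stream them off-device
  tcpdump -i mon0 -s 128 -U -w - | nc 192.168.1.250 9000 &

  # On the collecting PC/NAS:
  nc -l -p 9000 > cabin20-$(date +%F).pcap

The small snap length keeps each frame down to radiotap/802.11 metadata, and streaming the capture to another host avoids filling the AP's limited flash.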
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-06-30 19:14 ` bkil @ 2018-07-04 21:47 ` Pete Heist 2018-07-05 13:08 ` Toke Høiland-Jørgensen 2018-07-09 5:13 ` David Lang 2018-07-09 23:33 ` Pete Heist 1 sibling, 2 replies; 56+ messages in thread From: Pete Heist @ 2018-07-04 21:47 UTC (permalink / raw) To: bkil; +Cc: Make-Wifi-fast > On Jun 30, 2018, at 9:14 PM, bkil <bkil.hu+Aq@gmail.com> wrote: > > Dear Pete, > > We understand that you are reluctant to share full radiotap wlan > traces due to privacy reasons and collecting them would strain your > infrastructure still a bit more. Thanks for this idea (creative scripts as well!) but I think even the appearance of sharing the packet traces of guests might cause a problem, so I probably won’t be able to. > Although we can probably give more informed advice based on the > traces, as a stop gap measure until you finish cabling, you may > consider traffic shaping of the clients to improve QoS. For example, > you may put a hard bandwidth cap on each client (or only those coming > from an AP in question), prioritize HTTP & VoIP traffic and reduce P2P > traffic, depending on what is the biggest data hog. I've been playing with that recently actually. I wasn’t able to leave it restricted for long before, but just now I set the per-station limit to 6mbit down / 2mbit up. Some people may not be pleased :), but let’s see what it does for a few days. One thing I’m surprised by is the amount of data going through the VO queue (view in fixed width font for sanity): root@Cabin_28:/sys/kernel/debug/ieee80211/phy0/ath9k# cat xmit BE BK VI VO MPDUs Queued: 218711 5303 435 12846236 MPDUs Completed: 18355992 37807 1672 89811026 MPDUs XRetried: 68553 2409 258 2393713 Aggregates: 28796699 262349 2232 0 AMPDUs Queued HW: 0 0 0 0 AMPDUs Completed: 295460430 2306820 317102 0 AMPDUs Retried: 11666516 120541 3006 0 AMPDUs XRetried: 394366 7493 500 0 TXERR Filtered: 198118 1152 41 1202 FIFO Underrun: 86 0 0 2237 TXOP Exceeded: 0 0 0 0 TXTIMER Expiry: 0 0 0 0 DESC CFG Error: 407 0 0 74 DATA Underrun: 4 0 0 1 DELIM Underrun: 617 0 0 20 TX-Pkts-All: 314279341 2354529 319532 92204739 TX-Bytes-All: 643508167 2628424750 1190327282061782753 HW-put-tx-buf: 94881437 1449351 315153 86013757 HW-tx-start: 0 0 0 0 HW-tx-proc-desc: 96148496 1449537 314472 92007898 TX-Failed: 0 0 0 0 And in the aqm driver support I’m not sure what fq_overmemory signifies (Toke may know that) or if the other stats are within the expected ranges for this amount of traffic. root@Cabin_28:/sys/kernel/debug/ieee80211/phy0# cat aqm access name value R fq_flows_cnt 4096 R fq_backlog 0 R fq_overlimit 1716333 R fq_overmemory 58786914 R fq_collisions 4100715 R fq_memory_usage 0 RW fq_memory_limit 4194304 RW fq_limit 8192 > Also, could you by any chance set up monitoring of some addition > metrics, like CPU usage, I/O wait, load average and memory usage on > your nodes? Here’s a snapshot of a gateway AP that’s under typical contentious load in the evening: Mem: 26908K used, 33628K free, 228K shrd, 356K buff, 4368K cached CPU: 8% usr 19% sys 0% nic 42% idle 0% io 0% irq 29% sirq Load average: 0.32 0.20 0.22 1/62 12265 Servicing software interrupts is always a larger portion of the time, which might be expected. > N.b.: It's a pity that networking trace anonymization tools aren't up > to the challenge. Simple MAC randomization or hashing with data > omission would be just fine for such a use case. I’m also surprised I don’t see an obvious tool to randomize MACs. 
In the case of releasing captures of guest traffic without asking their permission though, I’m not sure any technical measures would be enough to erase the perception problem, but pseudonymization of all possible identifying values would theoretically satisfy GDPR requirements, for example. After that, it would be extremely difficult (maybe not impossible) without extensive external knowledge to identify users from their traffic. ^ permalink raw reply [flat|nested] 56+ messages in thread
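To put those one-off snapshots on a timeline (a minimal sketch; phy0 and the log path are placeholders, and the debugfs files must exist on the build in use), a small background loop on the AP can log the same counters next to the load average:

  while sleep 60; do
    {
      date
      cat /proc/loadavg
      cat /sys/kernel/debug/ieee80211/phy0/aqm
      cat /sys/kernel/debug/ieee80211/phy0/ath9k/xmit
    } >> /tmp/wifi-stats.log
  done &

Correlating the growth of fq_overlimit/fq_overmemory and the per-AC MPDU counters with the SmokePing spikes should show whether the latency peaks line up with queue pressure, CPU load, or neither.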
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-04 21:47 ` Pete Heist @ 2018-07-05 13:08 ` Toke Høiland-Jørgensen 2018-07-05 17:26 ` Pete Heist 2018-07-09 5:13 ` David Lang 1 sibling, 1 reply; 56+ messages in thread From: Toke Høiland-Jørgensen @ 2018-07-05 13:08 UTC (permalink / raw) To: Pete Heist, bkil; +Cc: Make-Wifi-fast Pete Heist <pete@heistp.net> writes: > And in the aqm driver support I’m not sure what fq_overmemory > signifies (Toke may know that) or if the other stats are within the > expected ranges for this amount of traffic. Overmemory is the amount of times the total queue size went over the 4MiB limit. So I'd say that 60 million times is...quite a lot. Is there a lot of traffic that doesn't respond to congestion signals on that link? -Toke ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-05 13:08 ` Toke Høiland-Jørgensen @ 2018-07-05 17:26 ` Pete Heist 2018-07-05 17:37 ` Toke Høiland-Jørgensen 0 siblings, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-07-05 17:26 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: bkil, Make-Wifi-fast > On Jul 5, 2018, at 3:08 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: > > Pete Heist <pete@heistp.net> writes: > >> And in the aqm driver support I’m not sure what fq_overmemory >> signifies (Toke may know that) or if the other stats are within the >> expected ranges for this amount of traffic. > > Overmemory is the amount of times the total queue size went over the > 4MiB limit. So I'd say that 60 million times is...quite a lot. Is there > a lot of traffic that doesn't respond to congestion signals on that > link? Wow, ok. Well, I suspect it’s p2p. I watched some tcpdumps this morning of traffic with non-zero DSCP values and saw quite a lot of UDP packets with these higher DSCP values, which may be aggressive and not respond much to drops. It’s not always easy to classify that traffic to do something about it, but I could try. About 20% of overall traffic is UDP. In response to seeing more traffic in the VO queues, this morning at around 10:30am I actually disabled the rate limiting (so I’m only testing one change at a time) and started zero-ing out DSCP values on all our access points, as well as our main router, with this: "iptables -t mangle -I PREROUTING -j DSCP --set-dscp 0" I’m not holding my breath until I’ve seen a couple nights of data, but ping times have been lower since I made that change (https://www.drhleny.cz/smokeping/, and note that you may need to empty your cache if you’ve viewed this page before, so make sure you’re seeing the latest data). This may not be a long-term solution, but it’s a test. I know it doesn’t prevent people from sending packets to the AP with non-zero DSCP values, so they can still use the VO or other queues going to the APs (I can’t zero those values until the packets are out of the driver as far as I know), but at least traffic coming back from the Internet uses BE when it goes back to devices. I’ve seen in my point-to-point testing how lots of traffic in VO can destroy aggregate throughput. ^ permalink raw reply [flat|nested] 56+ messages in thread
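Two small follow-ups that fit this experiment (a sketch; the bridge interface name is a placeholder, and the second rule is only needed if IPv6 is routed at the site): a capture filter that shows how much traffic still arrives with a non-zero DSCP, and the v6 counterpart of the mangle rule so v6 flows do not slip through:

  tcpdump -n -i br-lan -c 100 'ip[1] & 0xfc != 0'
  ip6tables -t mangle -I PREROUTING -j DSCP --set-dscp 0

The tcpdump expression just tests the six DSCP bits of the IPv4 TOS byte, so it is a cheap way to see which hosts or protocols keep marking packets after the rule is in place.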
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-05 17:26 ` Pete Heist @ 2018-07-05 17:37 ` Toke Høiland-Jørgensen 2018-07-05 18:02 ` Pete Heist 0 siblings, 1 reply; 56+ messages in thread From: Toke Høiland-Jørgensen @ 2018-07-05 17:37 UTC (permalink / raw) To: Pete Heist; +Cc: bkil, Make-Wifi-fast Pete Heist <pete@heistp.net> writes: >> On Jul 5, 2018, at 3:08 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: >> >> Pete Heist <pete@heistp.net> writes: >> >>> And in the aqm driver support I’m not sure what fq_overmemory >>> signifies (Toke may know that) or if the other stats are within the >>> expected ranges for this amount of traffic. >> >> Overmemory is the amount of times the total queue size went over the >> 4MiB limit. So I'd say that 60 million times is...quite a lot. Is there >> a lot of traffic that doesn't respond to congestion signals on that >> link? > > Wow, ok. Well, I suspect it’s p2p. Wouldn't be surprised. Note that packets will be dropped from the longest queue, though, so unresponsive flows just hurt themselves... -Toke ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-05 17:37 ` Toke Høiland-Jørgensen @ 2018-07-05 18:02 ` Pete Heist 2018-07-05 20:17 ` Jonathan Morton 0 siblings, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-07-05 18:02 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: bkil, Make-Wifi-fast > On Jul 5, 2018, at 7:37 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: > > Pete Heist <pete@heistp.net> writes: > >>> On Jul 5, 2018, at 3:08 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote: >>> >>> Pete Heist <pete@heistp.net> writes: >>> >>>> And in the aqm driver support I’m not sure what fq_overmemory >>>> signifies (Toke may know that) or if the other stats are within the >>>> expected ranges for this amount of traffic. >>> >>> Overmemory is the amount of times the total queue size went over the >>> 4MiB limit. So I'd say that 60 million times is...quite a lot. Is there >>> a lot of traffic that doesn't respond to congestion signals on that >>> link? >> >> Wow, ok. Well, I suspect it’s p2p. > > Wouldn't be surprised. Note that packets will be dropped from the > longest queue, though, so unresponsive flows just hurt themselves... Probably not before they’ve wasted airtime though and impacted everyone else... ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-05 18:02 ` Pete Heist @ 2018-07-05 20:17 ` Jonathan Morton 2018-07-09 2:20 ` Aaron Wood 0 siblings, 1 reply; 56+ messages in thread From: Jonathan Morton @ 2018-07-05 20:17 UTC (permalink / raw) To: Pete Heist; +Cc: Toke Høiland-Jørgensen, Make-Wifi-fast > On 5 Jul, 2018, at 9:02 pm, Pete Heist <pete@heistp.net> wrote: > >> Wouldn't be surprised. Note that packets will be dropped from the >> longest queue, though, so unresponsive flows just hurt themselves... > > Probably not before they’ve wasted airtime though and and impacted everyone else... Would it be worth extending the principle of airtime fairness to the QoS queues? Clearly traffic in the VO queue consumes airtime out of all proportion to its actual volume, due to the prohibition on aggregation; this provokes a similar argument to the impact of slow clients on faster ones. I wouldn't worry too much about the links between leaf APs and their clients. Those are probably relatively strong and fast, so BE traffic can get through reasonably well in between the VO traffic. But the AP-to-AP links cover a significant distance and are that much more susceptible to airtime congestion, which VO traffic exacerbates considerably. These APs are also running open-source firmware where we can actually tackle this problem. So there must be a case for deprioritising VO if it's using more than some reasonable share of the available airtime. Oh, and if I find out which BT client has selected a VO-category DSCP by default... >:-( - Jonathan Morton ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-05 20:17 ` Jonathan Morton @ 2018-07-09 2:20 ` Aaron Wood 2018-07-09 5:17 ` Jonathan Morton 0 siblings, 1 reply; 56+ messages in thread From: Aaron Wood @ 2018-07-09 2:20 UTC (permalink / raw) To: Jonathan Morton; +Cc: Pete Heist, Make-Wifi-fast [-- Attachment #1: Type: text/plain, Size: 1801 bytes --] Do the AP-to-AP links use the same packet scheduling as the AP-to-STA links? (especially the prohibition on aggregation for VO, which seems counter-productive on a backhaul link). On Thu, Jul 5, 2018 at 1:17 PM, Jonathan Morton <chromatix99@gmail.com> wrote: > > On 5 Jul, 2018, at 9:02 pm, Pete Heist <pete@heistp.net> wrote: > > > >> Wouldn't be surprised. Note that packets will be dropped from the > >> longest queue, though, so unresponsive flows just hurt themselves... > > > > Probably not before they’ve wasted airtime though and and impacted > everyone else... > > Would it be worth extending the principle of airtime fairness to the QoS > queues? Clearly traffic in the VO queue consumes airtime out of all > proportion to its actual volume, due to the prohibition on aggregation; > this provokes a similar argument to the impact of slow clients on faster > ones. > > I wouldn't worry too much about the links between leaf APs and their > clients. Those are probably relatively strong and fast, so BE traffic can > get through reasonably well in between the VO traffic. > > But the AP-to-AP links cover a significant distance and are that much more > susceptible to airtime congestion, which VO traffic exacerbates > considerably. These APs are also running open-source firmware where we can > actually tackle this problem. So there must be a case for deprioritising > VO if it's using more than some reasonable share of the available airtime. > > Oh, and if I find out which BT client has selected a VO-category DSCP by > default... >:-( > > - Jonathan Morton > > _______________________________________________ > Make-wifi-fast mailing list > Make-wifi-fast@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/make-wifi-fast > [-- Attachment #2: Type: text/html, Size: 2484 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-09 2:20 ` Aaron Wood @ 2018-07-09 5:17 ` Jonathan Morton 2018-07-09 6:27 ` Pete Heist 0 siblings, 1 reply; 56+ messages in thread From: Jonathan Morton @ 2018-07-09 5:17 UTC (permalink / raw) To: Aaron Wood; +Cc: Pete Heist, Make-Wifi-fast > On 9 Jul, 2018, at 5:20 am, Aaron Wood <woody77@gmail.com> wrote: > > Do the AP-to-AP links use the same packet scheduling as the AP-to-STA links? (especially the prohibition on aggregation for VO, which seems counter-productive on a backhaul link). I believe they do, since they go through the same type of MAC grant process. To avoid that, the AP would need to ignore the traffic class when assigning backhaul packets to queues. - Jonathan Morton ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-09 5:17 ` Jonathan Morton @ 2018-07-09 6:27 ` Pete Heist 2018-07-09 12:55 ` Sebastian Moeller 0 siblings, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-07-09 6:27 UTC (permalink / raw) To: Jonathan Morton; +Cc: Aaron Wood, Make-Wifi-fast [-- Attachment #1: Type: text/plain, Size: 1425 bytes --] > On Jul 9, 2018, at 7:17 AM, Jonathan Morton <chromatix99@gmail.com> wrote: > >> On 9 Jul, 2018, at 5:20 am, Aaron Wood <woody77@gmail.com> wrote: >> >> Do the AP-to-AP links use the same packet scheduling as the AP-to-STA links? (especially the prohibition on aggregation for VO, which seems counter-productive on a backhaul link). > > I believe they do, since they go through the same type of MAC grant process. To avoid that, the AP would need to ignore the traffic class when assigning backhaul packets to queues. I’m also almost sure of that. However, since I set all DSCP values to 0 in PREROUTING, most traffic going over the backhauls is now best effort, and I haven’t been able to detect a change in ping times to the APs. The change was made July 5 around 10am and, for example, this is probably our most challenged AP: https://www.drhleny.cz/smokeping/Cabin12.html <https://www.drhleny.cz/smokeping/Cabin12.html> I still think making backhaul traffic best effort makes sense. In my point-to-point tests, I sometimes do this using IPIP tunnels, so the DSCP value of the inner packet is hidden from the WiFi stack, but the value is still maintained after it passes through the tunnel. A drawback of this is that the MTU is shortened by 20 bytes. Tomorrow I’ll be setting up a 5GHz point-to-point backhaul link for Cabin 28 (higher standard cabins), so I expect a significant change there. [-- Attachment #2: Type: text/html, Size: 2146 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
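For what it's worth, the tunnel trick described above can be reproduced with plain iproute2 (a sketch only; the endpoint addresses, device name and tunnel addressing are assumptions):

  ip tunnel add bh0 mode ipip local 192.168.1.2 remote 192.168.1.3 tos 0x00
  ip link set bh0 up mtu 1480
  ip addr add 10.99.0.1/30 dev bh0

Pinning the outer TOS to 0x00 keeps the backhaul frames in the best-effort queue regardless of how the inner packets are marked, at the cost of the 20-byte MTU reduction mentioned above.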
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-09 6:27 ` Pete Heist @ 2018-07-09 12:55 ` Sebastian Moeller 2018-07-09 23:21 ` Pete Heist 0 siblings, 1 reply; 56+ messages in thread From: Sebastian Moeller @ 2018-07-09 12:55 UTC (permalink / raw) To: Pete Heist; +Cc: Jonathan Morton, Make-Wifi-fast Hi Pete, being ignorant on how one sets up a p2p wifi backbone, I wonder whether qos-maps might be useful to effectively disable WMM on a link without remapping the dscp field or tunneling the end-user IP packets? See https://patchwork.kernel.org/patch/3212651/ and hostapd.conf: # QoS Map Set configuration # # Comma delimited QoS Map Set in decimal values # (see IEEE Std 802.11-2012, 8.4.2.97) # # format: # [<DSCP Exceptions[DSCP,UP]>,]<UP 0 range[low,high]>,...<UP 7 range[low,high]> # # There can be up to 21 optional DSCP Exceptions which are pairs of DSCP Value # (0..63 or 255) and User Priority (0..7). This is followed by eight DSCP Range # descriptions with DSCP Low Value and DSCP High Value pairs (0..63 or 255) for # each UP starting from 0. If both low and high value are set to 255, the # corresponding UP is not used. # # default: not set #qos_map_set=53,2,22,6,8,15,0,7,255,255,16,31,32,39,255,255,40,47,255,255 You could simply use qos_map_set=0,63,255,255,255,255,255,255,255,255,255,255,255,255,255,255 to map all dscps to UP0... I have not played with this feature though, so it might not work at all for your purpose. Best Regards Sebastian > On Jul 9, 2018, at 08:27, Pete Heist <pete@heistp.net> wrote: > > >> On Jul 9, 2018, at 7:17 AM, Jonathan Morton <chromatix99@gmail.com> wrote: >> >>> On 9 Jul, 2018, at 5:20 am, Aaron Wood <woody77@gmail.com> wrote: >>> >>> Do the AP-to-AP links use the same packet scheduling as the AP-to-STA links? (especially the prohibition on aggregation for VO, which seems counter-productive on a backhaul link). >> >> I believe they do, since they go through the same type of MAC grant process. To avoid that, the AP would need to ignore the traffic class when assigning backhaul packets to queues. > > I’m also almost sure of that. However, since I set all DSCP values to 0 in PREROUTING, most traffic going over the backhauls is now best effort, and I haven’t been able to detect a change in ping times to the APs. The change was made July 5 around 10am and, for example, this is probably our most challenged AP: https://www.drhleny.cz/smokeping/Cabin12.html > > I still think making backhaul traffic best effort makes sense. In my point-to-point tests, I sometimes do this using IPIP tunnels, so the DSCP value of the inner packet is hidden from the WiFi stack, but the value is still maintained after it passes through the tunnel. A drawback of this is that the MTU is shortened by 20 bytes. > > Tomorrow I’ll be setting up a 5GHz point-to-point backhaul link for Cabin 28 (higher standard cabins), so I expect a significant change there. > _______________________________________________ > Make-wifi-fast mailing list > Make-wifi-fast@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/make-wifi-fast ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-09 12:55 ` Sebastian Moeller @ 2018-07-09 23:21 ` Pete Heist 0 siblings, 0 replies; 56+ messages in thread From: Pete Heist @ 2018-07-09 23:21 UTC (permalink / raw) To: Sebastian Moeller; +Cc: Make-Wifi-fast > On Jul 9, 2018, at 2:55 PM, Sebastian Moeller <moeller0@gmx.de> wrote: > > You could simply use > qos_map_set=0,63,255,255,255,255,255,255,255,255,255,255,255,255,255,255 > to map all dscps to UP0... Thanks, if it works, this should be better than tunneling. :) One thing with using Open Mesh firmware though is that it isn’t always possible to customize the OpenWRT config that easily. Their web-based dashboard can overwrite config files at any time, and to customize it in the official way you have to write a script and submit it for approval, due to "FCC rules" that end users shouldn’t be able to modify "certain WiFi settings". So far I’ve avoided that process by re-applying temporary changes periodically, which isn’t always that robust. It may mean moving to a custom OpenWRT config one day, if I have the time... Pete ^ permalink raw reply [flat|nested] 56+ messages in thread
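The periodic re-application can at least be made a little less fragile (a sketch only; the section names and interval are placeholders, and how the Open Mesh dashboard reacts to it is untested): a cron entry that re-asserts the override only when it has actually been reverted, so the radio is not reloaded needlessly:

  # /etc/crontabs/root
  */15 * * * * [ "$(uci -q get wireless.ap0_1.isolate)" = "1" ] || { uci set wireless.ap0_1.isolate='1'; uci set wireless.ap0_2.isolate='1'; uci commit wireless; wifi; }

Guarding on the current value matters because an unconditional wifi reload every few minutes would briefly drop all associated clients.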
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-04 21:47 ` Pete Heist 2018-07-05 13:08 ` Toke Høiland-Jørgensen @ 2018-07-09 5:13 ` David Lang 1 sibling, 0 replies; 56+ messages in thread From: David Lang @ 2018-07-09 5:13 UTC (permalink / raw) To: Pete Heist; +Cc: bkil, Make-Wifi-fast [-- Attachment #1: Type: text/plain, Size: 1034 bytes --] On Wed, 4 Jul 2018, Pete Heist wrote: > >> N.b.: It's a pity that networking trace anonymization tools aren't up >> to the challenge. Simple MAC randomization or hashing with data >> omission would be just fine for such a use case. > > I’m also surprised I don’t see an obvious tool to randomize MACs. In the case of releasing captures of guest traffic without asking their permission though, I’m not sure any technical measures would be enough to erase the perception problem, but pseudonymization of all possible identifying values would theoretically satisfy GDPR requirements, for example. After that, it would be extremely difficult (maybe not impossible) without extensive external knowledge to identify users from their traffic. When I look at the data from SCaLE, I find that if I truncate the MAC addresses by one byte, there are still very few collisions. In your much more limited situation, I'll bet that you can get away with dropping everything except the last couple of bytes and have the traces be usable. ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-06-30 19:14 ` bkil 2018-07-04 21:47 ` Pete Heist @ 2018-07-09 23:33 ` Pete Heist 2018-07-10 0:39 ` Pete Heist 1 sibling, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-07-09 23:33 UTC (permalink / raw) To: bkil; +Cc: Make-Wifi-fast > On Jun 30, 2018, at 9:14 PM, bkil <bkil.hu+Aq@gmail.com> wrote: > > N.b.: It's a pity that networking trace anonymization tools aren't up > to the challenge. Simple MAC randomization or hashing with data > omission would be just fine for such a use case. I set out to write a “simple” pcap anonymizer today in Go and it went smoothly with Ethernet pcaps containing IP data, but if one wants to cover radiotap + 802.11 plus all other protocols where MACs can appear it's not straightforward. Radiotap is easy to skip, but then for starters MACs appear in 802.11, BATMAN (for mesh nets), EAPOL, DHCP, TDLS, Ethernet and ARP, plus there are LLC headers to skip over. Each of these has various rules for how it expands and contracts based on certain flags. I handled 802.11 well enough with some rules on the frame control field, but when it comes to data frames there’s probably too much to handle for a simple “write it in a day” kind of tool. I did try your scripts and tcpdump + netcat works and seems like a viable technique, though the dumps get large quickly. I’ll still consider if releasing the limited data would be possible, and I appreciate all of your analysis! Pete ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-09 23:33 ` Pete Heist @ 2018-07-10 0:39 ` Pete Heist 2018-07-10 7:02 ` bkil 0 siblings, 1 reply; 56+ messages in thread From: Pete Heist @ 2018-07-10 0:39 UTC (permalink / raw) To: bkil; +Cc: Make-Wifi-fast > On Jul 10, 2018, at 1:33 AM, Pete Heist <pete@heistp.net> wrote: > >> On Jun 30, 2018, at 9:14 PM, bkil <bkil.hu+Aq@gmail.com> wrote: >> >> N.b.: It's a pity that networking trace anonymization tools aren't up >> to the challenge. Simple MAC randomization or hashing with data >> omission would be just fine for such a use case. > > I set out to write a “simple” pcap anonymizer today in Go and it went smoothly with Ethernet pcaps containing IP data, but if one wants to cover radiotap + 802.11 plus all other protocols where MACs can appear it's not straightforward. So perhaps I can still find a snaplen that covers radiotap + 802.11 but not any of the data, or I can randomize any leftover data beyond the 802.11 header. I’ll make one more attempt tomorrow. If it works, it might be easier for analysis than the csv. ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-07-10 0:39 ` Pete Heist @ 2018-07-10 7:02 ` bkil 0 siblings, 0 replies; 56+ messages in thread From: bkil @ 2018-07-10 7:02 UTC (permalink / raw) To: Pete Heist; +Cc: Make-Wifi-fast The CSV can be shaved down a lot by skipping some of the less interesting variable length fields that grow a lot, like the ones related to TIM/BA. I aimed to enumerate any and all fields that we could find any use in, not those fields that are the most interesting. You can experiment with CSVkit or the simple `cut` to see the effect of dropping certain columns. You can also vary snaplen to see the effect on an already made capture by using `editcap` from wireshark-common. It is perfectly legitime to cut short any or all frames shorter than their actual lengths. Tools should handle this, so no need to fill remaining parts with random bytes. Now that you've made a nice pcap anonymizer that is easy to extend, it would be indeed more desirable to share pcap's, especially from a tooling perspective. I've also considered this possibility and snaplength might be workable at a basic level, but you have to go really low. On the card and tcpdump version I tested, the common minimal radiotap size was 30 bytes, hence 58 bytes of snaplength ensured that 0 bytes were left from data frames. You can check this by progressively reducing snaplength and watching data bytes disappear: `tshark -x -V -r dump.pcap` Unfortunately, as radiotap itself is variable length, in case of transmissions with various properties (like HT vs. non-HT), it can get bigger, thus snapping into some interesting fields afterwards. Even with short headers, this snaps in half some mildly interesting frames like beacons. It would be best if we had a smart tool to: * randomize MAC (easiest is hashing with salt or encryption), * only keep radiotap & wlan headers without data, * prune a few sensitive parts from wlan as well (authentication, most beacon IEs, etc.) I've also been following the thread and waiting for anything to pop up because you've been experimenting with some very nice things. I couldn't say for sure, but the IO stat you've shared seems pretty high, especially if it was only a representative value and not a peak. Soft-realtime systems are best left idling - reaching 58% utilization seems like the network could pause pretty often if anything extra comes in. Together with the underflows, I would say that at least part of your problem may not be airtime, but rather CPU time on one or more of your APs/routers when handling peaks, though again both monitoring and airtime investigation could give more insight. CPU time recording is easy (or even forwarding with netcat similar to the pcap examples) https://github.com/bkil/lede-pcap-investigation/blob/master/wrt-cpu-mon.sh Recording key fields inside aqm/xmit could also prove to be useful to correlate with the SmokePing, also a matter of a bit of grep'ping. Although, if this would be the case, shaping should have provided more benefits, but we'll have to think about it. On Tue, Jul 10, 2018 at 2:39 AM, Pete Heist <pete@heistp.net> wrote: > >> On Jul 10, 2018, at 1:33 AM, Pete Heist <pete@heistp.net> wrote: >> >>> On Jun 30, 2018, at 9:14 PM, bkil <bkil.hu+Aq@gmail.com> wrote: >>> >>> N.b.: It's a pity that networking trace anonymization tools aren't up >>> to the challenge. Simple MAC randomization or hashing with data >>> omission would be just fine for such a use case. 
>> >> I set out to write a “simple” pcap anonymizer today in Go and it went smoothly with Ethernet pcaps containing IP data, but if one wants to cover radiotap + 802.11 plus all other protocols where MACs can appear it's not straightforward. > > So perhaps I can still find a snaplen that covers radiotap + 802.11 but not any of the data, or I can randomize any leftover data beyond the 802.11 header. I’ll make one more attempt tomorrow. If it works, it might be easier for analysis than the csv. ^ permalink raw reply [flat|nested] 56+ messages in thread
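Putting those suggestions together, a minimal export pipeline might look like this (a sketch only; the 58-byte snap length, the field list and the awk pseudonymisation are illustrative rather than a tested recipe):

  editcap -s 58 full.pcap trimmed.pcap
  tshark -r trimmed.pcap -T fields -E separator=, \
    -e frame.time_epoch -e wlan.fc.type_subtype -e wlan.sa -e wlan.ta \
    -e radiotap.dbm_antsignal -e radiotap.datarate -e frame.len |
    awk -F, 'BEGIN { OFS = "," }
      { for (i = 3; i <= 4; i++) if ($i != "") { if (!($i in map)) map[$i] = "sta" (++n); $i = map[$i] } print }' |
    xz > export.csv.xz

The truncation keeps only radiotap/802.11 headers, the tshark field list pulls out timing, frame type, addresses, signal and rate, and the awk stage replaces the two MAC columns with stable pseudonyms before compressing.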
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-06-13 13:24 ` Toke Høiland-Jørgensen 2018-06-13 16:01 ` Pete Heist @ 2018-06-13 16:30 ` Sebastian Moeller 2018-06-13 17:50 ` Toke Høiland-Jørgensen 1 sibling, 1 reply; 56+ messages in thread From: Sebastian Moeller @ 2018-06-13 16:30 UTC (permalink / raw) To: Toke Høiland-Jørgensen; +Cc: Pete Heist, bkil, make-wifi-fast Hi Toke, > On Jun 13, 2018, at 15:24, Toke Høiland-Jørgensen <toke@toke.dk> wrote: > > Pete Heist <pete@eventide.io> writes: > >> Trying one thing at a time, as legacy_rates=‘1’ didn’t do anything >> noticeable, which is not surprising when we probably don’t have many >> ‘b' devices connecting. We hit 4.9s ping times this morning, good >> work. :) >> >> Even though isolation couldn’t be turned on in the admin interface >> (because it says it can’t do it when bridging to a VLAN, for some >> reason), I was able to enable isolation for both SSIDs and it still >> seems to work, so that’s the change as of now: >> >> wireless.ap0_1.isolate='1' >> wireless.ap0_2.isolate=‘1’ >> >> Before I could ping between clients, and now I can’t, so apparently >> isolation is doing what it should. The next test will be tonight. >> >> I’m still waiting for the digging / cabling project to happen, which >> is what I expect to bring the biggest benefit. Again, that will add a >> new cabled AP which will serve as a gateway for cabin 12 and split >> cabins 12 and 20. This should also improve the signal to 12 vastly, >> which is one of the most loaded APs and currently only has RSSI -71 / >> MCS 5 or 6 to its parent AP, now that the leaves are full on the >> trees. > > How to improve WiFi? Run cables! :D Just consider this to be a cheap* way to create a favorable RF environment for the signals ;) Best Regards Sebastian *) arguably cheaper than cleaning the Fresnel zone of the wifi link and putting a Faraday cage around it > > -Toke > _______________________________________________ > Make-wifi-fast mailing list > Make-wifi-fast@lists.bufferbloat.net > https://lists.bufferbloat.net/listinfo/make-wifi-fast ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes 2018-06-13 16:30 ` Sebastian Moeller @ 2018-06-13 17:50 ` Toke Høiland-Jørgensen 0 siblings, 0 replies; 56+ messages in thread From: Toke Høiland-Jørgensen @ 2018-06-13 17:50 UTC (permalink / raw) To: Sebastian Moeller; +Cc: Pete Heist, bkil, make-wifi-fast Sebastian Moeller <moeller0@gmx.de> writes: > Ho Toke, > >> On Jun 13, 2018, at 15:24, Toke Høiland-Jørgensen <toke@toke.dk> wrote: >> >> Pete Heist <pete@eventide.io> writes: >> >>> Trying one thing at a time, as legacy_rates=‘1’ didn’t do anything >>> noticeable, which is not surprising when we probably don’t have many >>> ‘b' devices connecting. We hit 4.9s ping times this morning, good >>> work. :) >>> >>> Even though isolation couldn’t be turned on in the admin interface >>> (because it says it can’t do it when bridging to a VLAN, for some >>> reason), I was able to enable isolation for both SSIDs and it still >>> seems to work, so that’s the change as of now: >>> >>> wireless.ap0_1.isolate='1' >>> wireless.ap0_2.isolate=‘1’ >>> >>> Before I could ping between clients, and now I can’t, so apparently >>> isolation is doing what it should. The next test will be tonight. >>> >>> I’m still waiting for the digging / cabling project to happen, which >>> is what I expect to bring the biggest benefit. Again, that will add a >>> new cabled AP which will serve as a gateway for cabin 12 and split >>> cabins 12 and 20. This should also improve the signal to 12 vastly, >>> which is one of the most loaded APs and currently only has RSSI -71 / >>> MCS 5 or 6 to its parent AP, now that the leaves are full on the >>> trees. >> >> How to improve WiFi? Run cables! :D > > Just consider this to be a cheap* way to create a favorable RF > environment for the signals ;) Oh, I am well aware of this: https://blog.tohojo.dk/2017/11/building-a-wireless-testbed-with-wires.html -Toke ^ permalink raw reply [flat|nested] 56+ messages in thread
[parent not found: <CADuVhRWL2aVjzjfLHg1nPFa8Ae-hWrGrE7Wga4eUKon3oqoTXA@mail.gmail.com>]
* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes [not found] ` <CADuVhRWL2aVjzjfLHg1nPFa8Ae-hWrGrE7Wga4eUKon3oqoTXA@mail.gmail.com> @ 2018-06-30 19:26 ` bkil 2018-06-30 20:04 ` Jannie Hanekom 0 siblings, 1 reply; 56+ messages in thread From: bkil @ 2018-06-30 19:26 UTC (permalink / raw) To: Jannie Hanekom, make-wifi-fast Dear Jannie, Thanks for the words of caution. I've just now noticed that you've only sent your reply to me, but let me forward it to the list as well. I just made that dtim_period example up. I don't have a hard recommendation about it other than the more you increase it the more power you save (up to a point when your devices loose sync) and at the same time the more broadcast latency you add, so producing a dtim at least once a second is probably a good idea. That isn't far from the recommendation you suggest. Indeed there can exist defective devices sensitive to beacon_int, though I wouldn't worry too much for such a small increment. For example, on a passive scanning device, I've noticed that the time of discovery increased from 1-2 second to up to 10, but all was fine otherwise. mcast_rate is consumed by wpa_supplicant and only applies to IBSS and 11s configurations: https://github.com/lede-project/source/blob/96f4792fdb036ecf5c8417fce6503412b0b27e5f/package/kernel/mac80211/files/lib/netifd/wireless/mac80211.sh#L604 https://github.com/lede-project/source/blob/96f4792fdb036ecf5c8417fce6503412b0b27e5f/package/kernel/mac80211/files/lib/netifd/wireless/mac80211.sh#L617 I think multicast-to-unicast is already enabled by default in OpenWrt/LEDE, but do check. Do note that most contention and wastage should probably be caused by the clients and not the AP. The air time savings, if any, should come from the following factors: * beacon, probe response, ACK, power saving and other management frames, * less general overhead (like preamble and spacing), * reduced bandwidth causing less interference to neighboring channels and vice versa, also allowing for 4 channels, * working around ill rate schedulers that interpret loss as noise instead of interference (faster retries, fail instead of 1M, smaller probability of further collision), * reduce range (against sticky clients), thereby facilitate higher average rate and lower average air time. The symbols corresponding to the preambles are transmitted at a fixed low rate, and not the basic rate. I.e., if you set the mandatory/basic rates of your AP to only contain 54Mb, other stations could still decode the length of the transmission so they can refrain from medium access for that duration. Thus increasing the basic rate from 6 to 12Mb would not have any negative effect on this aspect. Decoding of the beacons is anyway only useful for associated (or associating) stations, not for outsiders (except for some cool new IE's). https://www.revolutionwifi.net/revolutionwifi/2011/03/understanding-wi-fi-carrier-sense.html https://mrncciew.com/2014/10/14/cwap-802-11-phy-ppdu/ https://flylib.com/books/en/2.519.1/erp_physical_layer_convergence_plcp_.html http://divdyn.com/so-called-ghost-frames-not-exist/ Anyway, if a station couldn't decode a frame, they can still use the CCA energy detector that is about 15-20dB less sensitive. If the exposed node problem also shows in the deployment in question, this could actually be an advantage. 
I think that the single frequency backhaul should be the one mostly being hidden from many of the clients, so many decode errors and collisions would happen against this link too, not between stations that are very close to each other around a cabin. A wilder guess is that stations between the two cabins could be interfering with each other a bit more. For maximal wireless power saving, it is best to always use the highest, single-stream modulation possible preferably on a wide channel, and definitely not the lowest rates. This is because it is equally true for the chipset, radio DSP and other supporting hardware that they consume a fairly constant power over time, so you should operate them for the shortest amount of time possible for both transmission and reception. http://www.ruf.rice.edu/~mobile/elec518/lectures/3-wireless.pdf http://static.usenix.org/event/hotpower/tech/full_papers/Halperin.pdf http://eurosys2011.cs.uni-salzburg.at/pdf/eurosys2011-pathak.pdf The record is 14.5 uW @ 1Mb/s and 59.2 uW @ 11Mb/s for backscatter, but the cost/benefit ratio still favors the faster speeds (i.e., disregarding many power hungry components): https://www.usenix.org/system/files/conference/nsdi16/nsdi16-paper-kellogg.pdf There is one exception: I usually see a bit lower power requirement in datasheets corresponding to 802.11b rates and a few simpler modulation schemes. However, the overall system energy use will probably be greater using these slow rates for the same number of bits transferred. http://cdn.viaembedded.com/eol_products/docs/vnt6656/datasheet/VIA+VNT6656_datasheet_v130306.pdf https://www.ti.com/pdfs/bcg/80211_wp_lowpower.pdf There can exist pathological cases involving very short packets not filling up the number of constructed symbols efficiently, but I guess these should not skew the statistics a lot. Also, many wifi chipsets have a calibration table describing the maximal TX power per rate at which the output signal is clean enough. Higher rates usually allow for a lower maximal TX power, so as you increase rate, you may sometimes need to reduce TX power as well, thus reducing power consumption a bit. http://www.seeedstudio.com/document/word/WT8266-S1%20DataSheet%20V1.0.pdf I've also noticed iw station dump indicating that many devices idle at very low rates, but isn't this just because of the power saving packets and management frames? I don't think that they reduce rate on purpose to save power. It is easy to check this with Wireshark, though. Cheers On Tue, Jun 12, 2018 at 5:22 PM, Jannie Hanekom <jannie@hanekom.co.za> wrote: > Disclaimer: what I know about low-level WiFi is perhaps somewhat dangerous, > and I'm certainly not a developer. I have however implemented a few > corporate wireless solutions by different vendors, and have mucked about > with a number of personal OpenWRT projects over the past decade. > >> option dtim_period 5 # cheap power saving > I'm told Apple suggests 3. I'm not sure why. As a corporate wireless guy, > I trust Andrew von Nagy on that advice: > https://twitter.com/revolutionwifi/status/725489216768106496 > >> option beacon_int 200 # power/bandwidth saving > Additional suggestion: Beacon Interval is a per-SSID setting. Consider > leaving it at defaults (100) for "client-facing" SSIDs and set it to higher > values for your Mesh SSIDs. Just in case of compatibility issues... (I'm > not aware of any, but I've never really tried.) 
Cheers

On Tue, Jun 12, 2018 at 5:22 PM, Jannie Hanekom <jannie@hanekom.co.za> wrote:
> Disclaimer: what I know about low-level WiFi is perhaps somewhat dangerous,
> and I'm certainly not a developer. I have, however, implemented a few
> corporate wireless solutions by different vendors, and have mucked about
> with a number of personal OpenWRT projects over the past decade.
>
>> option dtim_period 5 # cheap power saving
> I'm told Apple suggests 3. I'm not sure why. As a corporate wireless guy,
> I trust Andrew von Nagy on that advice:
> https://twitter.com/revolutionwifi/status/725489216768106496
>
>> option beacon_int 200 # power/bandwidth saving
> Additional suggestion: Beacon Interval is a per-SSID setting. Consider
> leaving it at the default (100) for "client-facing" SSIDs and setting it to
> higher values for your mesh SSIDs, just in case of compatibility issues...
> (I'm not aware of any, but I've never really tried.)
>
>> legacy_rates=0 seems to be an alias for enabling all g-rates and disabling
>> b-rates
> Just my 2c to the $1,000,000 already contributed: absolutely go for it.
> I've never had any issues disabling legacy 11b rates in the corporate and
> hospitality world, or on my personal projects. It's one of the first things
> I disable on any project I undertake.
>
> Also look at mcast_rate and/or multicast_to_unicast. Multicasts are, by
> default, supposed to be sent at the lowest basic rate IIRC, just like
> beacons. There shouldn't be much multicast on most networks in terms of
> volume, but things like mDNS do exist and are quite prevalent. Depending on
> what you find when you sniff, there may be merit to tinkering with those.
>
> I have no direct experience of this, but I'm told one should be careful not
> to set the slowest basic_rate too high (i.e. higher than 6Mbps). The reason
> is that a client (or another AP) seeing the signal at -80dB may still be
> able to decode a 6Mbps beacon and apply normal WiFi co-existence niceties,
> but may not be able to decode a 12Mbps beacon, causing it to identify the
> signal as non-WiFi and back off more aggressively.
>
> The other reason is that many devices select the lowest basic rate under
> sleep conditions in order to save battery power. I'm not sure what the
> impact would be if one sets a much higher basic rate.
>
> From reading OpenWRT forum posts over the years, most people who set the
> basic_rate higher than 6Mbps do so in an attempt to get rid of "sticky"
> clients. I can't remember the rationale exactly, but setting the basic_rate
> higher is unlikely to address that problem, and one should rather rely on
> other mechanisms.
>
> Also, beyond 6Mbps, the airtime gains from reducing the time it takes to
> transmit beacons diminish greatly. Nice calculator:
> http://www.revolutionwifi.net/revolutionwifi/p/ssid-overhead-calculator.html
>
> Jannie
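To put rough numbers on that last beacon airtime point (a back-of-the-envelope
sketch, assuming a roughly 300-byte beacon; real beacons vary with the IEs
present):

  awk 'BEGIN {
    L = 300                              # assumed beacon length in bytes
    printf "1 Mb/s DSSS (long preamble): %4d us\n", 192 + L*8
    for (r = 6; r <= 24; r *= 2) {       # OFDM rates 6, 12, 24 Mb/s
      ndbps = r * 4                      # data bits per 4 us OFDM symbol
      nsym  = int((16 + 8*L + 6 + ndbps - 1) / ndbps)  # SERVICE+data+tail
      printf "%2d Mb/s OFDM:                %4d us\n", r, 20 + 4*nsym
    }
  }'

This prints roughly 2592, 424, 224 and 124 us: going from 1 to 6 Mb/s saves
about 2.2 ms per beacon, while going from 6 to 12 Mb/s only saves another
0.2 ms or so, which is the diminishing return mentioned above.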
^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [Make-wifi-fast] mesh deployment with ath9k driver changes
  2018-06-30 19:26 ` bkil
@ 2018-06-30 20:04   ` Jannie Hanekom
  0 siblings, 0 replies; 56+ messages in thread
From: Jannie Hanekom @ 2018-06-30 20:04 UTC (permalink / raw)
  To: 'bkil', 'Jannie Hanekom', make-wifi-fast

That's awesome :-) Thanks for the *incredibly* carefully-crafted response -
particularly for the references. Learning more from that than from years of
reading mundane forum posts. Will take a while to digest.

^ permalink raw reply	[flat|nested] 56+ messages in thread
end of thread, other threads:[~2018-07-10  7:02 UTC | newest]

Thread overview: 56+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-04-24  8:33 [Make-wifi-fast] mesh deployment with ath9k driver changes Pete Heist
2018-04-24 11:54 ` Toke Høiland-Jørgensen
2018-04-24 13:37 ` Pete Heist
2018-04-24 13:51 ` Toke Høiland-Jørgensen
2018-04-24 14:09 ` Pete Heist
2018-04-24 14:34 ` Toke Høiland-Jørgensen
2018-04-24 19:10 ` Pete Heist
2018-04-24 21:32 ` Toke Høiland-Jørgensen
2018-04-25  6:05 ` Pete Heist
2018-04-25  6:36 ` Sebastian Moeller
2018-04-25 17:17 ` Pete Heist
2018-04-26  0:41 ` David Lang
2018-04-26 19:40 ` Pete Heist
2018-04-26  0:38 ` David Lang
2018-04-26 21:41 ` Pete Heist
2018-04-26 21:44 ` Sebastian Moeller
2018-04-26 21:56 ` Pete Heist
2018-04-26 22:04 ` David Lang
2018-04-26 22:47 ` Pete Heist
2018-04-27 10:15 ` Toke Høiland-Jørgensen
2018-04-27 10:32 ` Pete Heist
2018-04-26  0:35 ` David Lang
2018-04-27 11:42 ` Valent Turkovic
2018-04-27 11:50 ` Pete Heist
2018-04-27 11:59 ` Valent Turkovic
2018-04-27 12:17 ` Pete Heist
2018-04-27 11:47 ` Valent Turkovic
2018-04-27 12:00 ` Pete Heist
2018-05-19 16:03 bkil
2018-05-20 18:56 ` Pete Heist
2018-05-31  0:52 ` David Lang
2018-06-08  9:37 ` Pete Heist
2018-06-09 15:32 ` bkil
2018-06-13 13:07 ` Pete Heist
2018-06-13 13:24 ` Toke Høiland-Jørgensen
2018-06-13 16:01 ` Pete Heist
2018-06-30 19:14 ` bkil
2018-07-04 21:47 ` Pete Heist
2018-07-05 13:08 ` Toke Høiland-Jørgensen
2018-07-05 17:26 ` Pete Heist
2018-07-05 17:37 ` Toke Høiland-Jørgensen
2018-07-05 18:02 ` Pete Heist
2018-07-05 20:17 ` Jonathan Morton
2018-07-09  2:20 ` Aaron Wood
2018-07-09  5:17 ` Jonathan Morton
2018-07-09  6:27 ` Pete Heist
2018-07-09 12:55 ` Sebastian Moeller
2018-07-09 23:21 ` Pete Heist
2018-07-09  5:13 ` David Lang
2018-07-09 23:33 ` Pete Heist
2018-07-10  0:39 ` Pete Heist
2018-07-10  7:02 ` bkil
2018-06-13 16:30 ` Sebastian Moeller
2018-06-13 17:50 ` Toke Høiland-Jørgensen
     [not found] ` <CADuVhRWL2aVjzjfLHg1nPFa8Ae-hWrGrE7Wga4eUKon3oqoTXA@mail.gmail.com>
2018-06-30 19:26 ` bkil
2018-06-30 20:04 ` Jannie Hanekom