[Make-wifi-fast] Flent results for point-to-point Wi-Fi on LEDE/OM2P-HS available
dave.taht at gmail.com
Mon Jan 30 18:21:40 EST 2017
Very exhaustive work, thank you.
Part of the backstory of how I got involved in the bufferbloat effort:
I deployed some shiny "new" and "faster" wireless-n radios (6 years
ago)... and my WISP network in Nicaragua collapsed in rain - about 6
months after I'd made the cutover from g and from Motorola's radios.
The signal strength was within expected parameters, and the achieved
rates were half to a third of what they were when dry, but latencies
climbed to over 30 seconds in some cases. I had *no* idea what was
going wrong at the time, and it wasn't until 6 months after I closed
the business that I ran into Jim Gettys, and the rest is history.
I never got around to writing it up; I just gave a couple of talks on
the subject and moved on to fixing bufferbloat thoroughly. We got
distracted by trying to solve it on the ISP backbones before tackling
wifi this past year.
OpenWrt Chaos Calmer generally should work "pretty good" on 802.11n
p2p links with fq_codel at the qdisc layer, a bit of tuning of
qlen_be (I deployed values of 12-24 on the yurtlab campus), and
802.11e turned off. Lacking ATF and good queue management closer to
the radio, p2mp did not work well with Chaos Calmer.
With the new stuff now landing in LEDE, I still recommend turning off
802.11e and relying entirely on fq_codel down there, and I have high
hopes that basic p2mp will work well for up to X stations.
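As a concrete sketch of that tuning on an OpenWrt/LEDE node - the
interface name, the uci section index, and the ath9k debugfs path are
assumptions from my own setups, so adjust them for your hardware:

```shell
# Disable 802.11e/WMM on the wifi interface (wmm is the standard
# OpenWrt wifi-iface option; @wifi-iface[0] is an assumed index)
uci set wireless.@wifi-iface[0].wmm='0'
uci commit wireless
wifi

# On Chaos Calmer, put fq_codel at the qdisc layer on the wifi device
tc qdisc replace dev wlan0 root fq_codel

# ...and shrink the ath9k best-effort driver queue into the 12-24
# range mentioned above (path assumes an ath9k radio on phy0)
echo 12 > /sys/kernel/debug/ieee80211/phy0/ath9k/qlen_be
```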
... As for the performance of OpenMesh being pretty good... they were
the first group to test the fq_codel intermediate queues and the ATF
code, way back in September :) - it's not clear to me whether that's
what you were testing, or whether they have shipped an update yet.
Your 20-30ms p2p latency performance on the rrul_be test is consistent
with the results we get from the fq_codel intermediate queue patches,
which keep two ~4ms aggregates near the hardware. Applying a qdisc on
top should make little to no difference, as all the real work gets
done there.
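For a sense of why those numbers line up, a back-of-the-envelope
check (the per-aggregate airtime and in-flight count below are my
assumptions, drawn from the two ~4ms aggregates described above):

```python
# Rough latency floor for the fq_codel-in-mac80211 design on a
# bidirectionally loaded p2p link (assumed numbers, not measurements)
AGG_TIME_MS = 4.0      # airtime of one aggregate (assumed)
AGGS_BUFFERED = 2      # aggregates kept in flight near the hardware

one_way_ms = AGG_TIME_MS * AGGS_BUFFERED  # ~8 ms queued per direction
rtt_floor_ms = 2 * one_way_ms             # both directions under load

print(rtt_floor_ms)  # -> 16.0
```

A ~16ms queuing floor, plus the codel target and ordinary MAC
overhead, lands plausibly in the measured 20-30ms range.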
A good test to run is to lower the MCS rates (set the rateset
manually, add distance, and/or rain) and see how flat latency
remains. It is an even better test of real-world conditions if you
can get some reports back on the actual MCS rates being achieved in
the field and use those.
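One way to run that test - the device name, MCS index, and server
hostname here are placeholders, not from the original post:

```shell
# Pin the 2.4 GHz rateset to a single low HT MCS (iw's set bitrates
# command; ht-mcs-2.4 1 forces MCS 1 on the assumed wlan0 device)
iw dev wlan0 set bitrates ht-mcs-2.4 1

# Then measure latency under load from a test host, running the
# rrul_be flent test for 60s against a netperf server across the link
flent rrul_be -H netperf.example.com -l 60 -t "p2p-mcs1"
```

Compare the latency traces at the pinned rate against the full-rate
run; with working queue management they should stay nearly as flat.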
It would be my hope that 802.11e is off (rrul will show this, and we
still do badly with it on).
You can probably, within this deployment, shape the uplinks to some
fairly low value and get good performance most of the time.
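A minimal shaping sketch with htb + fq_codel - the 8mbit figure is an
assumption (pick something comfortably below your worst-case achieved
rate), and wlan0 is a placeholder device:

```shell
# Shape the uplink to a fixed low rate so queues build in fq_codel
# rather than in the driver/firmware below it
tc qdisc replace dev wlan0 root handle 1: htb default 10
tc class add dev wlan0 parent 1: classid 1:10 htb rate 8mbit ceil 8mbit
tc qdisc add dev wlan0 parent 1:10 fq_codel
```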
I do not have any real hope of being able to make wifi better with
soft shapers. It's a Gordian knot - you need to respond rapidly to
changes in rate, both up and down, mix up flows thoroughly, optimize
your aggregates to go to one station at a time and then switch to the
next - and that's what we got with codel close to the hardware, as it
is now in LEDE.
If it helps any, this is the best later talk I've given on these subjects:
UBNT's gear (commonly used by WISPs) has some neat tricks to manage
things better; when I last took apart their QoS system, it was an
insane set of SFQs within SFQs.
And as we have more radio-dense environments now, retries are
becoming an issue.