[Cake] [Make-wifi-fast] Flent results for point-to-point Wi-Fi on LEDE/OM2P-HS available

Dave Taht dave.taht at gmail.com
Mon Jan 30 18:55:59 EST 2017


* Question #16: Is there any other testing anyone would like to see
while I have this rig up?

Please!

1) ECN enabled on both sides.
2) A VoIP test
3) P2MP (3 or more stations, rtt_fair_var* tests)
4) Lowered MCS rates, greater distance, or rain

Of these, the last will improve the accuracy and relevance of your
testbench results the most.
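
For points 1 and 3, a rough sketch of the sort of run I mean (the
hostnames and test length are placeholders, and check which of the
rtt_fair_var* test names your flent version actually ships):

    # on both endpoints: negotiate and request ECN for TCP
    sysctl -w net.ipv4.tcp_ecn=1

    # one rtt_fair_var-style run against several stations at once
    flent rtt_fair_var -l 60 -t "p2mp-ecn" \
          -H station1 -H station2 -H station3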

* Question #15: FreeNet's link to the Internet is 1 Gbps, and AFAIK
does not experience saturation. The e1000 ethernet driver that the
Internet router uses supports BQL. Is there any sense in running
fq_codel or similar on the router, if it does not become saturated?

You don't need queue management until you need queue management. A
basic flaw of mrtg's design, visible in your graph here, is that it
takes samples in 5-minute windows. If you are totally nuking your link
for 1 second out of every 5 and it is nearly idle the rest of the
time, you won't see it. A flent test from their offices to somewhere
nearby over that link will be revealing.

In general, applying fq_codel to the outbound side of a BQL-enabled
system is always a win; it costs next to nothing in CPU to do it that
way. Depending on what your measurements show for inbound traffic, you
might also want to do inbound rate shaping....
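
To make that concrete, here's a minimal sketch (the eth0/ifb0 names
and the 900mbit rate are placeholders, and it assumes the cake qdisc
is installed; sqm-scripts automates the same thing):

    # outbound: fq_codel directly on the BQL-enabled interface
    tc qdisc replace dev eth0 root fq_codel

    # inbound: redirect ingress traffic to an IFB device and shape it
    # a bit below line rate, so the queue forms where we control it
    ip link add ifb0 type ifb
    ip link set ifb0 up
    tc qdisc add dev eth0 handle ffff: ingress
    tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
        action mirred egress redirect dev ifb0
    tc qdisc replace dev ifb0 root cake bandwidth 900mbit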

* Question #14: Not directly related to this testing, how might
Google's new BBR Congestion Control change the approach to managing
bufferbloat in general?

I don't know. I've shown that BBR interacts badly with single-queued
AQMs, but fq_codel and cake deal with it without issues.

http://blog.cerowrt.org/flent/bbr-comprehensive/ has some early tests
in it, but I wouldn't call that dataset definitive. In particular on
some tests I was only running BBR in one direction and not the other.
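
For anyone wanting to reproduce that sort of run: on a 4.9+ kernel the
switch is just the per-host congestion control sysctl, which is also
how you end up with BBR in only one direction if you set it on only
one endpoint:

    # on whichever endpoint(s) should send with BBR
    modprobe tcp_bbr
    sysctl -w net.ipv4.tcp_congestion_control=bbr

    # back to the default
    sysctl -w net.ipv4.tcp_congestion_control=cubic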

* Question #13: The gains found here are not as high as they typically
are for ADSL, where I have seen a 10x improvement in RTT under load.
The main question: is it worth the extra effort to do rate limiting
and queue management for point-to-point Wi-Fi? Will the gains be
greater when I test with Ubiquiti Wi-Fi hardware, which may or may not
have queue management built into the driver? Hopefully I'll have more
information on that last question soon.

The gains are relative to the bandwidth and the amount of fixed
buffering in the radio. For example, I can get 320 Mbit/s out of the
Archer C7v2's ath10k, with 10 ms latency, at 8 feet. At 20 feet and
through a wall, it's 200 Mbit/s with 100 ms of latency. I don't like
the initial shape of that curve! (What I typically see happen on
802.11ac is that it hits a wall and stops working entirely.)

I happen to like ubnt's gear, although their default firmware only
lasts 5 minutes for me these days. They have very good throughput and
latency characteristics with decent signal strength. I look forward to
a benchmark of what you can get from them. (I am still looking for a
good upgrade from the nanostation M5s)

* Question #12: What's the reason for Cake's occasional sudden
throughput shifts, and why do its latencies tend to be higher than
fq_codel's?

We are still debugging cake. Recently the API got a bit whacked. The
AQM is currently tuned for DSL speeds. More data is needed.

* Question #10: I expected the latency to get at least slightly lower
with a lower target/interval. What's the reason it doesn't?

The queue is mostly being controlled in the radio in your tests, I
think.

* Question #11: In a multi-hop Wi-Fi link, should target and interval
be tuned only for the current link, or for the nature of the traffic
being carried? In other words, 5ms / 100ms is the default, and
typically recommended for Internet traffic. Should that be kept for a
single, lower-latency intermediate link in an ISP's backhaul because
it's carrying Internet traffic, or should lower values theoretically
be used?

Very good question. For a p2p link connecting two offices, I can see
trying to tune it lower. I don't know. Keep it where it is for "to the
internet". Also, multi-hop wifi has only been somewhat explored in a
couple of papers so far; my hope was to tackle it again after
everything stabilized....
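
If someone does experiment with tuning it down, the knobs are
fq_codel's target and interval; the interface name and figures below
are placeholders for illustration rather than a recommendation, and
none of it matters while the queue actually lives in the radio's own
buffers:

    # hypothetical lower settings for a short, fixed-RTT backhaul hop
    tc qdisc replace dev wlan0 root fq_codel target 1ms interval 10ms

    # the defaults, for comparison
    tc qdisc replace dev wlan0 root fq_codel target 5ms interval 100ms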

* On the pfifo analysis - I really enjoyed the Rush song, and it was
very appropriate for how pfifo misbehaves!

* Question #9: Does my assertion make sense, that it's "better" to do
half-duplex queueing on only one end of the link? The side towards the
Internet?

Usually the traffic coming from the Internet dominates by a factor of
8 to 20, so the Internet-facing side is a worthwhile place to shape.
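
A minimal sketch of what that could look like at the Internet-facing
end (the wlan0 name and 90mbit figure are placeholders; you'd set the
rate somewhat below what the radio link actually sustains, and it
assumes the cake qdisc is installed):

    # shape traffic heading out over the p2p wifi link at the router
    # on the Internet side, so the queue builds here, not in the radio
    tc qdisc replace dev wlan0 root cake bandwidth 90mbit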

I ran out of time before tackling questions #8 through #1....

On Mon, Jan 30, 2017 at 1:21 PM, Pete Heist <peteheist at gmail.com> wrote:
> Hi, I’ve posted some Flent results and analysis for point-to-point Wi-Fi
> using LEDE on OM2P-HS (ath9k):
>
> http://www.drhleny.cz/bufferbloat/wifi_bufferbloat.html
>
> Over 500 runs were done using different configurations, as the effects of
> various changes were explored. In case someone has the time to respond,
> there are a number of questions in red. Whatever feedback I get I’ll try to
> incorporate into a final document. Also, if I’ve made any false assertions
> I’d love to hear about it.
>
> As described on the page, I’m doing this to try to help my cooperative WISP
> here in Czech, and also in case it helps the Bufferbloat project somehow.
> Although much of the new Wi-Fi work is occurring at the driver level, I hope
> it’s not misguided to still explore using the qdisc layer for Wi-Fi,
> particularly in cases where a new Wi-Fi driver can’t be deployed.
>
> These first results are from LEDE on the OM2P-HS, but I hope to get some
> test devices from my WISP soon to test the hardware they’re using (Ubiquiti
> Wi-Fi devices with custom Voyage Linux routers running on a PCengines APU).
>
> Regards,
> Pete
>
>
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast



-- 
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org

