[Make-wifi-fast] Flent results for point-to-point Wi-Fi on LEDE/OM2P-HS available

Pete Heist peteheist at gmail.com
Thu Feb 9 03:35:02 EST 2017

> On Feb 8, 2017, at 5:35 PM, Dave Taht <dave.taht at gmail.com> wrote:
> On Wed, Feb 8, 2017 at 8:11 AM, Toke Høiland-Jørgensen <toke at toke.dk> wrote:
>> It's not so much running the test *from* the router, as it is having the
>> server component (netserver) run on a router to test. To do that it'll
>> have to be C, basically...
> I note that I usually run tests *through* the router rather than *to*
> the router, and most of my targets for this have evolved to be arm
> (odroid c2), and x86 platforms, so I could get past a gbit.

Me too so far. The OM2P-HS (520 MHz MIPS 74Kc, similar to the NSM5 I’m about to test) was unable to max out the link rate even with iperf3, so I knew I had to run flent on separate hardware. Right now I have a 4-node config:

flent-client/router — station — AP — router/flent-server

When I test the Alix router, which is pretty low end, I’ll go to a 6-node config, because I don’t think it can handle running flent while also routing and managing the queue:

flent-client — router — station — AP — router — flent-server

It seems that flent should be run on sufficiently overpowered hardware (basically any even semi-modern Intel hardware) relative to what’s being tested. Maybe there are cases where it needs to run on embedded devices that I’m not thinking of.

> Of note I've always regarded exposing d-itg and some sort of monotonic
> voip-like test to the internet as too dangerous without a solid 3 way
> handshake, and leveraging existing voip and webrtc/videoconferencing
> servers (like freeswitch) way too hard to setup and measure. (there
> are plenty of tools to generate rtp-like packets).

I haven’t gotten d-itg to run so far. I haven’t looked into it much, but for some reason it seems like ITGRecv only wants to listen for signaling (port 9000) on tcp6, even with an explicit bind address:

sysadmin@mbp:~$ sudo ITGRecv -a
ITGRecv version 2.8.1 (r1023)
Compile-time options: sctp dccp bursty multiport
Press Ctrl-C to terminate
sysadmin@mbp:~$ netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 localhost:domain        *                       LISTEN     
tcp        0      0           *                       LISTEN     
tcp        0      0 mbp:ssh               ESTABLISHED
tcp        0    172 mbp:ssh               ESTABLISHED
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN     
tcp6       0      0 [::]:12865              [::]:*                  LISTEN     
tcp6       0      0 [::]:9000               [::]:*                  LISTEN     
udp        0      0 localhost:domain        *                          
udp        0      0        *                          

> I am unfond of the current ping and udp tests in the rrul component
> because they don't look like these traffic types, and drawing
> conclusions from their behavior as if they were is wrong. Also, with
> very low RTTs, the measurement flows tend to skew the tests - you'll
> consistently see "lower bandwidth" measured when you cut an RTT from
> 100 to 1ms via active queue management, when what is really happening
> is 100x more measurement traffic.
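To make the scaling Dave describes concrete, here’s a back-of-the-envelope sketch; the packet size and flow count are made-up numbers, not values taken from flent:

```python
# If each measurement flow sends one probe packet per RTT
# (ping-like), measurement load scales inversely with RTT.
pkt_bytes = 100          # assumed probe packet size
flows = 4                # e.g. a handful of rrul measurement flows

def probe_rate_bps(rtt_s):
    """Measurement traffic in bits/s for ping-like flows."""
    return flows * pkt_bytes * 8 / rtt_s

slow = probe_rate_bps(0.100)   # 100 ms RTT
fast = probe_rate_bps(0.001)   # 1 ms RTT after AQM
print(round(fast / slow))      # ~100x more measurement traffic
```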

I’ve been wondering about this: how seriously to take the RTT from rrul tests in general. When I’m making config changes that move the RTT by a few milliseconds here or there, I’m not convinced I’m not missing other externalities, which so far are probably only accessible by reading the tea leaves. For example, when sfq and fq_codel come out quite close in my half-duplex rate-limiting results, at least in terms of average RTT, are they really? Aren’t there test cases where I can better demonstrate that fq_codel is “better” than sfq?

Ultimately it will come down to how well fq_codel (or Cake) behaves in the real world, and I’m not even sure how to measure that. In FreeNet, I can get regular ICMP ping results between their routers and look at them after a day’s time, for example, but will I really have everything I need to know I’ve made a difference? And furthermore, if Ubiquiti is prioritizing ICMP (I intend to find that out), are those ICMP results even useful? I may have to measure RTT with regular best-effort UDP packets between the routers, and even then, I want to know when I see latency spikes whether I’m looking at bufferbloat or radio related issues, so I’ll need other stats as well. They have Ubiquiti’s airControl deployed, but my sense is that it’s going to be easy to deploy fq_codel, and not very easy to demonstrate if and how it’s helped!
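A minimal sketch of the best-effort-UDP idea, with a local echo thread standing in for a remote router (the addresses here are placeholders, not FreeNet’s actual setup):

```python
import socket, threading, time

# Measure RTT with plain best-effort UDP instead of ICMP, so any
# ICMP prioritization on the path doesn't skew the numbers.

def udp_echo(sock):
    # Stand-in for a remote router echoing our probe back.
    data, addr = sock.recvfrom(1500)
    sock.sendto(data, addr)

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(('', 0))           # ephemeral port for the demo
threading.Thread(target=udp_echo, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2)
t0 = time.monotonic()
cli.sendto(b'probe', ('', srv.getsockname()[1]))
cli.recvfrom(1500)
rtt_ms = (time.monotonic() - t0) * 1000
print(f"RTT: {rtt_ms:.3f} ms")
```

In practice I’d run something like this periodically between routers and log the results alongside radio stats, so latency spikes can be attributed to bufferbloat vs. radio issues.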
