[Make-wifi-fast] Flent test hardware

Isaac Konikoff isaac.konikoff at gmail.com
Mon Nov 6 15:15:51 EST 2017

You can run flent/iperf/netperf client and server on the same box using a
candelatech kernel and then bind to specific interfaces.


flent example:
eth1 to DUT(AP LAN side)
wlan0 to DUT(AP wireless)

flent -H <server-ip> --local-bind <local-ip> --swap-up-down -x tcp_download -l 120
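A minimal sketch of that same-box flent setup, assuming netserver is bound to the wired address and flent to the wireless one. The 192.168.1.x addresses and interface assignments are placeholders, not from the original message:

```shell
#!/bin/sh
# Hedged sketch: same-box flent test through the DUT (AP).
# Addresses below are made-up placeholders -- substitute your own.
LAN_IP=192.168.1.10    # eth1, AP LAN side (server end)
WLAN_IP=192.168.1.50   # wlan0, AP wireless side (client end)

# Server side (same box): netperf's netserver listening on the wired address:
#   netserver -L "$LAN_IP"

# Client side: flent bound to the wireless address, as in the example above.
FLENT_CMD="flent -H $LAN_IP --local-bind $WLAN_IP --swap-up-down -x tcp_download -l 120"
echo "$FLENT_CMD"
```

The commands are echoed rather than executed so the script is safe to dry-run on a box without flent installed.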

iperf example:

iperf upload test (server bound to the wired side, client to wireless):
iperf -s -B <eth1-ip> -i10
iperf -c <eth1-ip> -B <wlan0-ip> -i10 -t120

iperf download test (bindings swapped so traffic flows the other way):
iperf -s -B <wlan0-ip> -i10
iperf -c <wlan0-ip> -B <eth1-ip> -i10 -t120
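The same pair of tests can be sketched as a script. The 192.168.1.x addresses are placeholders for the bind arguments that were stripped from the commands above:

```shell
#!/bin/sh
# Hedged sketch: iperf upload/download pair on one box, binding server and
# client to separate interfaces. Addresses are made-up placeholders.
ETH1_IP=192.168.1.10    # eth1, AP LAN side
WLAN0_IP=192.168.1.50   # wlan0, AP wireless side

# Upload test: server bound to eth1, client bound to wlan0 (client sends).
UP_SERVER="iperf -s -B $ETH1_IP -i10"
UP_CLIENT="iperf -c $ETH1_IP -B $WLAN0_IP -i10 -t120"

# Download test: swap the bindings so traffic flows toward the wireless side.
DOWN_SERVER="iperf -s -B $WLAN0_IP -i10"
DOWN_CLIENT="iperf -c $WLAN0_IP -B $ETH1_IP -i10 -t120"

echo "$UP_SERVER"; echo "$UP_CLIENT"
echo "$DOWN_SERVER"; echo "$DOWN_CLIENT"
```

Run the server command in one terminal (or backgrounded) and the client in another; the -B binds are what force traffic through the DUT instead of the loopback.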

On Sun, Nov 5, 2017 at 5:57 AM, Pete Heist <peteheist at gmail.com> wrote:

> On Nov 5, 2017, at 2:42 AM, Bob McMahon <bob.mcmahon at broadcom.com> wrote:
>> I have some brix with Realtek and run ptpd installed with Fedora 25.
>> The corrections are in the 25 microsecond range, though there are
>> anomalies. These are used for wifi DUTs that go into RF enclosures.
>> [root at hera ~]# tail -n 1 /var/log/ptpd2.stats
>> 2017-11-04 18:33:46.723476, slv, 0cc47afffea87386(unknown)/1,
>> 0.000000000, -0.000018381,  0.000000000, -0.000018463, 1528.032750001, S,
>> 0.000000000, 0, -0.000018988, 1403, 1576, 17, -0.000018463,  0.000000000
>> For LAN/WAN traffic, I tend to use the Intel quad-port server adapters in a
>> Supermicro motherboard desktop with 8 or more real cores. (I think the data
>> center class machines are worth it.)
>
> Thanks for the info. I was wondering how large the PTP error would be with
> software timestamps, and I see it’s not bad for most purposes.
> Which Realtek Linux driver does your brix use, and is it stable? The r8169
> driver’s BQL support was reverted at some point and it doesn’t look like
> that has changed.
>
> I trust that the extra cores can help, particularly for tests with high
> flow counts, but my project budget won’t allow it, and used hardware is too
> much to think about at the moment.
>
> Do you (or anyone) know of any problems with running the Flent client and
> server on the same box? In the case of the Proliant Microserver, the
> Broadcom 5720 adapter should have separate PCI data paths for each NIC. I
> guess the bottleneck will still mainly be the CPU. To get some idea of
> what's possible on my current hardware, I tried running rrul_be_nflows
> tests with the Flent client and server on the same box, through its local
> adapter (with MTU set to 1500) with my current Mac Mini (2.26 GHz Core2 Duo
> P7550). I know that doesn’t predict how it will work over Ethernet, but
> it’s a start.
>
> https://docs.google.com/spreadsheets/d/1MVxGsreiGKNXhfkMIheNFrH_GVllFfiH9RU5ws5l_aY/edit#gid=1583696271
>
> Although total throughput is pretty good for a low-end CPU, I’m not sure
> I’d trust the results above 64/64 flows. 256/256 flows was an epic fail,
> but I won’t be doing that kind of test.
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast