[Make-wifi-fast] Flent test hardware
bob.mcmahon at broadcom.com
Mon Nov 6 16:29:27 EST 2017
Just to be clear, speaking for iperf 2, the binding isn't to an interface
but to an IP address. See this for a description.
Linux supports SO_BINDTODEVICE, but it's not straightforward due to things
like ARP, so I didn't add support for it.
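The distinction can be sketched in a few lines of Python (not iperf code; a minimal sketch, assuming a Linux host where the socket option is number 25 and the loopback device is named "lo"):

```python
import socket

# Bind to an IP *address* (what iperf 2's -B option does):
# unprivileged and portable across operating systems.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))            # any local address; port 0 = ephemeral
addr, port = s.getsockname()
print(f"bound to {addr}:{port}")

# Bind to an *interface* via SO_BINDTODEVICE: Linux-specific, and it has
# historically required elevated privilege (CAP_NET_RAW), one reason it is
# less convenient for a portable tool.
SO_BINDTODEVICE = getattr(socket, "SO_BINDTODEVICE", 25)  # 25 on Linux
try:
    s.setsockopt(socket.SOL_SOCKET, SO_BINDTODEVICE, b"lo\0")
    print("interface bind accepted")
except OSError:
    print("interface bind refused (insufficient privilege or unsupported OS)")
s.close()
```

Note that binding to an address also sidesteps the ARP-related subtleties of device binding: the kernel picks the route and resolves neighbors for the chosen source address as usual.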
On Mon, Nov 6, 2017 at 12:15 PM, Isaac Konikoff <isaac.konikoff at gmail.com>
wrote:
> You can run flent/iperf/netperf client and server on the same box using a
> candelatech kernel and then bind to specific interfaces.
> flent example:
> eth1 192.168.1.2 to DUT (AP LAN side)
> wlan0 192.168.1.3 to DUT (AP wireless)
> flent -H 192.168.1.3 --local-bind 192.168.1.2 --swap-up-down -x tcp_download -l 120
> iperf example:
> eth1 192.168.86.103
> wlan0 192.168.86.101
> iperf upload test
> iperf -s -B 192.168.86.103 -i10
> iperf -c 192.168.86.103 -B 192.168.86.101 -i10 -t120
> iperf download test
> iperf -s -B 192.168.86.101 -i10
> iperf -c 192.168.86.101 -B 192.168.86.103 -i10 -t120
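Isaac's server/client pairing above can be mimicked with plain sockets: one UDP socket bound to the "server" address, another bound to the "client" address (loopback stands in for the real eth1/wlan0 addresses in this sketch). One caveat, which is presumably why a Candelatech kernel is specified: on a stock Linux kernel, traffic between two addresses of the same host is routed internally over loopback regardless of which addresses the sockets bind to, so it never exercises the physical interfaces.

```python
import socket

# "Server", analogous to: iperf -s -B 192.168.86.103
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))        # loopback placeholder for eth1's address
srv.settimeout(2)
server_addr = srv.getsockname()

# "Client", analogous to: iperf -c 192.168.86.103 -B 192.168.86.101
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.bind(("127.0.0.1", 0))        # loopback placeholder for wlan0's address
cli.sendto(b"payload", server_addr)

data, peer = srv.recvfrom(1024)
print(data.decode(), "from", peer[0])   # payload from 127.0.0.1
srv.close()
cli.close()
```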
> On Sun, Nov 5, 2017 at 5:57 AM, Pete Heist <peteheist at gmail.com> wrote:
>> On Nov 5, 2017, at 2:42 AM, Bob McMahon <bob.mcmahon at broadcom.com> wrote:
>> I have some Brix units with Realtek NICs running ptpd, installed with
>> Fedora 25. The corrections are in the 25 microsecond range, though there
>> are anomalies. These are used for wifi DUTs that go into RF enclosures.
>> [root at hera ~]# tail -n 1 /var/log/ptpd2.stats
>> 2017-11-04 18:33:46.723476, slv, 0cc47afffea87386(unknown)/1,
>> 0.000000000, -0.000018381, 0.000000000, -0.000018463, 1528.032750001,
>> S, 0.000000000, 0, -0.000018988, 1403, 1576, 17, -0.000018463, 0.000000000
>> For LAN/WAN traffic, I tend to use the Intel quad-port server adapters
>> in a Supermicro motherboard desktop with 8 or more real cores. (I think
>> the data-center-class machines are worth it.)
>> Thanks for the info. I was wondering how large the PTP error would be
>> with software timestamps, and I see it’s not bad for most purposes.
>> Which Realtek Linux driver does your brix use, and is it stable? The
>> r8169 driver’s BQL support was reverted at some point and it doesn’t look
>> like that has changed.
>> I trust that the extra cores can help, particularly for tests with high
>> flow counts, but my project budget won’t allow it, and used hardware is too
>> much to think about at the moment.
>> Do you (or anyone) know of any problems with running the Flent client and
>> server on the same box? In the case of the ProLiant MicroServer, the
>> Broadcom 5720 adapter should have separate PCI data paths for each NIC. I
>> guess the bottleneck will still mainly be the CPU. To get some idea of
>> what's possible on my current hardware, I tried running rrul_be_nflows
>> tests with the Flent client and server on the same box, through its local
>> adapter (with MTU set to 1500) with my current Mac Mini (2.26 GHz Core2 Duo
>> P7550). I know that doesn’t predict how it will work over Ethernet, but
>> it’s a start.
>> Although total throughput is pretty good for a low-end CPU, I’m not sure
>> I’d trust the results above 64/64 flows. 256/256 flows was an epic fail,
>> but I won’t be doing that kind of test.
>> Make-wifi-fast mailing list
>> Make-wifi-fast at lists.bufferbloat.net