[Cerowrt-devel] performance numbers from WRT1200AC (Re: Latest build test - new sqm-scripts seem to work; "cake overhead 40" didn't)

Dave Taht dave.taht at gmail.com
Fri Jun 26 14:24:43 EDT 2015


Mikael, a simple test of the analysis I just did would be to use
ethtool to set your server to 100 Mbit (ethtool -s
your_ethernet_device advertise 0x008) and turn on fq_codel on both the
client and server.

If it is using classification for the hw mq, the rrul test from the
client will blow up half the queues. If it is doing hw fq instead,
the rrul_50_up test will blow up all 8 queues.

For giggles, try 0x002 (10mbit)
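A minimal sketch of that setup, assuming the server's interface is eth0 (substitute your own device name; both commands need root):

```shell
# Force the link down to 100 Mbit full duplex by advertising only that mode
# (0x008 = 100baseT/Full in ethtool's advertise bitmask; 0x002 = 10baseT/Full)
ethtool -s eth0 advertise 0x008

# Replace the root qdisc with fq_codel -- do this on both client and server
tc qdisc replace dev eth0 root fq_codel
```

Then run the rrul and rrul_50_up flent tests from the client and compare per-queue behavior.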

My own attempts to get a compile for this platform consistently blow
up here, similarly for both musl and uclibc:

checking for arm-openwrt-linux-muslgnueabi-gcc...
/build/cero3/src/ac1900/build_dir/toolchain-arm_cortex-a9+vfpv3_gcc-4.8-linaro_musl-1.1.10_eabi/gcc-linaro-4.8-2014.04-minimal/./gcc/xgcc
-B/build/cero3/src/ac1900/build_dir/toolchain-arm_cortex-a9+vfpv3_gcc-4.8-linaro_musl-1.1.10_eabi/gcc-linaro-4.8-2014.04-minimal/./gcc/
-B/build/cero3/src/ac1900/staging_dir/toolchain-arm_cortex-a9+vfpv3_gcc-4.8-linaro_musl-1.1.10_eabi/arm-openwrt-linux-muslgnueabi/bin/
-B/build/cero3/src/ac1900/staging_dir/toolchain-arm_cortex-a9+vfpv3_gcc-4.8-linaro_musl-1.1.10_eabi/arm-openwrt-linux-muslgnueabi/lib/
-isystem /build/cero3/src/ac1900/staging_dir/toolchain-arm_cortex-a9+vfpv3_gcc-4.8-linaro_musl-1.1.10_eabi/arm-openwrt-linux-muslgnueabi/include
-isystem /build/cero3/src/ac1900/staging_dir/toolchain-arm_cortex-a9+vfpv3_gcc-4.8-linaro_musl-1.1.10_eabi/arm-openwrt-linux-muslgnueabi/sys-include
checking for suffix of object files... configure: error: in
`/build/cero3/src/ac1900/build_dir/toolchain-arm_cortex-a9+vfpv3_gcc-4.8-linaro_musl-1.1.10_eabi/gcc-linaro-4.8-2014.04-minimal/arm-openwrt-linux-muslgnueabi/libgcc':
configure: error: cannot compute suffix of object files: cannot compile


On Fri, Jun 26, 2015 at 10:04 AM, Dave Taht <dave.taht at gmail.com> wrote:
> On Fri, Jun 26, 2015 at 9:35 AM, Jonathan Morton <chromatix99 at gmail.com> wrote:
>> These would be hardware tail drops - there might not be a physical counter
>> recording them. But you could instrument the driver to see whether the
>> receive buffer is full when serviced.
>
> from drivers/net/ethernet/marvell/mvneta.c:
>
> /* Max number of Rx descriptors */
> #define MVNETA_MAX_RXD 128
>
> this is probably too small, especially given the 64 packets it is
> willing to wait for. At the same time, it is too large, as there are
> 8 hardware queues in play here. So you get a huge burst from one
> flow, and GRO batches it all together.... aggghh...
>
> /* Max number of Tx descriptors */
> #define MVNETA_MAX_TXD 532
>
> this realllllly needs BQL. Same problem(s). Only worse.
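
A back-of-envelope calculation shows why an unmanaged 532-descriptor TX ring hurts at the 100 Mbit rate suggested above (assuming worst-case full-size 1514-byte Ethernet frames, one per descriptor):

```shell
# Worst-case bytes sitting in the mvneta TX ring: 532 full-size frames
tx_bytes=$((532 * 1514))
echo "$tx_bytes bytes in TX ring"        # prints "805448 bytes in TX ring"

# Drain time at 100 Mbit/s = 100,000 bits per millisecond
echo "$(( tx_bytes * 8 / 100000 )) ms"   # prints "64 ms"

# The 128-descriptor RX ring, by comparison:
rx_bytes=$((128 * 1514))
echo "$(( rx_bytes * 8 / 100000 )) ms"   # prints "15 ms"
```

That is roughly 64 ms of potential TX buffering below the qdisc layer, which is exactly the kind of standing queue BQL exists to limit.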
>
>
>>
>> - Jonathan Morton
>>
>>
>> _______________________________________________
>> Cerowrt-devel mailing list
>> Cerowrt-devel at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>
>



-- 
Dave Täht
worldwide bufferbloat report:
http://www.dslreports.com/speedtest/results/bufferbloat
And:
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast
