My main question was why BQL support is not in the driver. It has been listed
on the bloated drivers page for a long time, but with no real explanation of
why it remains so. I guess no interested party has the equipment to test and
verify whether it works correctly.

I will try to compile my own kernel with the patch applied (the current one
has no codel compiled in) and see if any issues occur. I know that TSO
support is totally broken with this driver (see:
http://archlinuxarm.org/forum/viewtopic.php?f=9&t=7692&start=20 ). If only
it didn't take a day to compile :\

On Wed, Jul 29, 2015, 10:42 Dave Taht wrote:
> On Wed, Jul 29, 2015 at 7:07 PM, David Lang wrote:
> > On Wed, 29 Jul 2015, Alan Jenkins wrote:
> >
> >> On 29/07/15 12:24, Alan Jenkins wrote:
> >>>
> >>> On 29/07/15 05:32, Rosen Penev wrote:
> >>>>
> >>>> Anyone know what the situation is with kirkwood and BQL? I found a
> >>>> patch for it but have no idea if there are any issues.
> >>>>
> >>>> I have such a system but have no idea how to ascertain the efficacy
> >>>> of BQL.
> >>>
> >>> To the latter:
> >>>
> >>> BQL works for transmissions that reach the full line rate (e.g. for
> >>> 1000Mbit Ethernet). It limits the queue that builds in the
> >>> driver/device to the minimum they need. Queue then mostly builds in
> >>> the generic networking stack, where it can be managed effectively,
> >>> e.g. by fq_codel.
> >>>
> >>> So a simple efficacy test is to run a transmission at full speed and
> >>> monitor latency (ping) at the same time. Just make sure the device
> >>> qdisc is set to fq_codel. fq_codel effectively prioritizes ping, so
> >>> the difference will be very easy to see.
> >>>
> >>> I don't know if there are any corner cases that want testing as well.
> >
> > BQL adjusts the number of packets that can be queued based on their
> > size, so you can have far more 64-byte packets queued than you can
> > have 1500-byte packets.
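As an aside, the adaptive limit David describes can be watched directly: BQL
exposes its per-queue state under sysfs, and the `byte_queue_limits`
directory only exists when the driver supports BQL. A minimal sketch
(assuming the standard sysfs layout; interface names will vary per system):

```shell
#!/bin/sh
# Report, for each network interface, whether its driver exposes BQL.
# A driver with BQL support has a byte_queue_limits directory per TX queue.
for dev in /sys/class/net/*; do
    name=$(basename "$dev")
    bql="$dev/queues/tx-0/byte_queue_limits"
    if [ -d "$bql" ]; then
        # 'limit' is the byte limit BQL has currently settled on;
        # 'inflight' is how many bytes are queued in the driver/NIC now.
        echo "$name: BQL limit=$(cat "$bql/limit") inflight=$(cat "$bql/inflight")"
    else
        echo "$name: no BQL"
    fi
done
```

Watching `limit` while ping-flooding with different packet sizes should show
it adapting, which is one way to check the behaviour described above.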
> > Do a ping flood of your network with different packet sizes and look
> > at the queue lengths that are allowed; the queue length should be much
> > higher with small packets.
> >
> >>> BQL can be disabled at runtime for comparison testing:
> >>> http://lists.openwall.net/netdev/2011/12/01/112
> >>>
> >>> There's a BQL tool to see it working graphically (using readouts
> >>> from the same sysfs directory):
> >>> https://github.com/ffainelli/bqlmon
> >>>
> >>> My Kirkwood setup at home is weak; I basically never reach full link
> >>> speed. So this might be somewhat academic unless you set the link
> >>> speed to 100 or 10 using the ethtool command. (It seems like a good
> >>> idea to test those speeds even if you can do better, though.) You
> >>> probably also want to start with offloads (tso, gso, gro) disabled
> >>> using ethtool, because they aggregate packets.
> >>
> >> A quick test with a 100M setting, connected to a gigabit switch, and
> >> flent tcp_download shows ping under load increases to about 8 ms.
> >> Conclusion: the Debian kirkwood kernel probably isn't doing BQL for
> >> me :).
>
> Wrong way, I think. Try tcp_upload.
>
> > 8 ms of latency under load is doing very well. What are you expecting?
> >
> > David Lang
>
> >>> Flent can do this test and generate pretty graphs, including a time
> >>> series (plot type "all_scaled") and a frequency distribution for the
> >>> ping ("ping_cdf"). Flent is a frontend to the netperf network
> >>> performance tester. You could use a directly connected laptop and
> >>> run your own netperf server (netserver command). You'll need to set
> >>> up static IPs on both ends for the duration... if headless, make
> >>> sure you have alternative console access :).
> >>>
> >>> The normal Flent test is RRUL, which is two-way. tcp_2up would be
> >>> better, to avoid testing both ends' BQL at the same time.
> >>> If you want to run tcp_2up the other way round, so you only need
> >>> netserver on the ARM, try using '--swap-up-down'.
> >>>
> >>> Alan
> >>
> >> _______________________________________________
> >> Bloat mailing list
> >> Bloat@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
>
> --
> Dave Täht
> worldwide bufferbloat report:
> http://www.dslreports.com/speedtest/results/bufferbloat
> And:
> What will it take to vastly improve wifi for everyone?
> https://plus.google.com/u/0/explore/makewififast
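For anyone wanting to try the test Alan describes, the steps could be
sketched as a script along these lines. This is only a sketch: eth0 and
192.168.1.2 are placeholder assumptions, the commands need root, and a
netserver must already be running on the far end (here, the ARM box, per
Alan's '--swap-up-down' suggestion).

```shell
#!/bin/sh
# Sketch of the latency-under-load BQL test described above.
# Assumptions: run on a directly connected laptop; eth0 is its NIC;
# 192.168.1.2 is the kirkwood box running netserver.
# Set DRY_RUN=1 to print the commands instead of executing them.
IFACE=${IFACE:-eth0}
HOST=${HOST:-192.168.1.2}

run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

# Force a speed the hardware can actually saturate, so BQL has work to do.
run ethtool -s "$IFACE" speed 100 duplex full autoneg off
# Disable aggregation offloads, which would otherwise batch packets.
run ethtool -K "$IFACE" tso off gso off gro off
# Make sure queue that builds in the stack is managed by fq_codel.
run tc qdisc replace dev "$IFACE" root fq_codel
# tcp_2up with --swap-up-down makes the remote (ARM) end do the sending,
# exercising its TX path and hence its BQL.
run flent tcp_2up --swap-up-down -H "$HOST" -t kirkwood-bql
```

With fq_codel in place and BQL working, ping under load in the resulting
plots should stay low; with BQL absent, the driver queue is unmanaged and
latency should climb.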