<p dir="ltr">My main question was why BQL support is not in the driver. I've seen it listed on the bloated drivers page for a long time but no real explanation why it remains so. I guess no interested party has the equipment to test and verify if it works correctly. I will try to compile my own kernel with the patch applied( current one has no codel compiled in) and see if any issues occur. I know that TSO support is totally broken with this driver (see: <a href="http://archlinuxarm.org/forum/viewtopic.php?f=9&t=7692&start=20">http://archlinuxarm.org/forum/viewtopic.php?f=9&t=7692&start=20</a> ). If only it didn't take a day to compile :\.</p>
On Wed, Jul 29, 2015, 10:42 Dave Taht <dave.taht@gmail.com> wrote:

On Wed, Jul 29, 2015 at 7:07 PM, David Lang <david@lang.hm> wrote:
> On Wed, 29 Jul 2015, Alan Jenkins wrote:
>
>> On 29/07/15 12:24, Alan Jenkins wrote:
>>>
>>> On 29/07/15 05:32, Rosen Penev wrote:
>>>>
>>>> Anyone know what the situation is with kirkwood and BQL? I found a
>>>> patch for it but have no idea if there are any issues.
>>>>
>>>> I have such a system but have no idea how to ascertain the efficacy of
>>>> BQL.
>>>
>>>
>>> To the latter:
>>>
>>> BQL works for transmissions that reach the full line rate (e.g. for
>>> gigabit ethernet). It limits the queue that builds in the driver/device to
>>> the minimum it needs. The queue then mostly builds in the generic networking
>>> stack, where it can be managed effectively, e.g. by fq_codel.
>>>
>>> So a simple efficacy test is to run a transmission at full speed, and
>>> monitor latency (ping) at the same time. Just make sure the device qdisc is
>>> set to fq_codel. fq_codel effectively prioritizes ping, so the difference
>>> will be very easy to see.
>>>
>>> I don't know if there are any corner cases that want testing as well.
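>>>
>>> A minimal sketch of that test (untested here; eth0 and 192.168.1.2 stand
>>> in for your interface and a host running netperf's netserver):
>>>
>>>   # put fq_codel on the device, then saturate it while pinging
>>>   tc qdisc replace dev eth0 root fq_codel
>>>   netperf -H 192.168.1.2 -t TCP_STREAM -l 60 &
>>>   ping 192.168.1.2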
>
>
> BQL adjusts the number of packets that can be queued based on their size, so
> you can have far more 64 byte packets queued than you can have 1500 byte
> packets.
>
> Do a ping flood of your network with different packet sizes and look at the
> queue lengths that are allowed; the queue length in packets should be much
> higher with small packets.
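>
> Something along these lines should show it (a rough sketch; eth0 and
> 192.168.1.2 are placeholders, and flood ping needs root):
>
>   # flood with near-MTU packets, then repeat with e.g. -s 64, and compare
>   ping -f -s 1472 192.168.1.2 &
>   watch cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit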
>
>>> BQL can be disabled at runtime for comparison testing:
>>> http://lists.openwall.net/netdev/2011/12/01/112
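>>>
>>> If I read that post right, pinning limit_min high effectively switches
>>> BQL off (the exact value is arbitrary, and eth0 is a placeholder):
>>>
>>>   # effectively disable BQL; write 0 back to restore the default behaviour
>>>   echo 100000000 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_min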
>>>
>>> There's a BQL tool to see it working graphically (using readouts from the
>>> same sysfs directory):
>>> https://github.com/ffainelli/bqlmon
>>>
>>> My Kirkwood setup at home is weak; I basically never reach full link
>>> speed. So this might be somewhat academic unless you set the link speed to
>>> 100 or 10 using the ethtool command. (It seems like a good idea to test
>>> those speeds even if you can do better, though.) You probably also want to
>>> start with offloads (tso, gso, gro) disabled using ethtool, because they
>>> aggregate packets.
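>>>
>>> Something like this, assuming eth0 (a sketch; some NICs want autoneg
>>> handled differently):
>>>
>>>   # force 100M, then turn off the aggregating offloads
>>>   ethtool -s eth0 speed 100 duplex full autoneg off
>>>   ethtool -K eth0 tso off gso off gro off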
>>>
>>
>> a quick test with a 100M setting, connected to a gigabit switch, running
>> flent tcp_download, shows ping under load increasing to about 8 ms.
>> Conclusion: the Debian kirkwood kernel probably isn't doing BQL for me :).

Wrong way, I think (tcp_download exercises the other end's transmit queue, not yours). Try tcp_upload.
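
e.g. something like this (192.168.1.2 stands in for the netperf server on
the far end):

  flent tcp_upload -H 192.168.1.2 -l 60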

>
> 8ms of latency under load is doing very well. What are you expecting?
>
> David Lang
>
>
>>> Flent can do this test and generate pretty graphs, including a time
>>> series (plot type "all_scaled") and a frequency distribution for the ping
>>> ("ping_cdf"). Flent is a frontend to the netperf network performance
>>> tester. You could use a directly connected laptop and run your own netperf
>>> server (the netserver command). You'll need to set up static IPs on both ends
>>> for the duration... if headless, make sure you have alternative
>>> console access :).
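>>>
>>> Roughly (a sketch; the 10.0.0.0/24 addresses and the data file name are
>>> placeholders):
>>>
>>>   # on the laptop: static address for the test, then start netperf's server
>>>   ip addr add 10.0.0.1/24 dev eth0
>>>   netserver
>>>   # after a flent run, re-plot from the saved data file
>>>   flent -i <datafile>.flent.gz -p all_scaled -o all_scaled.png
>>>   flent -i <datafile>.flent.gz -p ping_cdf -o ping_cdf.png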
>>>
>>> The normal Flent test is RRUL, which is two-way. tcp_2up would be
>>> better, to avoid testing both ends' BQL at the same time. If you want to
>>> run tcp_2up the other way round, so you only need netserver on the ARM, try
>>> using '--swap-up-down'.
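>>>
>>> i.e. something like (10.0.0.2 standing in for the ARM box running
>>> netserver):
>>>
>>>   flent tcp_2up --swap-up-down -H 10.0.0.2 -l 60 -t kirkwood-bql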
>>>
>>> Alan

--
Dave Täht
worldwide bufferbloat report:
http://www.dslreports.com/speedtest/results/bufferbloat
And:
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast