[Bloat] Kirkwood BQL?
Dave Taht
dave.taht at gmail.com
Wed Jul 29 13:42:17 EDT 2015
On Wed, Jul 29, 2015 at 7:07 PM, David Lang <david at lang.hm> wrote:
> On Wed, 29 Jul 2015, Alan Jenkins wrote:
>
>> On 29/07/15 12:24, Alan Jenkins wrote:
>>>
>>> On 29/07/15 05:32, Rosen Penev wrote:
>>>>
>>>> Anyone know what the situation is with kirkwood and BQL? I found a
>>>> patch for it but have no idea if there are any issues.
>>>>
>>>> I have such a system but have no idea how to ascertain the efficacy of
>>>> BQL.
>>>
>>>
>>> To the latter:
>>>
>>> BQL works for transmissions that reach the full line rate (e.g.
>>> gigabit Ethernet). It limits the queue that builds in the driver/device
>>> to the minimum it needs. The queue then mostly builds in the generic
>>> networking stack, where it can be managed effectively, e.g. by fq_codel.
>>>
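You can also check whether the driver has BQL at all by looking in sysfs.
A quick sketch, with the interface and queue names adjusted for your box
(on kirkwood the ethernet driver is mv643xx_eth, if memory serves):

  # a byte_queue_limits directory per tx queue means the driver supports BQL
  ls /sys/class/net/eth0/queues/tx-0/byte_queue_limits/
  # expect: hold_time  inflight  limit  limit_max  limit_min
  cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit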
>>> So a simple efficacy test is to run a transmission at full speed, and
>>> monitor latency (ping) at the same time. Just make sure the device qdisc is
>>> set to fq_codel. fq_codel effectively prioritizes ping, so the difference
>>> will be very easy to see.
>>>
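Roughly, on the kirkwood box (addresses are placeholders, and this assumes
netperf's netserver is already running on the other machine):

  # make sure the interface qdisc is fq_codel
  tc qdisc replace dev eth0 root fq_codel
  # saturate the transmit path for a minute ...
  netperf -H 192.168.1.2 -t TCP_STREAM -l 60 &
  # ... and watch latency at the same time
  ping 192.168.1.2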
>>> I don't know if there are any corner cases that want testing as well.
>
>
> BQL adjusts the number of packets that can be queued based on their size, so
> you can have far more 64-byte packets queued than you can have 1500-byte
> packets.
>
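Right: the limit BQL arrives at is in bytes rather than packets, so a limit
of ~100 kB (just an illustrative number) is only ~66 full 1500-byte frames
but well over a thousand 64-byte ones. You can watch the limit adapt while
traffic runs:

  # the dynamic byte limit for tx queue 0
  watch -n 0.2 cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit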
> Do a ping flood of your network with different packet sizes and look at the
> queue lengths that are allowed; the number of packets allowed in the queue
> should be much higher with small packets.
>
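Something like this, as a sketch (flood ping needs root; -s sets the ICMP
payload size, and the address is a placeholder):

  ping -f -s 56   192.168.1.2     # small packets
  ping -f -s 1400 192.168.1.2     # near-MTU packets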
>>> BQL can be disabled at runtime for comparison testing:
>>> http://lists.openwall.net/netdev/2011/12/01/112
>>>
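If I remember the sysfs interface right, pinning limit_min at "max"
effectively disables BQL for a queue, and writing 0 back restores the
normal dynamic behaviour:

  # (assumption from memory: the sysfs file accepts "max" as a keyword)
  echo max > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_min
  # ... run the comparison test, then restore ...
  echo 0 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_min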
>>> There's a BQL tool to see it working graphically (using readouts from the
>>> same sysfs directory):
>>> https://github.com/ffainelli/bqlmon
>>>
>>> My Kirkwood setup at home is weak; I basically never reach full link
>>> speed. So this might be somewhat academic unless you set the link speed to
>>> 100 or 10 using the ethtool command. (It seems like a good idea to test
>>> those speeds even if you can do better, though.) You probably also want to
>>> start with offloads (tso, gso, gro) disabled using ethtool, because they
>>> aggregate packets.
>>>
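For reference, that would be something like:

  # force 100 Mbit so the CPU can actually fill the link
  ethtool -s eth0 speed 100 duplex full autoneg off
  # turn off the offloads that batch packets together
  ethtool -K eth0 tso off gso off gro off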
>>
>> A quick test with a 100M setting, connected to a gigabit switch, running
>> flent tcp_download, shows ping under load increasing to about 8 ms.
>> Conclusion: the Debian kirkwood kernel probably isn't doing BQL for me :).
Wrong direction, I think: BQL acts on the transmit queue, so tcp_download
mostly exercises the other end's BQL. Try tcp_upload.
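Something like this on the kirkwood box, once netserver is running on the
other machine (address is a placeholder; flags follow the usual flent
quick-start):

  flent tcp_upload -p all_scaled -l 60 -H 192.168.1.2 -t "kirkwood 100M" -o upload.png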
>
> 8ms of latency under load is doing very well. What are you expecting?
>
> David Lang
>
>
>>> Flent can do this test and generate pretty graphs, including a time
>>> series (plot type "all_scaled") and a frequency distribution for the ping
>>> ("ping_cdf"). Flent is a frontend to the netperf network performance
>>> tester. You could use a directly connected laptop and run your own netperf
>>> server (netserver command). You'll need to set up static IPs on both ends
>>> for the duration... if headless, make sure you have alternative console
>>> access :).
>>>
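For a back-to-back cable, the setup is just (interface names and addresses
are examples):

  # on the laptop
  ip addr add 10.0.0.1/24 dev eth0
  netserver
  # on the kirkwood box
  ip addr add 10.0.0.2/24 dev eth0
  ping -c 3 10.0.0.1     # sanity check before testing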
>>> The normal Flent test is RRUL, which is two-way. tcp_2up would be
>>> better, to avoid testing both ends' BQL at the same time. If you want to
>>> run tcp_2up the other way round, so you only need netserver on the ARM, try
>>> using '--swap-up-down'.
>>>
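So, with netserver on the ARM box and the test driven from the laptop, that
would look roughly like:

  # swap directions so the bulk data still flows out of the ARM's tx queue
  flent tcp_2up --swap-up-down -p all_scaled -l 60 -H 10.0.0.2 -o tcp_2up.png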
>>> Alan
>>
>>
--
Dave Täht
worldwide bufferbloat report:
http://www.dslreports.com/speedtest/results/bufferbloat
And:
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast