[Codel] fq_codel_drop vs a udp flood

Dave Taht dave.taht at gmail.com
Thu May 5 14:16:31 EDT 2016


On Thu, May 5, 2016 at 10:39 AM, Roman Yeryomin <leroi.lists at gmail.com> wrote:
> On 5 May 2016 at 19:59, Jonathan Morton <chromatix99 at gmail.com> wrote:
>>> Having the same (low) speeds.
>>> So it didn't help at all :(
>>
>> Although the new “emergency drop” code is now dropping batches of consecutive packets, Codel is also still dropping individual packets in between these batches, probably at a high rate.  Since all fragments of an original packet are required to reassemble it, and Codel doesn’t link related fragments when deciding to drop, each fragment lost in this way reduces throughput efficiency.  Only a fraction of the original packets can be reassembled correctly, but the surviving (yet useless) fragments still occupy link capacity.
>>
>> This phenomenon is not Codel specific; I would also expect to see it on most other AQMs, and definitely on RED variants, including PIE.  Fortunately for real traffic, it normally arises only on artificial traffic such as iperf runs with large UDP packets.  Unfortunately for AQM advocates, iperf uses large UDP packets by default, and it is very easy to misinterpret the results unfavourably for AQM (as opposed to unfavourably for iperf).
>>
>> If you re-run the test with iperf set to a packet size compatible with the path MTU, you should see much better throughput numbers due to the elimination of fragmented packets.  A UDP payload size of 1280 bytes is a safe, conservative figure for a normal MTU in the vicinity of 1500.
>
> Setting the packet size to 1280 (-l1280) instead of 1472, I got an even lower
> speed (18-20 Mbps).
> Other ideas?
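
Just to put rough numbers on the fragment-loss effect Jonathan is
describing, here is a back-of-the-envelope sketch (not a measurement:
it assumes IPv4 with no options, a 1500-byte MTU, that codel drops
each fragment independently, and the payload sizes and drop rates
below are purely illustrative):

    import math

    def fragments(udp_payload, mtu=1500, ip_hdr=20, udp_hdr=8):
        # IP payload bytes per fragment; non-final fragments must carry
        # a multiple of 8 bytes
        per_frag = (mtu - ip_hdr) // 8 * 8
        return math.ceil((udp_payload + udp_hdr) / per_frag)

    for payload in (1280, 8192):          # illustrative sizes, not iperf defaults
        for p in (0.05, 0.20):            # made-up per-fragment drop probabilities
            n = fragments(payload)
            # all n fragments must survive for reassembly; the orphaned
            # survivors of broken datagrams still burn link capacity, so:
            efficiency = (1 - p) ** (n - 1)   # useful bytes / bytes put on the link
            print(f"payload={payload:5d}  frags={n}  drop={p:.0%}  "
                  f"link efficiency={efficiency:.0%}")

A payload that fits in one fragment stays at 100% because a drop only
costs that single datagram, which is the point of matching the MTU;
and since -l1280 got you a lower number rather than a higher one,
fragmentation probably isn't the bottleneck in your test anyway.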

How about:

completely dropping your hand-picked patch set and joining us on Michal's tree?

https://github.com/kazikcz/linux/commits/fqmac-v4%2Bdqlrfc%2Bcpuregrfix

He just put a commit in there on top of that patchset that might point
at the particular problem you're seeing, and the code moves all of
fq_codel into the mac80211 layer, where it can be scheduled better.

I'm still working off the prior patch set, finding bugs in 802.11e
(which, for all I know, pre-exist):

http://blog.cerowrt.org/post/cs5_lockout/

(I would love it if people had more insight into the VI queue)

and I wrote up the first tests of the prior (fqmac 3.5) patch here:

http://blog.cerowrt.org/post/ath10_ath9k_1/

with pretty pictures, and a circle and arrow on the back of each one
to be used as evidence against us.

I did just get another 3x3 card to play with, but I'd like to finish
up comprehensively evaluating what I've got against mainline first, at
the bandwidths (ath9k to ath10k) I can currently achieve, and that
will take till Monday, at least. Since your hardware is weaker than
mine (single core?), it would be good to amble along in parallel.

>>> A limit of 1024 packets and 1024 flows is not wise, I think.
>>>
>>> (If all buckets are in use, each bucket has a virtual queue of 1 packet,
>>> which is almost the same as having no queue at all.)
>>
>> This, while theoretically important in extreme cases with very large numbers of flows, is not relevant to the specific test in question.
>>
>>  - Jonathan Morton
>>
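
(For what it's worth, on the standalone qdisc those knobs are tunable
from userspace, so anyone who wants to experiment can do something
like the line below; wlan0 and the numbers are placeholders, this does
not apply to the mac80211-integrated version discussed above, and, as
Jonathan notes, it shouldn't matter for this particular test.)

    tc qdisc replace dev wlan0 root fq_codel limit 10240 flows 1024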



-- 
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org

