[Codel] OpenWRT wrong adjustment of fq_codel defaults (Was: fq_codel_drop vs a udp flood)

Jesper Dangaard Brouer brouer at redhat.com
Fri May 6 05:42:43 EDT 2016


Hi Felix,

This is an important fix for OpenWRT, please read!

OpenWRT changed the default fq_codel sch->limit from 10240 to 1024
without also adjusting q->flows_cnt.  Eric explains below why you must
also reduce the number of buckets (q->flows_cnt) for this not to break
CoDel's ability to react; just reduce it to 128.
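
To spell out the arithmetic behind the 128 suggestion (upstream defaults
plus Eric's rule of thumb of ~8 packets per bucket, quoted below; my
numbers, not new measurements):

  upstream default : limit 10240 / flows 1024  ~= 10 packets per bucket
  OpenWRT 12cd6578 : limit  1024 / flows 1024   =  1 packet  per bucket
  suggested fix    : limit  1024 / flows  128   =  8 packets per bucket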

Problematic OpenWRT commit in question:
 http://git.openwrt.org/?p=openwrt.git;a=patch;h=12cd6578084e
 12cd6578084e ("kernel: revert fq_codel quantum override to prevent it from causing too much cpu load with higher speed (#21326)")


I also highly recommend you cherry-pick this very recent commit:
 net-next: 9d18562a2278 ("fq_codel: add batch ability to fq_codel_drop()")
 https://git.kernel.org/davem/net-next/c/9d18562a2278

This should fix the very high CPU usage seen when fq_codel goes into
drop mode.  The problem is that drop mode was considered rare, and
implementation-wise it was chosen to be the expensive path (to save
cycles in normal mode).  Unfortunately it is easy to trigger with a UDP
flood.  Drop mode is especially expensive for smaller devices, as it
scans a 4 KB array (1024 buckets x 4-byte backlog counters), i.e. up to
64 cache-line misses per dropped packet on these small devices!
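
For illustration, here is a simplified, userspace-style sketch of the
scan fq_codel_drop() performs when the qdisc is full (names shortened;
see the real code in net/sched/sch_fq_codel.c):

  /* Sketch only: find the flow with the largest backlog.  With
   * flows_cnt = 1024 the backlogs[] array is 1024 * 4 bytes = 4 KB,
   * i.e. up to 64 cache lines touched for every packet dropped.
   */
  static unsigned int find_fattest_flow(const unsigned int *backlogs,
                                        unsigned int flows_cnt)
  {
          unsigned int maxbacklog = 0, idx = 0, i;

          for (i = 0; i < flows_cnt; i++) {
                  if (backlogs[i] > maxbacklog) {
                          maxbacklog = backlogs[i];
                          idx = i;
                  }
          }
          return idx;
  }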

The fix is to allow drop mode to bulk-drop more packets when it is
entered (by default up to 64 packets per drop).  That way we don't
suddenly pay a significantly higher processing cost per packet, but
instead amortize the cost of the scan across the dropped batch.
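
The idea behind the batching, as a minimal sketch (struct flow and
drop_head() are stand-ins for the kernel's fq_codel_flow and
dequeue_head()/kfree_skb(); this is not the literal diff):

  struct flow {
          unsigned int qlen;      /* packets currently queued in this flow */
  };

  static void drop_head(struct flow *f)
  {
          f->qlen--;              /* kernel: dequeue head skb and free it */
  }

  /* Once the fattest flow is found (one 4 KB scan), drop up to 'batch'
   * packets from it (default 64), so the scan cost is amortized over
   * the whole batch instead of being paid once per dropped packet.
   */
  static unsigned int drop_from_fat_flow(struct flow *f, unsigned int batch)
  {
          unsigned int dropped = 0;

          while (dropped < batch && f->qlen > 0) {
                  drop_head(f);
                  dropped++;
          }
          return dropped;
  }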

To Eric: should we recommend that OpenWRT also adjust the default (max)
bulk drop of 64, given that we recommend a bucket count of 128?  (With
128 buckets the backlog array to scan is only 128 * 4 = 512 bytes, i.e.
8 cache lines, but their CPUs are also much slower.)

--Jesper


On Thu, 05 May 2016 12:23:27 -0700 Eric Dumazet <eric.dumazet at gmail.com> wrote:

> On Thu, 2016-05-05 at 19:25 +0300, Roman Yeryomin wrote:
> > On 5 May 2016 at 19:12, Eric Dumazet <eric.dumazet at gmail.com> wrote:  
> > > On Thu, 2016-05-05 at 17:53 +0300, Roman Yeryomin wrote:
> > >  
> > >>
> > >> qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024
> > >> quantum 1514 target 5.0ms interval 100.0ms ecn
> > >>  Sent 12306 bytes 128 pkt (dropped 0, overlimits 0 requeues 0)
> > >>  backlog 0b 0p requeues 0
> > >>   maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
> > >>   new_flows_len 0 old_flows_len 0  
> > >
> > >
> > > Limit of 1024 packets and 1024 flows is not wise I think.
> > >
> > > (If all buckets are in use, each bucket has a virtual queue of 1 packet,
> > > which is almost the same as having no queue at all)
> > >
> > > I suggest to have at least 8 packets per bucket, to let Codel have a
> > > chance to trigger.
> > >
> > > So you could either reduce number of buckets to 128 (if memory is
> > > tight), or increase limit to 8192.  
> > 
> > Will try, but what I've posted is default, I didn't change/configure that.  
> 
> fq_codel has a default of 10240 packets and 1024 buckets.
> 
> http://lxr.free-electrons.com/source/net/sched/sch_fq_codel.c#L413
> 
> If someone changed that in the linux variant you use, he probably should
> explain the rationale.


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer

