[Cerowrt-devel] sqm: policing inbound instead, at higher rates

Dave Taht dave.taht at gmail.com
Sat Nov 29 15:13:19 EST 2014

I had discarded conventional policing early on: it was very hard to
find a good setting for the burst parameter, particularly at lower
rates, and all the examples on the internet were broken for IPv6.

That said, at higher inbound rates (50 Mbit and up), htb + fq_codel
has been running out of CPU for us, and something lighter-weight has
seemed needed.
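For reference, sqm shapes inbound by redirecting ingress traffic to an
ifb device and running htb + fq_codel there (the ifb4ge00 device in
the stats below). A minimal sketch of that kind of setup follows; the
real sqm scripts do considerably more, so treat the details here as
assumptions, not what sqm actually runs:

```shell
#!/bin/sh
# Sketch of an ifb-based ingress shaper, the approach the policer
# below replaces. Interface names and the 50mbit rate match this post;
# everything else is a simplified assumption.
RATE=50mbit
IFACE=ge00
IFB=ifb4ge00

ip link add $IFB type ifb 2>/dev/null
ip link set $IFB up

# Redirect all ingress traffic on $IFACE to the ifb device.
tc qdisc add dev $IFACE handle ffff: ingress
tc filter add dev $IFACE parent ffff: protocol all prio 10 \
        u32 match u32 0 0 \
        action mirred egress redirect dev $IFB

# Shape on the ifb with htb, and fq_codel underneath it.
tc qdisc add dev $IFB root handle 1: htb default 10
tc class add dev $IFB parent 1: classid 1:10 htb rate $RATE
tc qdisc add dev $IFB parent 1:10 fq_codel
```

The redirect plus the htb rate limiter is where the extra CPU cost
comes from, which is what motivates trying a plain policer instead.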

So the following script handles all inbound traffic with a policer instead:


RATE=50mbit # obviously set this for your rate
IFACE=ge00  # obviously set this for your interface

tc qdisc del dev $IFACE ingress
tc qdisc add dev $IFACE handle ffff: ingress
tc filter add dev $IFACE parent ffff: protocol all prio 999 \
        u32 match ip protocol 0 0x00 \
        police rate $RATE burst 1000k drop flowid :1

Compared to sqm it drops a LOT more packets:


qdisc ingress ffff: parent ffff:fff1 ----------------
 Sent 805039891 bytes 540815 pkt (dropped 1743, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

vs htb + fq_codel:

root at lorna-gw:~# tc -s qdisc show dev ifb4ge00
qdisc htb 1: root refcnt 2 r2q 10 default 10 direct_packets_stat 0
direct_qlen 32
 Sent 829104461 bytes 551557 pkt (dropped 0, overlimits 1075374 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 110: parent 1:10 limit 1001p flows 1024 quantum 300
target 5.0ms interval 100.0ms ecn
 Sent 829104461 bytes 551557 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1514 drop_overlimit 0 new_flow_count 6222 ecn_mark 155
  new_flows_len 1 old_flows_len 1

(*0* drops, with 155 ECN marks)

But this script achieves about the same bandwidth and latency results
on the rrul test as sqm does, and it is certainly possible to write a
smarter, gentler policer along codel principles: adding support for
marking in addition to dropping, and being less of a brick wall in
general.

The *huge* win: at 50 Mbit down, this leaves 46% of CPU free on a
cerowrt box versus about 11% for htb + fq_codel on the rrul test.


I am not in a position to try higher rates today, but if those of you
running at 60 Mbit+ would give this a try (basically: run sqm, do a
test, then run this script and do another test), I think this might
get us well past 100 Mbit on inbound without much overall harm. (But
do test with normal use for a while, particularly at longer RTTs.)

And still I don't know a good guideline for testing and setting the
burst parameter (anyone?), but smarter policing seems to be a good
start toward alleviating the worst effects of bufferbloat on ingress.
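One rule of thumb (an assumption on my part, not something validated
in this post) is to size burst to cover some small time window at the
configured rate; by that arithmetic the 1000k burst above corresponds
to roughly 160 ms at 50 Mbit/s. A sketch of the calculation, with a
hypothetical burst_bytes helper:

```shell
#!/bin/sh
# Hypothetical helper: bytes of burst needed to cover window_ms
# milliseconds at rate_mbit megabits per second.
# burst_bytes = rate_mbit * 1000000 / 8 * window_ms / 1000
#             = rate_mbit * window_ms * 125
burst_bytes() {
    rate_mbit=$1
    window_ms=$2
    echo $(( rate_mbit * window_ms * 125 ))
}

burst_bytes 50 10    # 10 ms at 50 Mbit/s  -> 62500 bytes
burst_bytes 50 160   # 160 ms at 50 Mbit/s -> 1000000 bytes (~ the 1000k above)
```

Whether a window closer to one RTT, or several, is the right target is
exactly the open question; this only makes the trade-off explicit.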

Dave Täht

