From: Dave Taht
To: cerowrt-devel@lists.bufferbloat.net
Date: Sat, 29 Nov 2014 12:13:19 -0800
Subject: [Cerowrt-devel] sqm: policing inbound instead, at higher rates

I had discarded conventional policing early on, as it was very hard to
find a good setting for the burst parameter, particularly at lower
rates, and also because all the examples on the internet were broken
for ipv6. That said, once you have higher rates inbound (like
50mbit+), htb + fq_codel has been running out of cpu for us, and
something lighter weight has seemed needed. So the following script
does all traffic (ipv4 and ipv6) correctly using a policer.
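One note before the script: the burst of 1000k below is more or less a
blind pick. As back-of-envelope arithmetic only (not a validated
guideline), you can at least work out how much line time a given burst
covers; if I recall correctly, tc reads "1000k" as 1024000 bytes:

  # bucket depth in ms of line rate at 50mbit (pure arithmetic,
  # not a tuning recommendation):
  # 50000000 bits/s / 8 = 6250000 bytes/s
  echo $(( 1024000 * 1000 / (50000000 / 8) ))   # -> 163

So 1000k is roughly 164ms of line rate at 50mbit before the policer
starts dropping hard; scale it with RATE if you change the rate.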
#!/bin/sh
RATE=50mbit  # obviously, set this for your rate
IFACE=ge00   # obviously, set this for your interface

tc qdisc del dev $IFACE ingress
tc qdisc add dev $IFACE handle ffff: ingress
# the u32 match uses a 0x00 mask, so it matches every packet
# and polices ipv6 as well as ipv4
tc filter add dev $IFACE parent ffff: protocol all prio 999 \
    u32 match ip protocol 0 0x00 \
    police rate $RATE burst 1000k drop flowid :1

Compared to sqm it drops a LOT more packets:

policing:

qdisc ingress ffff: parent ffff:fff1 ----------------
 Sent 805039891 bytes 540815 pkt (dropped 1743, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

vs htb + fq_codel:

root@lorna-gw:~# tc -s qdisc show dev ifb4ge00
qdisc htb 1: root refcnt 2 r2q 10 default 10 direct_packets_stat 0 direct_qlen 32
 Sent 829104461 bytes 551557 pkt (dropped 0, overlimits 1075374 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 110: parent 1:10 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 829104461 bytes 551557 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 maxpacket 1514 drop_overlimit 0 new_flow_count 6222 ecn_mark 155 new_flows_len 1 old_flows_len 1

(that is *0* drops, with 155 ecn marks)

But this script achieves about the same results, bandwidth- and
latency-wise, on the rrul test as sqm does, and it is certainly
possible to write a smarter, gentler policer along codel principles,
adding support for marking in addition to dropping and being less of
a brick wall in general.

The *huge* win: at 50mbit down, this leaves 46% of cpu free on a
cerowrt box, versus about 11% for htb + fq_codel, on the rrul test.

http://snapon.lab.bufferbloat.net/~d/lorna_comcast/policervssqm.png

I am not in a position to try higher rates today, but if those of you
running at 60mbit+ would give this a try (basically: run sqm, do a
test, then run this script, do another test) I think this might get
us well past 100mbit inbound without much overall harm. (But do test
with normal use for a while, particularly at longer RTTs.)

And I still don't know a good guideline for testing and setting the
burst parameter (anyone?), but smarter policing seems to be a good
start on alleviating the worst effects of bufferbloat on ingress.

-- 
Dave Täht

http://www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks
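P.S. Here is the A/B procedure spelled out as a sketch, for the
record. The netperf-wrapper invocation and the sqm init script path
are from memory, so adjust to your own setup:

  #!/bin/sh
  # compare sqm (htb + fq_codel) against the policer on the same link
  SERVER=snapon.lab.bufferbloat.net   # whatever netperf server you test against

  # 1) with sqm enabled as usual:
  netperf-wrapper -H $SERVER -l 60 -t sqm rrul

  # 2) swap in the policer and repeat:
  /etc/init.d/sqm stop    # assumed path; stop sqm however you normally do
  sh ./police.sh          # the script from this mail, saved locally
  netperf-wrapper -H $SERVER -l 60 -t policer rrul

Watch cpu with top while each run is going, and compare the two plots
afterwards.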