From: Dave Taht
To: "cerowrt-devel@lists.bufferbloat.net", bloat
Date: Tue, 12 May 2015 09:15:29 -0700
Subject: [Cerowrt-devel] quick survey: actual drop and mark stats from live sqm-scripts + fq_codel'd networks?
I am curious as to the drop and mark statistics for those actively using their networks over the course of days, but NOT obsessively testing dslreports' speedtest as I have been :). A cron job running once an hour would work, but snmp polling with mrtg and parsing the tc output would be better.

But a quick survey would be interesting, if you could dump your

uptime
tc -s qdisc show dev whatever
tc -s qdisc show dev your_inbound_ifb_device

here? And also check to see if you are getting drops or marks on your core wifi interfaces. For example, a great deal of the dropping behavior for me has moved to wifi over the last year (as we upgraded from 20mbit down to 60 or 110mbit down), particularly on my longer-distance links, but also on links that have very good same-room connectivity.
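Summing the relevant counters out of `tc -s qdisc` output is easy to script. A minimal sketch (the function name is mine, and it assumes the stock iproute2 fq_codel stats layout):

```shell
# Sum the fq_codel "dropped" and "ecn_mark" counters across all qdiscs
# reported by `tc -s qdisc`. Sketch only -- parse_fq_codel_stats is a
# name I made up; adjust if your iproute2 prints a different layout.
parse_fq_codel_stats() {
    awk '{
        for (i = 1; i < NF; i++) {
            # "Sent N bytes N pkt (dropped 487, overlimits 0 requeues 0)"
            if ($i == "(dropped") { v = $(i + 1); sub(/,/, "", v); drops += v }
            # "... new_flow_count 1637 ecn_mark 238 ..."
            if ($i == "ecn_mark") { marks += $(i + 1) }
        }
    } END { printf "dropped=%d ecn_mark=%d\n", drops, marks }'
}

# e.g. from an hourly cron job (ge00 is my WAN device; use your own):
#   tc -s qdisc show dev ge00 | parse_fq_codel_stats
```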
example wifi interface (6 days of traffic):

qdisc fq_codel 30: parent 1:3 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 3409244008 bytes 3400248 pkt (dropped 487, overlimits 0 requeues 2703)
 backlog 0b 0p requeues 2703
  maxpacket 1514 drop_overlimit 0 new_flow_count 1637 ecn_mark 0
  new_flows_len 0 old_flows_len 0

this is the same router's external interface (60mbit downlink)
(yes, I deployed ecn on every box I could):

qdisc fq_codel 120: parent 1:12 limit 1001p flows 1024 quantum 1500 target 5.0ms interval 100.0ms ecn
 Sent 741392066 bytes 8765559 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1514 drop_overlimit 0 new_flow_count 2815747 ecn_mark 0
  new_flows_len 1 old_flows_len 1
qdisc fq_codel 130: parent 1:13 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 362010241205 bytes 268428951 pkt (dropped 28391, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1514 drop_overlimit 0 new_flow_count 34382791 ecn_mark 238
  new_flows_len 1 old_flows_len 3

tc -s qdisc show dev ge00 (uplink, 10mbit):

qdisc fq_codel 120: parent 1:12 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 19054721473 bytes 141364936 pkt (dropped 2251, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1514 drop_overlimit 29 new_flow_count 37418891 ecn_mark 15593
  new_flows_len 0 old_flows_len 2

-- 
Dave Täht
Open Networking needs **Open Source Hardware**
https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67