Hope this helps.

# uptime
 00:16:17 up 4 days, 10 min,  load average: 0.31, 0.32, 0.26

# tc -s qdisc show dev ge00
qdisc htb 1: root refcnt 2 r2q 10 default 12 direct_packets_stat 0 direct_qlen 1000
 Sent 1480380789 bytes 5957584 pkt (dropped 0, overlimits 2385541 requeues 0)
 backlog 0b 8p requeues 0
qdisc nfq_codel 110: parent 1:11 limit 1001p flows 1024 quantum 300 target 26.7ms interval 121.7ms
 Sent 408736 bytes 2606 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 2509 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc nfq_codel 120: parent 1:12 limit 1001p flows 1024 quantum 300 target 26.7ms interval 121.7ms
 Sent 1476234652 bytes 5931950 pkt (dropped 69533, overlimits 0 requeues 0)
 backlog 1696b 8p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 656822 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc nfq_codel 130: parent 1:13 limit 1001p flows 1024 quantum 300 target 26.7ms interval 121.7ms
 Sent 3737401 bytes 23028 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 192 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc ingress ffff: parent ffff:fff1 ----------------
 Sent 8827517071 bytes 8334242 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

# tc -s qdisc show dev sw00
qdisc mq 1: root
 Sent 23110679686 bytes 161711219 pkt (dropped 742, overlimits 0 requeues 3381)
 backlog 0b 0p requeues 3381
qdisc fq_codel 10: parent 1:1 limit 800p flows 1024 quantum 500 target 10.0ms interval 100.0ms
 Sent 474847 bytes 3287 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 20: parent 1:2 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 1511242254 bytes 1537435 pkt (dropped 0, overlimits 0 requeues 1030)
 backlog 0b 0p requeues 1030
  maxpacket 1514 drop_overlimit 0 new_flow_count 111 ecn_mark 3
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 30: parent 1:3 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 21588284634 bytes 160162796 pkt (dropped 742, overlimits 0 requeues 2351)
 backlog 0b 0p requeues 2351
  maxpacket 1514 drop_overlimit 0 new_flow_count 513 ecn_mark 9
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 40: parent 1:4 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms
 Sent 10677951 bytes 7701 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0

# tc -s qdisc show dev sw10
qdisc mq 1: root
 Sent 850417587 bytes 1202833 pkt (dropped 0, overlimits 0 requeues 12)
 backlog 0b 0p requeues 12
qdisc fq_codel 10: parent 1:1 limit 800p flows 1024 quantum 500 target 10.0ms interval 100.0ms
 Sent 10416 bytes 72 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 20: parent 1:2 limit 800p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 15304878 bytes 13103 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 30: parent 1:3 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 835102293 bytes 1189658 pkt (dropped 0, overlimits 0 requeues 12)
 backlog 0b 0p requeues 12
  maxpacket 286 drop_overlimit 0 new_flow_count 3 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 40: parent 1:4 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0

--
David P.



On Tue, May 12, 2015 at 12:15 PM, Dave Taht <dave.taht@gmail.com> wrote:
I am curious about the drop and mark statistics, gathered over the
course of days, for those actively using their networks but NOT
obsessively testing dslreports' speedtest as I have been :).

A cron job running once an hour would work; SNMP polling with MRTG,
parsing the tc output, would be better.
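As a sketch of the cron approach, something like the script below would
append an hourly snapshot. The interface list (ge00, sw00, sw10) and the
log path are assumptions taken from this thread; adjust for your router.

```shell
#!/bin/sh
# Append a timestamped snapshot of qdisc statistics to a log file.
# Interface names and the log path are examples -- adjust for your setup.
LOG=${LOG:-/tmp/qdisc-stats.log}
{
  date
  uptime
  for dev in ge00 sw00 sw10; do
    echo "### tc -s qdisc show dev $dev"
    # tc may be missing on a non-router box; skip quietly if so
    command -v tc >/dev/null 2>&1 && tc -s qdisc show dev "$dev"
  done
  echo
} >> "$LOG"
```

Installed from cron with an entry along the lines of
`0 * * * * /usr/lib/log-qdisc-stats.sh` (path hypothetical), this gives
the hourly series without any SNMP machinery.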

But a quick survey would be interesting; could you dump your

uptime
tc -s qdisc show dev whatever
tc -s qdisc show dev your_inbound_ifb_device

here?

Also check whether you are getting drops or marks on your core wifi
interfaces. For example, a great deal of the dropping behavior for me
has moved to wifi over the last year (as we upgraded from 20mbit down
to 60 or 110mbit down), particularly on my longer-distance links but
also on links with very good same-room connectivity.
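Pulling the drop and mark counters out of that output is a short awk
job. A sketch, with field positions assumed from the tc output format
shown in this thread, run here against an embedded sample rather than a
live interface (in practice you would pipe in `tc -s qdisc show dev ge00`):

```shell
#!/bin/sh
# Summarize per-qdisc drop and ECN-mark counts from `tc -s qdisc` output.
# The here-doc is a canned sample; field positions are assumed from the
# output format quoted in this thread.
summary=$(awk '
  /^qdisc/    { q = $2 " " $3 }                    # remember "type handle:"
  /dropped/   { gsub(/[(,]/, ""); drops[q] = $7 }  # "... (dropped N, ..." line
  /ecn_mark/  { marks[q] = $8 }                    # "... ecn_mark N" line
  END { for (q in drops)
          printf "%-16s dropped=%s ecn_mark=%s\n", q, drops[q], marks[q] }
' <<'EOF'
qdisc fq_codel 130: parent 1:13 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 362010241205 bytes 268428951 pkt (dropped 28391, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1514 drop_overlimit 0 new_flow_count 34382791 ecn_mark 238
EOF
)
echo "$summary"
```

The same awk fragment could feed MRTG or a cron log, turning the raw
dumps above into a per-qdisc time series of drops and marks.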

example wifi interface (6 days of traffic)

qdisc fq_codel 30: parent 1:3 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 3409244008 bytes 3400248 pkt (dropped 487, overlimits 0 requeues 2703)
 backlog 0b 0p requeues 2703
  maxpacket 1514 drop_overlimit 0 new_flow_count 1637 ecn_mark 0
  new_flows_len 0 old_flows_len 0


This is the same router's external interface (60mbit downlink). (Yes, I
deployed ECN on every box I could.)


qdisc fq_codel 120: parent 1:12 limit 1001p flows 1024 quantum 1500 target 5.0ms interval 100.0ms ecn
 Sent 741392066 bytes 8765559 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1514 drop_overlimit 0 new_flow_count 2815747 ecn_mark 0
  new_flows_len 1 old_flows_len 1
qdisc fq_codel 130: parent 1:13 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 362010241205 bytes 268428951 pkt (dropped 28391, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1514 drop_overlimit 0 new_flow_count 34382791 ecn_mark 238
  new_flows_len 1 old_flows_len 3

tc -s qdisc show dev ge00 (uplink, 10mbit)

qdisc fq_codel 120: parent 1:12 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 19054721473 bytes 141364936 pkt (dropped 2251, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1514 drop_overlimit 29 new_flow_count 37418891 ecn_mark 15593
  new_flows_len 0 old_flows_len 2




--
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel