From: dag dg
Date: Sat, 20 Jan 2018 11:51:11 -0600
To: bloat@lists.bufferbloat.net
Subject: [Bloat] Need guidance on reducing bufferbloat on a Fedora-based router

Hey folks. I'm new to this list, but I've been following bufferbloat.net's initiatives for a while. Some time ago I built a Fedora-based router using desktop hardware:

AMD FX-8120 3.1 GHz
16 GB RAM
Intel i350-T2v2 dual-port NIC

and I've pretty much been fighting bufferbloat since I built it. About a year ago I came across the sqm-scripts project, got it set up on Fedora, and began to see much better bufferbloat results.
Recently, with the Meltdown/Spectre incidents, I've been doing some diagnostics on that box, and I noticed that under the sqm-scripts config the number of available queues on my uplink interface is reduced, because I'm using the "simple" QoS script they provide:

[root@router ~]# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc htb 1: dev enp2s0f0 root refcnt 9 r2q 10 default 12 direct_packets_stat 3 direct_qlen 1000
qdisc fq_codel 120: dev enp2s0f0 parent 1:12 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 130: dev enp2s0f0 parent 1:13 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 110: dev enp2s0f0 parent 1:11 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc ingress ffff: dev enp2s0f0 parent ffff:fff1 ----------------
qdisc mq 0: dev enp2s0f1 root
qdisc fq_codel 0: dev enp2s0f1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev tun0 root refcnt 2 limit 10240p flows 1024 quantum 1500 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc htb 1: dev ifb4enp2s0f0 root refcnt 2 r2q 10 default 10 direct_packets_stat 0 direct_qlen 32
qdisc fq_codel 110: dev ifb4enp2s0f0 parent 1:10 limit 1001p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn

When I turn the sqm-scripts off:

[root@router ~]# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc mq 0: dev enp2s0f0 root
qdisc fq_codel 0: dev enp2s0f0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc mq 0: dev enp2s0f1 root
qdisc fq_codel 0: dev enp2s0f1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev tun0 root refcnt 2 limit 10240p flows 1024 quantum 1500 target 5.0ms interval 100.0ms memory_limit 32Mb ecn

The i350-series NIC I'm using supports up to 8 Tx and 8 Rx queues, depending on how many cores the CPU has.

I've read up on the developments with cake; however, just as fq_codel took a while to reach Fedora, cake is also taking a while to become available. I could compile cake from source, but I'm a little nervous about gutting the distribution's iproute2 in order to add cake support.

This hardware is serious overkill for a home connection, but I like to run a lot of network diagnostic tools to monitor the health of the network, and those cripple pretty much any standard home routing hardware I've tried. As a side note, I realize that, being Bulldozer architecture, the 8120 is technically a 4-module chip; it was just the hardware I had available at the time. I'm planning to move to a 10-core Xeon (no hyperthreading) in the future, which would leave 2 cores for the OS and 8 for the NIC.

At this point I'm just looking for some guidance on how to move forward; any suggestions would be appreciated.

~dag
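P.S. In case it's useful context, these are the inspection commands I've been using to compare the queue layout with and without sqm. The interface name is specific to my box, so adjust it for yours:

```shell
# Show how many hardware channels (queues) the i350 advertises and how
# many are currently in use.
ethtool -l enp2s0f0

# Compare the qdisc tree on just the uplink. With sqm's "simple" script
# active, the multiqueue (mq) root is replaced by a single HTB tree,
# which is why only three fq_codel leaves show up there.
tc qdisc show dev enp2s0f0

# The per-queue MSI-X interrupts stay allocated either way, so you can
# still see which CPUs are servicing each queue here:
grep enp2s0f0 /proc/interrupts
```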
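P.P.S. For completeness, the compile-cake-from-source route I've been weighing, without touching Fedora's iproute2, would look roughly like this. I haven't actually run it, so treat it as a sketch: the repository names are my understanding of where the out-of-tree code lives, not something I've verified.

```shell
# Fetch the out-of-tree cake qdisc module and the cake-aware tc tree.
git clone https://github.com/dtaht/sch_cake.git
git clone https://github.com/dtaht/tc-adv.git

# Kernel module: builds against the running kernel's headers, so the
# distro kernel package stays untouched.
cd sch_cake && make && sudo make install && sudo modprobe sch_cake && cd ..

# Userspace side: build a patched tc, then install it beside the distro
# binary under /usr/local/sbin so dnf never overwrites or conflicts
# with /sbin/tc.
cd tc-adv && ./configure && make && cd ..
sudo install -m 0755 tc-adv/tc/tc /usr/local/sbin/tc
```

If that worked, pointing sqm-scripts at `cake` instead of `fq_codel` should be the only remaining change.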