From: dag dg <dagofthedofg@gmail.com>
To: bloat@lists.bufferbloat.net
Subject: [Bloat] Need Guidance on reducing bufferbloat on a Fedora-based router
Date: Sat, 20 Jan 2018 11:51:11 -0600
Message-ID: <CAO2Qn5nxMRC_MkO6SY+NwhX2Wq_P8S=Zr_ay+-HLZdn91a8JBw@mail.gmail.com>


Hey folks. I'm new to this list, but I've been following bufferbloat.net's
initiatives for a while. Some time ago I built a Fedora-based router using
desktop hardware:

AMD FX-8120 3.1GHz
16 GB RAM
Intel i350-t2v2 dual port NIC

and I've pretty much been fighting bufferbloat since I built it. About a
year ago I came across the sqm-scripts project, got it set up on Fedora,
and began to see much better bufferbloat results.
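
For anyone unfamiliar with sqm-scripts outside of OpenWrt, the standalone
install reads a per-interface config under /etc/sqm; mine looks roughly like
the sketch below. The rates here are placeholders, and the variable names are
from my memory of the default.conf that ships with the scripts, so double-check
them against your own copy:

# /etc/sqm/enp2s0f0.iface.conf  (rates are placeholders; variable names
# recalled from the sqm-scripts default.conf, verify against your install)
IFACE=enp2s0f0
UPLINK=10000       # egress shaping rate in kbit/s
DOWNLINK=100000    # ingress shaping rate in kbit/s
SCRIPT=simple.qos  # the "simple" three-class HTB + fq_codel setup
QDISC=fq_codel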

Recently, with the Meltdown/Spectre incidents, I've been doing some
diagnostics on that box and noticed that under the sqm-scripts config the
number of queues in use on my uplink interface is reduced, because I'm
using the "simple" QoS script they provide:

[root@router ~]# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc htb 1: dev enp2s0f0 root refcnt 9 r2q 10 default 12 direct_packets_stat 3 direct_qlen 1000
qdisc fq_codel 120: dev enp2s0f0 parent 1:12 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 130: dev enp2s0f0 parent 1:13 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 110: dev enp2s0f0 parent 1:11 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc ingress ffff: dev enp2s0f0 parent ffff:fff1 ----------------
qdisc mq 0: dev enp2s0f1 root
qdisc fq_codel 0: dev enp2s0f1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev tun0 root refcnt 2 limit 10240p flows 1024 quantum 1500 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc htb 1: dev ifb4enp2s0f0 root refcnt 2 r2q 10 default 10 direct_packets_stat 0 direct_qlen 32
qdisc fq_codel 110: dev ifb4enp2s0f0 parent 1:10 limit 1001p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn

When I turn the sqm-scripts off:

[root@router ~]# tc qdisc show
qdisc noqueue 0: dev lo root refcnt 2
qdisc mq 0: dev enp2s0f0 root
qdisc fq_codel 0: dev enp2s0f0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc mq 0: dev enp2s0f1 root
qdisc fq_codel 0: dev enp2s0f1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev enp2s0f1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
qdisc fq_codel 0: dev tun0 root refcnt 2 limit 10240p flows 1024 quantum 1500 target 5.0ms interval 100.0ms memory_limit 32Mb ecn

The i350-series NIC I'm using supports up to 8 Tx and 8 Rx queues per port,
depending on how many cores the CPU has.
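
For what it's worth, the channel counts are easy to confirm with ethtool, and
sysfs shows the per-queue entries; the exact -L knob depends on the driver, so
the last command is just an example:

ethtool -l enp2s0f0                    # pre-set maximums vs. current channel settings
ls /sys/class/net/enp2s0f0/queues/     # per-queue rx-*/tx-* directories
ethtool -L enp2s0f0 combined 8         # example: raise the active channel count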

I've read up on the developments with cake; however, just as fq_codel took a
while to make its way into Fedora, cake is also taking a while to become
available. I could compile cake from source, but I'm a little nervous about
gutting the distribution's iproute2 in order to add cake support.
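
My rough plan, if I go that route (and this is untested guesswork on my part),
would be to build the out-of-tree sch_cake module against the running kernel
and put a cake-aware tc under /usr/local so the distro's iproute2 stays
intact; something like:

# out-of-tree cake module (assumes the github.com/dtaht/sch_cake repo and
# matching kernel-devel headers are installed)
git clone https://github.com/dtaht/sch_cake.git
cd sch_cake && make && sudo make install && sudo modprobe sch_cake

# cake-aware tc built from a patched iproute2 checkout ("iproute2-cake" is a
# placeholder path here); PREFIX/SBINDIR handling can differ between iproute2
# releases, so check the top-level Makefile before installing
cd ../iproute2-cake && ./configure && make
sudo make PREFIX=/usr/local SBINDIR=/usr/local/sbin install

I'd presumably then have to make sure sqm-scripts picks up the /usr/local tc
(e.g. by putting it first in PATH) whenever cake is selected as the qdisc.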

This hardware is super overkill for a home connection, but I like to run a
lot of network diagnostic tools to monitor the health of the network, and
those pretty much cripple any standard home routing hardware I use. As a
side note, I also realize that, being a Bulldozer-architecture part, the
FX-8120 is technically a 4-module chip; it was just the hardware I had
available at the time, and I'm planning to move to a 10-core Xeon (no
hyperthreading) in the future, so there would be 2 cores for the OS and 8
for the NIC.

At this point I'm just looking for some guidance on how to move forward;
any suggestions would be appreciated.

~dag

