From: Benjamin Cronce <bcronce@gmail.com>
To: Noah Causin <n0manletter@gmail.com>
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Questions About Switch Buffering
Date: Sun, 12 Jun 2016 13:25:17 -0500 [thread overview]
Message-ID: <CAJ_ENFGV0+1gf8nos5NTCr98syMXxgB7Sj4CA0oyx9CtTqR7Bw@mail.gmail.com> (raw)
In-Reply-To: <151299a8-87ec-6a8a-b44b-9f710c31a46f@gmail.com>
It's common for routers and firewalls to have AQMs because they mostly deal
with a high-to-low bandwidth transition from LAN to WAN. Internal networks
rarely have bandwidth issues, and congestion only happens when you don't
have enough bandwidth. On a LAN it's relatively easy to increase bandwidth,
either by bonding ports or by purchasing 10Gb or 40Gb links. If you have a
1Gb link from your LAN to your router and it's showing bloat issues, I
recommend purchasing a 10Gb uplink to your switch. AQMs are difficult to do
at line rate, even in hardware, and it will almost always be the case that
a faster port is cheaper and better than implementing an AQM in the switch.
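
To make that concrete, here is roughly what those two options look like on
a Linux box. These are sketches, not drop-in configs: the interface names
(eth1, eth2, bond0, eth0) and the 95mbit rate are placeholders for your own
setup. Bonding two switch-facing ports with iproute2 (the switch side has
to be set up for LACP as well):

  # LACP bond of two ports into a single logical link
  ip link add bond0 type bond mode 802.3ad
  ip link set eth1 down
  ip link set eth1 master bond0
  ip link set eth2 down
  ip link set eth2 master bond0
  ip link set bond0 up

And on the router, where the high-to-low transition actually lives, one
common pattern is to shape egress to just under the uplink rate so the
queue forms where fq_codel can manage it:

  # Shape egress to just below the uplink rate, then let fq_codel manage
  # the queue that forms behind the shaper.
  tc qdisc replace dev eth0 root handle 1: htb default 1
  tc class add dev eth0 parent 1: classid 1:1 htb rate 95mbit
  tc qdisc add dev eth0 parent 1:1 fq_codel
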
One paper I read argued that bandwidth is increasing much faster than our
ability to process packets. Moving data is relatively easy compared to
doing branching logic against the data. The paper went on to review several
400Gb ports, most of which were actually capable of near line rate at
400Gb, but even the best one nose-dived to 150Gb once a simple strict
priority QoS was enabled. It's becoming a physics issue: doing complex
logic takes more transistors, and that is at odds with moving the data
faster through the system.
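
For anyone who hasn't run into the term, "strict priority" just means the
scheduler always drains the highest-priority queue before it will touch a
lower one. The Linux prio qdisc is the software analogue of what that paper
enabled in hardware; a minimal sketch (eth0 is a placeholder, and this says
nothing about how an ASIC implements it):

  # Three bands; band 0 is always serviced before band 1, band 1 before 2.
  tc qdisc add dev eth0 root handle 1: prio
  # Optional: an fq_codel under each band so no band builds a deep FIFO.
  tc qdisc add dev eth0 parent 1:1 fq_codel
  tc qdisc add dev eth0 parent 1:2 fq_codel
  tc qdisc add dev eth0 parent 1:3 fq_codel

Even in this toy form, every packet has to be classified into a band and
the bands checked in order on every dequeue, and that per-packet branching
is the kind of work the paper says is falling behind raw bandwidth.
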
At high link speeds in the future, think 1Tb+, QoS may have to go away
unless we find some sort of photonic processing breakthrough. The good news
is it seems like there's no ceiling on the amount of bandwidth we can push
over fiber.
On Sun, Jun 12, 2016 at 12:44 PM, Noah Causin <n0manletter@gmail.com> wrote:
> I have some questions about switch buffering.
>
> Are there any good switches that have modern AQMs in them like fq_codel?
>
> Also, if a home router has a built-in switch, is the buffering controlled
> by the AQM, or do the switches have their own internal buffering that takes
> precedence?
>
> The scenario I am thinking of is two ports trying to feed data out of a
> single port on a switch.
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
Thread overview: 9+ messages
2016-06-12 17:44 Noah Causin
2016-06-12 17:48 ` Steinar H. Gunderson
2016-06-12 18:25 ` Benjamin Cronce [this message]
2016-06-12 21:24 ` Steinar H. Gunderson
2016-06-12 22:01 ` Jesper Louis Andersen
2016-06-13 1:07 ` Benjamin Cronce
2016-06-13 1:50 ` Jonathan Morton
2016-06-13 4:34 ` Dave Taht
2016-06-13 8:23 ` Steinar H. Gunderson