From: Benjamin Cronce <bcronce@gmail.com>
To: Mikael Abrahamsson <swmike@swm.pp.se>
Cc: "Dave Täht" <dave@taht.net>, bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] new "vector packet processing" based effort to speed up networking
Date: Fri, 12 Feb 2016 15:09:08 -0600 [thread overview]
Message-ID: <CAJ_ENFFNXj3_ks9LC=9H4H9GFyZR0nb5G+EgcjP69_DC4jAPQQ@mail.gmail.com> (raw)
In-Reply-To: <alpine.DEB.2.02.1602120933550.11524@uplift.swm.pp.se>
Modern CPUs could push a lot of PPS, but they can't with current network
stacks. Linux or FreeBSD on a modern 3.5 GHz octa-core Xeon can't push
enough 64-byte packets to saturate a 100 Mb link. pfSense 3.0 was looking
to use DPDK to do line-rate 40 Gb, but they are also looking at
alternatives like netmap. pfSense 3.0 is also aiming for line-rate 10 Gb+
and eventually 40 Gb VPN/IPsec, which DPDK would make viable. There's also
talk of eventually scaling line rate into the 80 Gb range, with full
stateful firewalling and NAT.
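For a sense of the budget involved, here is a quick back-of-the-envelope
(my own rough numbers, not measurements: 64-byte frames plus 8 bytes of
preamble and a 12-byte inter-frame gap, and one 3.5 GHz core):

    # Minimum-size Ethernet packet rates at line rate, and the CPU cycle
    # budget one core has per packet. Assumes 64-byte frames plus 8 bytes
    # of preamble and a 12-byte inter-frame gap (84 bytes on the wire).
    WIRE_BYTES = 64 + 8 + 12
    CPU_HZ = 3.5e9  # one 3.5 GHz core

    for gbps in (10, 40, 100):
        pps = gbps * 1e9 / (WIRE_BYTES * 8)
        print(f"{gbps:3d} GbE: {pps/1e6:6.2f} Mpps, "
              f"{CPU_HZ / pps:6.0f} cycles/packet on one 3.5 GHz core")

That works out to roughly 15/60/149 Mpps at 10/40/100 GbE, i.e. only a
couple of hundred cycles per packet per core at 10 GbE, which is why
kernel-bypass approaches like DPDK and netmap are attractive at these
rates.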
I just hope someone can fix the network stacks so they can actually handle
a 10 Mb/s DDoS attack. There is no reason 10 Mb/s of traffic should take
down a modern firewall. That works out to around 1 million clock cycles per
packet. What the heck is the network stack doing that it spends a million
cycles handling a single packet? /rant
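Rough arithmetic behind that kind of number (assuming a minimum-size-frame
flood and the 8-core 3.5 GHz box above; a sketch, not an exact figure):

    # A 10 Mb/s flood of minimum-size frames vs. an 8-core 3.5 GHz box.
    # "Wire bits" again includes preamble and inter-frame gap.
    FLOOD_BPS = 10e6
    WIRE_BITS = (64 + 8 + 12) * 8
    CORES, CPU_HZ = 8, 3.5e9

    pps = FLOOD_BPS / WIRE_BITS          # ~15 kpps
    budget = CORES * CPU_HZ / pps        # cycles available per packet
    print(f"{pps:,.0f} pps -> {budget/1e6:.1f} million cycles available "
          f"per packet across all cores")

About 15 kpps, with nearly two million cycles available per packet across
all cores; if the box still falls over, each packet is burning cycles on
the order of a million or more.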
On Fri, Feb 12, 2016 at 2:40 AM, Mikael Abrahamsson <swmike@swm.pp.se>
wrote:
> On Thu, 11 Feb 2016, Dave Täht wrote:
>
>> Someone asked me recently what I thought of the dpdk. I said:
>> "It's a great way to heat datacenters". Still, there's momentum it
>> seems, to move more stuff into userspace.
>>
>
> Especially now that Intel CPUs seem to be able to push a lot of PPS
> compared to what they could before. A lot more.
>
> What one has to take into account is that this tech is most likely going
> to be deployed on servers with 10GE NICs or even 25/40/100GE, and they are
> most likely going to be connected to a small buffer datacenter switch which
> will do FIFO on extremely small shared buffer memory (we're talking small
> fractions of a millisecond of buffer at 10GE speed), and usually lots of
> these servers will be behind oversubscribed interconnect links between
> switches.
>
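To put a number on "small fractions of a millisecond": assuming a
merchant-silicon switch with about 12 MB of shared packet buffer split
evenly across 48 ports at 10 Gb/s (my assumed figures, purely
illustrative; the exact numbers vary by chip):

    # Per-port buffering time for a small shared-buffer switch.
    # Assumes ~12 MB of shared buffer split evenly across 48 ports,
    # each draining at 10 Gb/s. Illustrative numbers only.
    SHARED_BUF_BYTES = 12e6
    PORTS = 48
    LINE_RATE_BPS = 10e9

    per_port = SHARED_BUF_BYTES / PORTS            # ~250 kB per port
    drain_ms = per_port * 8 / LINE_RATE_BPS * 1e3
    print(f"~{per_port/1e3:.0f} kB/port -> ~{drain_ms:.2f} ms "
          f"of buffering at 10 Gb/s")

So on the order of 0.2 ms of queue per port at 10 Gb/s, and since the pool
is shared, a few busy ports will see even less than that.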
> A completely different use case would of course be if someone started to
> create midrange enterprise routers with 1GE/10GE ports using this
> technology, then it would of course make a lot of sense to have proper AQM.
> I have no idea what kind of performance one can expect out of a low power
> Intel CPU that might fit into one of these...
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
Thread overview: 3+ messages
  2016-02-11 19:05 Dave Täht
  2016-02-12  8:40 ` Mikael Abrahamsson
  2016-02-12 21:09   ` Benjamin Cronce [this message]