From: Benjamin Cronce
To: Mikael Abrahamsson
Cc: Dave Täht, bloat
Date: Fri, 12 Feb 2016 15:09:08 -0600
Subject: Re: [Bloat] new "vector packet processing" based effort to speed up networking

Modern CPUs could push a lot of PPS, but they can't with current network stacks. Linux or FreeBSD on a modern 3.5 GHz eight-core Xeon can't push enough 64-byte packets to saturate a 100Mb link. pfSense 3.0 was looking to use DPDK to do line-rate 40Gb, but they are also looking at alternatives like netmap. pfSense 3.0 is also aiming to do line-rate 10Gb+ and eventually 40Gb VPN/IPsec, which DPDK would make viable. There's also talk of potentially scaling line rate all the way into the 80Gb range. That's with full stateful firewalling and NAT.

I just hope someone can fix the network stacks so they can actually handle a 10Mb/s DDoS attack. There is no reason 10Mb of traffic should take down a modern firewall. That works out to around 1 million clock cycles per packet. What the heck is the network stack doing, spending a million cycles trying to handle one packet? /rant

On Fri, Feb 12, 2016 at 2:40 AM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:

> On Thu, 11 Feb 2016, Dave Täht wrote:
>
>> Someone asked me recently what I thought of the dpdk. I said:
>> "It's a great way to heat datacenters". Still, there's momentum, it
>> seems, to move more stuff into userspace.
>
> Especially now that Intel CPUs seem to be able to push a lot of PPS
> compared to what they could before. A lot more.
>
> What one has to take into account is that this tech is most likely going
> to be deployed on servers with 10GE NICs or even 25/40/100GE, and they are
> most likely going to be connected to a small-buffer datacenter switch which
> will do FIFO on extremely small shared buffer memory (we're talking small
> fractions of a millisecond of buffer at 10GE speed), and usually lots of
> these servers will be behind oversubscribed interconnect links between
> switches.
>
> A completely different use case would of course be if someone started to
> create midrange enterprise routers with 1GE/10GE ports using this
> technology; then it would of course make a lot of sense to have proper AQM.
> I have no idea what kind of performance one can expect out of a low-power
> Intel CPU that might fit into one of these...
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
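The packet-rate and cycle-budget figures in the message above can be sanity-checked with some back-of-envelope arithmetic. This is a sketch, not a benchmark: the 3.5 GHz eight-core CPU is the one mentioned in the first paragraph, and whether you count the Ethernet preamble and inter-frame gap as wire overhead is an assumption that shifts the numbers by roughly 25%.

```python
# Back-of-envelope arithmetic for the pps and cycles-per-packet claims.
# Assumption: we count preamble + inter-frame gap as on-the-wire overhead.

ETH_OVERHEAD = 8 + 12            # Ethernet preamble + inter-frame gap, bytes
MIN_FRAME = 64                   # minimum Ethernet frame, bytes

def line_rate_pps(link_bps, frame_bytes=MIN_FRAME):
    """Packets per second at line rate, including wire-level framing overhead."""
    wire_bits = (frame_bytes + ETH_OVERHEAD) * 8   # 672 bits for a 64-byte frame
    return link_bps / wire_bits

# 64-byte packet rates at various link speeds
for name, bps in [("100Mb", 100e6), ("10Gb", 10e9), ("40Gb", 40e9)]:
    print(f"{name}: {line_rate_pps(bps):,.0f} pps")
# 100Mb ~ 148,810 pps; 10Gb ~ 14.9 Mpps; 40Gb ~ 59.5 Mpps

# Cycle budget for a 10Mb/s flood of minimum-size packets against a
# 3.5 GHz eight-core Xeon, counting every cycle on every core:
cycles_per_sec = 8 * 3.5e9
budget = cycles_per_sec / line_rate_pps(10e6)
print(f"{budget:,.0f} cycles available per packet")
# On the order of 1-2 million cycles per packet, the same ballpark as the
# "around 1 million clock cycles per packet" figure in the rant.
```

Note the implication: at ~1M cycles per packet, saturating even a 100Mb link with minimum-size packets (~150 kpps) would need far more cycles than the whole CPU has, which is consistent with the complaint that a small flood of tiny packets overwhelms the stack.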