* [Bloat] new "vector packet processing" based effort to speed up networking
@ 2016-02-11 19:05 Dave Täht
2016-02-12 8:40 ` Mikael Abrahamsson
0 siblings, 1 reply; 3+ messages in thread
From: Dave Täht @ 2016-02-11 19:05 UTC (permalink / raw)
To: bloat
Someone asked me recently what I thought of the dpdk. I said:
"It's a great way to heat datacenters". Still, there's momentum, it
seems, to move more stuff into userspace.
http://www.linuxfoundation.org/news-media/announcements/2016/02/linux-foundation-forms-open-source-effort-advance-io-services
Last I looked, dpdk had a RED implementation, and various folk had made
noises about trying for codel or pie in it - has anyone made any progress?
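For anyone sizing up the port: the heart of codel is tiny, which is part of why it seems feasible in a userspace fast path. A minimal sketch of the control law from RFC 8289 (illustrative only, not dpdk code; the constants are the RFC's suggested defaults):

```python
# Sketch of the CoDel control law (RFC 8289). Not a full implementation:
# real CoDel also tracks per-packet sojourn time and enters/exits a
# dropping state; this just shows how drop spacing is computed.
TARGET = 0.005    # 5 ms: acceptable standing-queue sojourn time
INTERVAL = 0.100  # 100 ms: window the queue may exceed TARGET before drops

def control_law(first_drop_time, drop_count):
    """Next drop time: INTERVAL / sqrt(count), so drops come faster
    the longer the queue stays above TARGET."""
    return first_drop_time + INTERVAL / (drop_count ** 0.5)

# Drop spacing shrinks as the count rises: 100 ms, 50 ms, 25 ms, ...
spacing = [control_law(0.0, n) for n in (1, 4, 16)]
```

The state per queue is a couple of timestamps and a counter, so the cost per packet is small; the hard part in a dpdk-style stack is getting an honest sojourn-time measurement when packets are processed in bursts.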
* Re: [Bloat] new "vector packet processing" based effort to speed up networking
2016-02-11 19:05 [Bloat] new "vector packet processing" based effort to speed up networking Dave Täht
@ 2016-02-12 8:40 ` Mikael Abrahamsson
2016-02-12 21:09 ` Benjamin Cronce
0 siblings, 1 reply; 3+ messages in thread
From: Mikael Abrahamsson @ 2016-02-12 8:40 UTC (permalink / raw)
To: Dave Täht; +Cc: bloat
On Thu, 11 Feb 2016, Dave Täht wrote:
> Someone asked me recently what I thought of the dpdk. I said:
> "It's a great way to heat datacenters". Still, there's momentum, it
> seems, to move more stuff into userspace.
Especially now that Intel CPUs seem to be able to push a lot of PPS
compared to what they could before. A lot more.
What one has to take into account is that this tech is most likely going
to be deployed on servers with 10GE NICs, or even 25/40/100GE. They are
most likely going to be connected to a small-buffer datacenter switch
which will do FIFO on extremely small shared buffer memory (we're talking
small fractions of a millisecond of buffer at 10GE speed), and usually
lots of these servers will be behind oversubscribed interconnect links
between switches.
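To put a rough number on "small fractions of a millisecond": assuming, hypothetically, a switch with a 12 MB shared packet buffer split evenly across 64 10GE ports (the real figures vary per chip and sharing policy), the back-of-the-envelope arithmetic is:

```python
# Illustrative buffer-depth arithmetic for a small-buffer DC switch.
# All numbers are assumptions for the sake of example, not a product spec.
shared_buffer_bytes = 12e6   # hypothetical 12 MB shared buffer
ports = 64                   # hypothetical 64 x 10GE ports
line_rate_bps = 10e9         # 10GE

per_port_bytes = shared_buffer_bytes / ports            # ~187.5 kB/port
per_port_seconds = per_port_bytes * 8 / line_rate_bps   # ~150 us

print(f"{per_port_seconds * 1e6:.0f} microseconds of buffer per port")
```

Dynamic buffer sharing lets a single congested port briefly grab more than its even share, but the point stands: at 10GE these switches hold on the order of 100-200 microseconds of queue, not milliseconds.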
A completely different use case would of course be if someone started to
create midrange enterprise routers with 1GE/10GE ports using this
technology; then it would make a lot of sense to have proper AQM. I have
no idea what kind of performance one can expect out of a low-power Intel
CPU that might fit into one of these...
--
Mikael Abrahamsson email: swmike@swm.pp.se
* Re: [Bloat] new "vector packet processing" based effort to speed up networking
2016-02-12 8:40 ` Mikael Abrahamsson
@ 2016-02-12 21:09 ` Benjamin Cronce
0 siblings, 0 replies; 3+ messages in thread
From: Benjamin Cronce @ 2016-02-12 21:09 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: Dave Täht, bloat
Modern CPUs could push a lot of PPS, but they can't with current network
stacks. Linux or FreeBSD on a modern 3.5 GHz octal-core Xeon can't push
enough 64-byte packets to saturate a 100Mb link. pfSense 3.0 was looking to
use dpdk to do line-rate 40Gb, but they are also looking at alternatives
like netmap. pfSense 3.0 is also aiming to do line-rate 10Gb+ and
eventually 40Gb VPN/IPsec, which dpdk would make viable. There's also talk
about potentially scaling line rate all the way into the 80Gb range. That's
full stateful firewalling and NAT.
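"Line rate 40Gb" is a brutal per-packet budget once you assume minimum-size frames. A quick sanity check (a 64-byte frame occupies 84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are counted):

```python
# Cycle budget per packet at line rate with minimum-size Ethernet frames.
# 64B frame + 8B preamble + 12B inter-frame gap = 84B on the wire.
WIRE_BYTES = 64 + 8 + 12

def line_rate_pps(line_rate_bps):
    """Packets per second at line rate with minimum-size frames."""
    return line_rate_bps / (WIRE_BYTES * 8)

def cycles_per_packet(clock_hz, cores, line_rate_bps):
    """Total CPU cycles available per packet across all cores."""
    return clock_hz * cores / line_rate_pps(line_rate_bps)

# A 3.5 GHz octal core at 40GE: ~59.5 Mpps, so ~470 cycles per packet
# across all eight cores combined -- roughly one cache miss's worth each.
budget_40g = cycles_per_packet(3.5e9, 8, 40e9)
```

That budget is why kernel-bypass stacks lean on batching, polling, and prefetching: a couple of DRAM misses per packet already blows it.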
I just hope someone can fix the network stacks so they can actually handle
a 10 Mb/s DDoS attack. There is no reason 10 Mb/s of traffic should take
down a modern firewall. That works out to around 1 million clock cycles
per packet. What the heck is the network stack doing to spend a million
cycles trying to handle a packet? /rant
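The "around 1 million cycles" figure checks out as an order of magnitude. Assuming, hypothetically, the flood is minimum-size frames and the same 3.5 GHz octal-core box from above:

```python
# How many cycles per packet does a 10 Mb/s minimum-size-frame flood
# allow before the box falls behind? Illustrative arithmetic only.
WIRE_BITS = (64 + 8 + 12) * 8       # 672 bits per minimum frame on the wire

flood_pps = 10e6 / WIRE_BITS        # ~14.9 kpps at 10 Mb/s
per_core = 3.5e9 / flood_pps        # ~235k cycles/packet on one core
all_cores = per_core * 8            # ~1.9M cycles/packet across eight cores
```

So a box that chokes on this flood is burning somewhere between ~235k cycles (one core) and ~1.9M cycles (all cores) per packet, which brackets the million-cycle figure. Most of that typically goes to per-packet state lookups, locking, and cache misses rather than to the firewall rules themselves.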
On Fri, Feb 12, 2016 at 2:40 AM, Mikael Abrahamsson <swmike@swm.pp.se>
wrote:
> On Thu, 11 Feb 2016, Dave Täht wrote:
>
>> Someone asked me recently what I thought of the dpdk. I said:
>> "It's a great way to heat datacenters". Still, there's momentum, it
>> seems, to move more stuff into userspace.
>>
>
> Especially now that Intel CPUs seem to be able to push a lot of PPS
> compared to what they could before. A lot more.
>
> What one has to take into account is that this tech is most likely going
> to be deployed on servers with 10GE NICs, or even 25/40/100GE. They are
> most likely going to be connected to a small-buffer datacenter switch which
> will do FIFO on extremely small shared buffer memory (we're talking small
> fractions of a millisecond of buffer at 10GE speed), and usually lots of
> these servers will be behind oversubscribed interconnect links between
> switches.
>
> A completely different use case would of course be if someone started to
> create midrange enterprise routers with 1GE/10GE ports using this
> technology; then it would make a lot of sense to have proper AQM. I have
> no idea what kind of performance one can expect out of a low-power Intel
> CPU that might fit into one of these...
>
> --
> Mikael Abrahamsson email: swmike@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>