<div dir="ltr">Modern CPUs could push a lot of PPS, but they can't with current network stacks. Linux or FreeBSD on a modern 3.5ghz octal core Xeon can't push enough 64 byte packets to saturate a 100Mb link. PFSense 3.0 was looking to use <span style="font-size:12.8px">dpdk to do line rate 40Gb, but they are also looking at alternatives like netmap. PFSense 3.0 is also aiming to do line rate 10Gb+ and eventually 40Gb VPN/IPSec, which </span><span style="font-size:12.8px">dpdk would make viable. There's also talk about potentially scaling line rate all the way into the 80Gb range. That's full stateful firewalling and NAT.</span><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">I just hope someone can fix the network stacks so they can actually handle a 10Mb/s DDOS attacks. There is no reason 10Mb of traffic should take down a modern firewall. Turns out to be around 1 million clock cycles per packet. What the heck is the network stack doing to spend 1mil cycles trying to handle a packet? /rant</span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Feb 12, 2016 at 2:40 AM, Mikael Abrahamsson <span dir="ltr"><<a href="mailto:swmike@swm.pp.se" target="_blank">swmike@swm.pp.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Thu, 11 Feb 2016, Dave Täht wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Someone asked me recently what I thought of the dpdk. I said:<br>
"It's a great way to heat datacenters". Still, there's momentum it<br>
seems, to move more stuff into userspace.<br>
</blockquote>
<br></span>
Especially now that Intel CPUs seem to be able to push a lot of PPS compared to what they could before. A lot more.<br>
<br>
What one has to take into account is that this tech will most likely be deployed on servers with 10GE NICs, or even 25/40/100GE. Those servers will most likely be connected to small-buffer datacenter switches doing FIFO on extremely small shared buffer memory (we're talking small fractions of a millisecond of buffer at 10GE speed), and lots of these servers will usually sit behind oversubscribed interconnect links between switches.<br>
<br>
A completely different use case would of course be if someone started to create midrange enterprise routers with 1GE/10GE ports using this technology; then it would of course make a lot of sense to have proper AQM. I have no idea what kind of performance one can expect out of a low-power Intel CPU that might fit into one of these...<span class="HOEnZb"><font color="#888888"><br>
<br>
-- <br>
Mikael Abrahamsson email: <a href="mailto:swmike@swm.pp.se" target="_blank">swmike@swm.pp.se</a></font></span><br>_______________________________________________<br>
Bloat mailing list<br>
<a href="mailto:Bloat@lists.bufferbloat.net">Bloat@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/bloat" rel="noreferrer" target="_blank">https://lists.bufferbloat.net/listinfo/bloat</a><br>
<br></blockquote></div><br></div>
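P.S. The "around 1 million clock cycles per packet" figure in my rant above is easy to sanity-check. A minimal sketch, assuming minimum-size 64-byte Ethernet frames (84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are counted) and all eight cores of a 3.5 GHz Xeon; both figures are assumptions, not stated in the message:

```python
# Sanity check: cycles available per packet when a 10 Mb/s flood of
# minimum-size frames saturates an octa-core 3.5 GHz CPU.
# Frame-size and core-count assumptions are mine, for illustration.

LINK_BPS = 10e6                # the 10 Mb/s flood from the rant
WIRE_BITS = (64 + 8 + 12) * 8  # 672 bits per minimum-size frame on the wire
CORES = 8                      # octa-core Xeon
CLOCK_HZ = 3.5e9               # 3.5 GHz

pps = LINK_BPS / WIRE_BITS               # ~14,880 packets per second
cycles_per_pkt = CORES * CLOCK_HZ / pps  # ~1.9 million cycles
print(f"{pps:,.0f} pps -> {cycles_per_pkt:,.0f} cycles per packet")
# prints: 14,881 pps -> 1,881,600 cycles per packet
```

Counting only a single core instead of eight drops the figure to roughly 235,000 cycles per packet, so the original estimate is in the right order of magnitude either way.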
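Mikael's "small fractions of a millisecond of buffer at 10GE speed" can likewise be put into numbers. The per-port buffer share below is a hypothetical figure chosen for illustration; shared-buffer switch ASICs vary widely:

```python
# How long a small shared-buffer switch can absorb a burst on one port.
# BUFFER_BYTES is an assumed per-port share, not a figure from the thread.

PORT_BPS = 10e9            # one 10GE port
BUFFER_BYTES = 128 * 1024  # assume ~128 KiB of shared buffer for this port

drain_time_s = BUFFER_BYTES * 8 / PORT_BPS
print(f"{drain_time_s * 1e6:.0f} microseconds of buffering")
# prints: 105 microseconds of buffering
```

About a tenth of a millisecond, which is why AQM on the server side matters more than anything the switch itself can do.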