Historic archive of defunct list bloat-devel@lists.bufferbloat.net
* PF_ring and friends: Options for making Linux suck less when capturing packets
@ 2011-10-19 16:44 Dave Taht
  2011-10-19 16:52 ` Stephen Hemminger
  2011-10-19 17:02 ` Jim Gettys
  0 siblings, 2 replies; 5+ messages in thread
From: Dave Taht @ 2011-10-19 16:44 UTC (permalink / raw)
  To: bloat-devel, Rick Jones


Currently I can do tcpdump -i eth1 -s 200 -w /some/usb/stick.cap at about
1.2 - 2 MB/sec before saturating the CPU on the wndr3700v2. (MB = megabyte)

I can read/write a usb stick at about 8/7 MB/sec. I haven't tried a 'real'
hard disk.

About 50 Mbit/sec I figure covers the 95th percentile of most home users to
their ISP. 100 Mbit would be better. Being drop-free would be really helpful
on shorter tests....
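
(For scale: 50 Mbit/sec is 6.25 MB/sec, which fits under the ~7 MB/sec the
stick can write but is well above the ~2 MB/sec tcpdump manages now;
100 Mbit/sec is 12.5 MB/sec, which the stick couldn't keep up with anyway.)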

I was also thinking about an in-kernel module that uses 'splice' to send the
data to a file... as well as the current JIT work for BPF, using netfilter,
and various other alternatives.
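
Roughly the splice pattern I have in mind (untested, and assuming the
capture fd supports splice_read() at all, which would need checking for
AF_PACKET sockets); it is just the usual fd -> pipe -> file relay:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Relay bytes from a capture fd to an output file via a pipe, so the
 * payload never passes through a userspace buffer.  Purely a sketch. */
static int relay(int cap_fd, int out_fd)
{
    int p[2];
    ssize_t n, m;

    if (pipe(p) < 0)
        return -1;

    for (;;) {
        /* pull up to 64 KB from the capture fd into the pipe */
        n = splice(cap_fd, NULL, p[1], NULL, 65536, SPLICE_F_MOVE);
        if (n <= 0)
            break;
        /* push the same bytes from the pipe out to the file */
        while (n > 0) {
            m = splice(p[0], NULL, out_fd, NULL, n, SPLICE_F_MOVE);
            if (m <= 0)
                return -1;
            n -= m;
        }
    }
    close(p[0]);
    close(p[1]);
    return n < 0 ? -1 : 0;
}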

Or writing something in an iptables or tc filter to track things more sanely
than web100 does....

Ideas?

---------- Forwarded message ----------
From: Fabian Schneider <fabian@ieee.org>
Date: Wed, Oct 19, 2011 at 6:03 PM
Subject: Options for making Linux suck less when capturing packets
To: Dave Täht <dave.taht@gmail.com>
Cc: Ahlem Reggani



Hi Dave,

As promised, here are some pointers.

- http://www.ntop.org/products/pf_ring/

- And I think that libpcap since version 1.0 has built-in support for memory
mapping, which was proposed by Phil Woods [1] (a minimal sketch of the
application side follows below this list).

- It might be worthwhile to check if the NIC supports any sort of interrupt
coalescing or polling, instead of the standard one interrupt per packet.

- I have to search a bit more for the code of my student (I changed
employers twice since then).
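
To make the mmap point concrete, here is a short sketch of the application
side, assuming libpcap >= 1.0 on Linux, where pcap_create()/pcap_activate()
pick the memory-mapped PACKET_RX_RING path automatically; the device name,
snaplen and ring size below are just placeholders:

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p;
    pcap_dumper_t *d;

    p = pcap_create("eth1", errbuf);              /* placeholder device  */
    if (!p) {
        fprintf(stderr, "%s\n", errbuf);
        return 1;
    }
    pcap_set_snaplen(p, 200);                     /* like tcpdump -s 200 */
    pcap_set_promisc(p, 1);
    pcap_set_timeout(p, 1000);                    /* ms                  */
    pcap_set_buffer_size(p, 4 * 1024 * 1024);     /* bigger mmap ring    */
    if (pcap_activate(p) < 0) {
        fprintf(stderr, "activate: %s\n", pcap_geterr(p));
        return 1;
    }

    d = pcap_dump_open(p, "/some/usb/stick.cap");
    if (!d) {
        fprintf(stderr, "dump_open: %s\n", pcap_geterr(p));
        return 1;
    }

    /* pcap_dump() is the stock handler that writes each packet to the
     * savefile; -1 means loop until an error or pcap_breakloop(). */
    pcap_loop(p, -1, pcap_dump, (u_char *)d);

    pcap_dump_close(d);
    pcap_close(p);
    return 0;
}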

best
Fabian

[1] http://public.lanl.gov/cpw/ramblings.html



-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net



* Re: PF_ring and friends: Options for making Linux suck less when capturing packets
  2011-10-19 16:44 PF_ring and friends: Options for making Linux suck less when capturing packets Dave Taht
@ 2011-10-19 16:52 ` Stephen Hemminger
  2011-10-19 17:23   ` Hal Murray
  2011-10-19 17:02 ` Jim Gettys
  1 sibling, 1 reply; 5+ messages in thread
From: Stephen Hemminger @ 2011-10-19 16:52 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat-devel

On Wed, 19 Oct 2011 18:44:08 +0200
Dave Taht <dave.taht@gmail.com> wrote:

> Currently I can do tcpdump -i eth1 -s 200 -w /some/usb/stick.cap at about
> 1.2 - 2MB/sec before saturating cpu on the wndr3700v2. (MB =megabyte)
> 
> I can read/write a usb stick at about 8/7 MB/sec. I haven't tried a 'real'
> hard disk.
> 
> About 50 Mbit/sec I figure covers the 95th percentile of most home users to
> their ISP. 100 Mbit would be better. Being drop-free would be really helpful
> on shorter tests....
> 
> I was also thinking about an in-kernel module that uses 'splice' to send the
> data to a file... as well as the current JIT work for BPF, using netfilter,
> and various other alternatives.
> 
> Or writing something in an iptables or tc filter to track things more sanely
> than web100 does....
> 
> Ideas?

USB sticks are really slow. Even an infinitely fast capture path isn't going
to get around that.  Get a real SSD and put it in an enclosure that supports
USB 3.0?


* Re: PF_ring and friends: Options for making Linux suck less when capturing packets
  2011-10-19 16:44 PF_ring and friends: Options for making Linux suck less when capturing packets Dave Taht
  2011-10-19 16:52 ` Stephen Hemminger
@ 2011-10-19 17:02 ` Jim Gettys
  1 sibling, 0 replies; 5+ messages in thread
From: Jim Gettys @ 2011-10-19 17:02 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat-devel

On 10/19/2011 12:44 PM, Dave Taht wrote:
>
> Currently I can do tcpdump -i eth1 -s 200 -w /some/usb/stick.cap at
> about 1.2 - 2MB/sec before saturating cpu on the wndr3700v2. (MB
> =megabyte)
>
> I can read/write a usb stick at about 8/7 MB/sec. I haven't tried a 'real'
> hard disk.
>
> About 50 Mbit/sec I figure covers the 95th percentile of most home users
> to their ISP. 100 Mbit would be better. Being drop-free would be really
> helpful on shorter tests....
>
> I was also thinking about an in-kernel module that uses 'splice' to
> send the data to a file... as well as the current JIT work for BPF,
> using netfilter, and various other alternatives.

There may be faster USB sticks as well.  Certainly, SD cards vary hugely
in write performance, and you can buy far faster ones than the "cheap"
ones you usually get.

For example:
http://www.amazon.com/Patriot-Xporter-Speed-Flash-PEF32GRUSB/dp/B003WUX6RO/ref=sr_1_1?ie=UTF8&qid=1319043694&sr=8-1

is much, much faster than most USB sticks (though most HDDs will still
outperform it).
                        - Jim



* Re: PF_ring and friends: Options for making Linux suck less when capturing packets
  2011-10-19 16:52 ` Stephen Hemminger
@ 2011-10-19 17:23   ` Hal Murray
  2011-11-05 11:30     ` Petri Rosenström
  0 siblings, 1 reply; 5+ messages in thread
From: Hal Murray @ 2011-10-19 17:23 UTC (permalink / raw)
  To: bloat-devel


> USB sticks are really slow. Even an infinitely fast capture path isn't going to
> get around that.  Get a real SSD and put it in an enclosure that supports USB
> 3.0?

I don't think an SSD is necessary.

100 megabits/sec is 12.5 megabytes/sec.

I just did a quick test.  I can read 19-20 megabytes/sec from a rotating disk 
over USB.

That's the raw hardware.  I don't know how much overhead the file system adds.



-- 
These are my opinions, not necessarily my employer's.  I hate spam.





* Re: PF_ring and friends: Options for making Linux suck less when capturing packets
  2011-10-19 17:23   ` Hal Murray
@ 2011-11-05 11:30     ` Petri Rosenström
  0 siblings, 0 replies; 5+ messages in thread
From: Petri Rosenström @ 2011-11-05 11:30 UTC (permalink / raw)
  To: bloat-devel

Hi,

I tried something like this with a wndr3800. I connected a USB-powered
hdd (I didn't test it for speed, but if memory serves it does about
20 megabytes/sec).

Test 1.
Send some small (40 byte) packets to the router (internal network ->
router, 100 pkts/s).
Run tcpdump -i eth1 -s 200 -w /some/usb/hdd.cap
Result 1.
It fills memory at about 20 kb/s. CPU usage is about 100% from the start.

Test 2.
Send some small (40 byte) packets to the router (internal network ->
router, 20 pkts/s).
Run tcpdump -i eth1 -s 200 -w /some/usb/hdd.cap
Result 2.
CPU usage is about 80%. No noticeable memory consumption.

And of course, if the -s option to tcpdump is skipped, there are no
issues with these network loads.

Best regards
Petri Rosenström

