oprofiling is much saner looking now with rc6-smoketest

Rick Jones rick.jones2 at hp.com
Tue Aug 30 18:01:17 PDT 2011


On 08/30/2011 05:32 PM, Dave Taht wrote:
> I get about 190Mbit/sec from netperf now, on GigE, with oprofiling
> enabled, driver buffers of 4, txqueue of 8, cerowrt default iptables
> rules, AND web10g patched into kernel 3.0.3.
>
> This is much saner than rc3, and judging from the csum_partial and
> copy_user being roughly equal, there isn't much left to be gained...
>
> Nice work.
>
> (Without oprofiling, without web10g, and with TCP CUBIC I can get
> past 250Mbit.)
>
>
> CPU: MIPS 24K, speed 0 MHz (estimated)
> Counted INSTRUCTIONS events (Instructions completed) with a unit mask
> of 0x00 (No unit mask) count 100000
> samples  %        app name                 symbol name
> -------------------------------------------------------------------------------
> 17277    13.8798  vmlinux                  csum_partial
>    17277    100.000  vmlinux                  csum_partial [self]
> -------------------------------------------------------------------------------
> 16607    13.3415  vmlinux                  __copy_user
>    16607    100.000  vmlinux                  __copy_user [self]
> -------------------------------------------------------------------------------
> 11913     9.5705  ip_tables                /ip_tables
>    11913    100.000  ip_tables                /ip_tables [self]
> -------------------------------------------------------------------------------
> 8949      7.1893  nf_conntrack             /nf_conntrack
>    8949     100.000  nf_conntrack             /nf_conntrack [self]
>
> In this case I was going from laptop - GigE - through another
> rc6-smoketest router - to_this_box's internal LAN port.
>
> It bugs me that iptables and conntrack eat so much CPU for what
> is an internal-only connection, i.e. one that
> doesn't need conntracking.

The csum_partial is a bit surprising - I thought every NIC and its dog 
offered CKO (checksum offload) these days - or is that something 
happening in ip_tables/conntrack?  I also thought that Linux used an 
integrated copy/checksum in at least one direction, or did that go away 
when CKO became prevalent?
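
Purely as illustration, here is a user-space sketch (mine, not the 
kernel's code) of the one-pass copy-plus-checksum idea behind the 
csum_and_copy_* routines - a 16-bit ones' complement sum folded into 
the copy loop.  The kernel's MIPS versions are hand-tuned assembly; 
this is just the shape of the work:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* One pass over the data doing both the copy and the 16-bit ones'
 * complement sum, in the spirit of the kernel's csum_and_copy_*
 * helpers.  Simplified sketch: byte-at-a-time, no alignment or
 * unrolling tricks. */
static uint16_t copy_and_csum(uint8_t *dst, const uint8_t *src, size_t len)
{
    uint32_t sum = 0;
    size_t i;

    for (i = 0; i < len; i++) {
        dst[i] = src[i];    /* the copy ... */
        /* ... and the sum, treating the stream as big-endian
         * 16-bit words (an odd trailing byte pads with zero) */
        sum += (i & 1) ? (uint32_t)src[i] : (uint32_t)src[i] << 8;
    }
    while (sum >> 16)       /* fold the carries back into 16 bits */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

int main(void)
{
    const char *msg = "a small payload";
    uint8_t dst[64];

    printf("csum: 0x%04x\n",
           copy_and_csum(dst, (const uint8_t *)msg, strlen(msg)));
    return 0;
}

With working CKO the NIC does that sum and the kernel skips it 
entirely - which is why csum_partial sitting at the top of the profile 
suggests the offload is not in play on this path.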

If this is inbound, and there is just plain checksumming and not 
anything funny from conntrack, I would have expected checksum to be much 
larger than copy.  Checksum (in the inbound direction) will take the 
cache misses and the copy would not.  Unless... the data cache of the 
processor is getting completely trashed - say from the netserver running 
on the router not fully keeping up with the inbound data, so that the 
copy happens "far away" (in time) from the checksum verification.

Does perf/perf_events (whatever the follow-on to perfmon2 is called) 
have support for the CPU used in the device?  (Assuming it even has a 
PMU to be queried in the first place.)
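
One quick way to check from user space - assuming the kernel was built 
with perf events and has a PMU driver for the 24K - is a small probe 
against the perf_event_open() syscall.  The syscall and event constants 
below are the real interface; the probe itself is just my sketch.  If 
the open fails, the kernel/PMU combination can't count hardware 
instructions:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdint.h>

/* Thin wrapper - glibc does not provide a perf_event_open() stub. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    uint64_t count;
    int fd;

    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_INSTRUCTIONS; /* same event oprofile counted */
    attr.disabled = 1;

    fd = perf_event_open(&attr, 0, -1, -1, 0); /* this process, any CPU */
    if (fd < 0) {
        perror("perf_event_open (no PMU support?)");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    for (volatile int i = 0; i < 1000000; i++)
        ;                                      /* something to count */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    if (read(fd, &count, sizeof(count)) == sizeof(count))
        printf("instructions: %llu\n", (unsigned long long)count);
    close(fd);
    return 0;
}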

> That said, I understand that people like their statistics, and me,
> I'm trying to make split-tcp work better, ultimately, one day....
>
> I'm going to rerun this without the fw rules next.

It would be interesting to see if the csum time goes away.  Long ago and 
far away, when I was beating on a 32-core system with aggregate netperf 
TCP_RR and toggling FW rules on and off, conntrack had a decidedly 
non-trivial effect on performance.

http://markmail.org/message/exjtzel7vq2ugt66#query:netdev%20conntrack%20rick%20jones%2032%20netperf+page:1+mid:s5v5kylvmlfrpb7a+state:results

I think that will get to the start of that thread.  The subject is '32 
core net-next stack/netfilter "scaling"'.

rick jones

