From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 30 Aug 2011 20:28:28 -0700
Subject: Re: oprofiling is much saner looking now with rc6-smoketest
From: Dave Taht
To: Rick Jones
Cc: bloat-devel <bloat-devel@lists.bufferbloat.net>
List-Id: "Developers working on AQM, device drivers, and networking stacks"

I took a little more time out to play with netperf at these extreme
performance values, while puzzled about the performance knee observed
midway through the previous tests. The three test runs this evening
(and captures!)
are up at: http://huchra.bufferbloat.net/~d/rc6-smoke-captures/

For test 3, I rebooted the router into its default tx ring (64), and
set a txqueuelen of 128, running cubic... Measured throughput was
mildly better at 229Mbit (admittedly on a fresh boot, with oprofile not
even loaded), and we didn't have a drop-off at all, so I'm still
chasing that...

What I found interesting was the 10-second periodicity of the
drop-offs. My assumption is that this is a timer being fired from
somewhere (netperf?) that blocks the transmission...

http://huchra.bufferbloat.net/~d/rc6-smoke-captures/txqueuelen128and10seconddropcycle.png

Test 4 will repeat the above sans oprofile, with the current default
cerowrt settings for dma tx (4) and txqueuelen 8 - if I get to it
tonight.

On Tue, Aug 30, 2011 at 6:58 PM, Dave Taht wrote:
> I have put the current rc6 smoketest up at:
>
> http://huchra.bufferbloat.net/~cero1/rc6-smoketest/
>
> So far it's proving very stable. Wireless performance is excellent and
> wired performance dramatically improved. No crash bugs thus far,
> though I had a scare...
>
> For the final rc6, which I hope to have done by Friday, I'm in the
> process of cleanly re-assembling the patch set (sorry, the sources are
> a bit of a mess at present). For this rc, I'm hoping that a new
> iptables lands, in particular, and I have numerous other little things
> in the queue to sort out.
>
> All that said, getting oprofile running is not hard, and I do
> appreciate smoke testers helping out, as I don't think I'll be able
> to get another release candidate done before Linux Plumbers.
>
> install the correct image on your router from the above via web
> interface or sysupgrade -n
> reboot
> edit /etc/opkg.conf to have that url in it
>
> opkg update
> opkg install oprofile
> cd /tmp
> mkdir /tmp/oprofile
> wget http://huchra.bufferbloat.net/~d/rc6-smoke-captures/vmlinux
> opcontrol --vmlinux=/tmp/vmlinux --session-dir=/tmp/oprofile (saving
> profile data to flash is a bad idea)
>
> opcontrol --start
> # do your testing
> opcontrol --dump
>
> opreport -c # or whatever options you like.
>
>
> On Tue, Aug 30, 2011 at 6:45 PM, Dave Taht wrote:
>> On Tue, Aug 30, 2011 at 6:01 PM, Rick Jones wrote:
>>> On 08/30/2011 05:32 PM, Dave Taht wrote:
>>
>>>> It bugs me that iptables and conntrack eat so much cpu for what
>>>> is an internal-only connection, e.g. one that
>>>> doesn't need conntracking.
>>>
>>> The csum_partial is a bit surprising - I thought every NIC and its dog
>>> offered CKO these days - or is that something happening with
>>> ip_tables/conntrack?
>>
>> If this chipset supports it, so far as I know, it isn't documented or
>> implemented.
>>
>>> I also thought that Linux used an integrated
>>> copy/checksum in at least one direction, or did that go away when CKO became
>>> prevalent?
>>
>> Don't know.
>>
>>> If this is inbound, and there is just plain checksumming and not anything
>>> funny from conntrack, I would have expected checksum to be much larger than
>>> copy. Checksum (in the inbound direction) will take the cache misses and
>>> the copy would not. Unless... the data cache of the processor is getting
>>> completely trashed - say from the netserver running on the router not
>>> keeping up with the inbound data fully and so the copy gets "far away" from
>>> the checksum verification.
>>
>> 220Mbit isn't good enough for ya? Previous tests ran at about 140Mbit,
>> before some major optimizations by Felix fixed a bunch of mis-alignment
>> issues.
Through the router, I've seen 260Mbit - which is perilously
>> close to the speed that I can drive it at from the test boxes.
>>
>>> Does perf/perf_events (whatever the follow-on to perfmon2 is called) have
>>> support for the CPU used in the device? (Assuming it even has a PMU to be
>>> queried in the first place)
>>
>> Yes. Don't think it's enabled. It is running flat out, according to top.
>>
>>>> That said, I understand that people like their statistics, and me,
>>>> I'm trying to make split-tcp work better, ultimately, one day....
>>>>
>>>> I'm going to rerun this without the fw rules next.
>>>
>>> It would be interesting to see if the csum time goes away. Long ago and far
>>> away when I was beating on a 32-core system with aggregate netperf TCP_RR
>>> and enabling or not FW rules, conntrack had a non-trivial effect indeed on
>>> performance.
>>
>> Stays about the same. iptables time drops. How to disable conntrack?
>> Don't you only really need it for NAT?
>>
>>> http://markmail.org/message/exjtzel7vq2ugt66#query:netdev%20conntrack%20rick%20jones%2032%20netperf+page:1+mid:s5v5kylvmlfrpb7a+state:results
>>>
>>> I think that will get to the start of that thread. The subject is '32 core
>>> net-next stack/netfilter "scaling"'
>>>
>>> rick jones

-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://the-edge.blogspot.com
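P.S. On "how to disable conntrack" for internal-only traffic: one way
that should work on a stock iptables is the raw table's NOTRACK target,
which marks packets so they bypass connection tracking entirely. A
minimal, untested sketch - the br-lan interface name and the
192.168.1.0/24 subnet are assumptions for a typical cerowrt LAN, so
substitute your own:

```shell
# Bypass conntrack for LAN-local flows via the raw table, so that
# internal netperf runs don't pay the connection-tracking cost.
# br-lan / 192.168.1.0/24 are placeholders - adjust for your router.
iptables -t raw -A PREROUTING -i br-lan -s 192.168.1.0/24 -d 192.168.1.0/24 -j NOTRACK
iptables -t raw -A OUTPUT -d 192.168.1.0/24 -j NOTRACK

# Check that the rules are in place and counting packets:
iptables -t raw -L -n -v
```

Note this only skips tracking for the matched flows; NAT'd traffic to
the WAN still needs conntrack, so don't apply it to the upstream
interface.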