[Cerowrt-devel] speeding up builds
Dave Taht
dave.taht at gmail.com
Sun Apr 29 22:52:56 EDT 2012
On Sun, Apr 29, 2012 at 7:42 PM, Dave Taht <dave.taht at gmail.com> wrote:
> On Sun, Apr 29, 2012 at 7:24 PM, Dave Taht <dave.taht at gmail.com> wrote:
>> On Sun, Apr 29, 2012 at 6:59 PM, Dave Taht <dave.taht at gmail.com> wrote:
>>> On Sun, Apr 29, 2012 at 6:42 PM, Outback Dingo <outbackdingo at gmail.com> wrote:
>>>> On Sun, Apr 29, 2012 at 8:15 PM, Dave Taht <dave.taht at gmail.com> wrote:
>>>>> I finally acquired a machine with 32GB of ram, an intel 3930k (6
>>>>> cores), and an SSD.
>>>>>
>>>>> I put the build_dir, /tmp and /var/tmp on ramdisks, and...
>>>>>
>>>>> This cut a complete cerowrt build (including toolchain) from over
>>>>> 3.5 hrs down to under 45 minutes.
>>>>>
>>>>> Without the toolchain rebuild, but after a make clean (to rebuild the
>>>>> packages and kernel), it's about 28 minutes.
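The ramdisk setup, for anyone wanting to reproduce it, is just tmpfs
mounts along these lines (sizes and the build tree path here are
illustrative, not exactly what I used):

  # scratch space on RAM-backed tmpfs; writes never touch the disk
  sudo mount -t tmpfs -o size=8G  tmpfs /tmp
  sudo mount -t tmpfs -o size=8G  tmpfs /var/tmp
  sudo mount -t tmpfs -o size=16G tmpfs ~/cerowrt/build_dir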
>>>>>
>>>>> I can see that it is possible to parallelize things more to maybe chop
>>>>> another 30% off of things...
>>>>> ...but I'm glad to have 3 hrs of my life back, per build.
>>>>>
>>>>> I wanted to figure out to what extent modern hardware would enhance
>>>>> the existing buildbot system.
>>>>> Now I know...
>>>>
>>>> odd, my laptop will do a full build with toolchain in about an
>>>> hour... it's only a Core i3 with 6GB and an SSD
>>>
>>> The best box that I had was huchra, a dual quad-core xeon circa 2006,
>>> with 8GB of memory and mirrored drives.
>>>
>>> A 'full build' of cero is 578 packages, some of which are rather big,
>>> as well as building the sdk and cross development kit.
>>>
>>> For comparison purposes, I just built linux-3.3.4 for ubuntu (so this
>>> includes the kpkg overhead)
>>>
>>> real 11m12.286s
>>> user 67m11.076s
>>> sys 7m19.955s
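The kpkg overhead there is make-kpkg (from Debian's kernel-package)
wrapping the build up into .debs; the invocation is roughly this, with
the revision string being whatever you like:

  # parallelism for make-kpkg comes from CONCURRENCY_LEVEL, not -j
  export CONCURRENCY_LEVEL=12
  make-kpkg clean
  make-kpkg --initrd --revision=1.0 kernel_image kernel_headers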
>>>
>>> I am puzzled. I end up with only 75MB for disk buffers, according to
>>> top, and I would have expected something like 25% of memory in this
>>> case to go to disk buffers.
>>>
>>> I do like using ramdisks for this job (why write to media unless you
>>> have to?), but it seems saner to let the disk cache do the caching.
>>
>> Ah. I assume that 'cached' here means disk buffers. Maybe.
>>
>>              total       used       free     shared    buffers     cached
>> Mem:      32927452   28799604    4127848          0      75600   25122928
>> -/+ buffers/cache:    3601076   29326376
>> Swap:     33529852    1527668   32002184
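The arithmetic on the -/+ buffers/cache line checks out:
28799604 used - 75600 buffers - 25122928 cached = 3601076 KB actually
held by processes, so nearly all of the "used" memory really is cache
the kernel can drop on demand.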
>>
>> Believe me, after doing the number of builds I've done this year, and
>> especially in the past two months, finding ways to shave even a few
>> minutes more off the build(s) would be a godsend.
>>
>> This particular box can take 64GB of RAM, and going to that would
>> add two channels to the memory controller, assuming I plugged the
>> RAM into the wrong slots to begin with...
>>
>> anyway, a pure kernel build (no kpkg):
>>
>> time make -j 24
>>
>> real 7m33.494s
>> user 73m3.146s
>> sys 6m31.648s
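The -j 24 above is just 2x the 12 hardware threads on this box (6
cores, hyperthreaded); scaling the job count off nproc instead of
hardcoding it is the portable version:

  # one or two jobs per hardware thread is the usual rule of thumb
  time make -j$(nproc)             # 12 jobs on this box
  time make -j$(( $(nproc) * 2 ))  # 24 jobs, as used above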
>>
>> I see from the phoronix benchmarks that they claim a box of this
>> caliber can do a kernel build in under 60sec, but I doubt they are
>> using a kernel of this size.
>>
>> I've tossed the kernel .deb files, kernel config, script to make it a
>> deb, and patches here:
>>
>> http://huchra.bufferbloat.net/~d/debloat/
>>
>> (note - TOTALLY untested on x86_64 as yet -)
>>
>> I'd gotten out of the habit of maintaining debloat-testing mostly
>> because doing a kernel build was taking so bloody long.
>
> And I just did a build right to the ssd, no ramdisk...
>
> real 7m41.516s
> user 71m6.395s
> sys 6m24.132s
>
> So it looks like, at least at present, with an SSD, I/O is not the
> bottleneck... Now, from a buildbot perspective I'd really rather not
> burn writes on an SSD but use up RAM instead, although I'm told
> they've gotten better on that front.
>
> I still dream of 60s kernel builds, though... hah. The phoronix build
> test is available to all...
>
> /me has cpu cycles to burn and is working on something else
Phoronix's test suite is pretty neat: 58 seconds for a build with
their kernel config.

phoronix-test-suite benchmark build-linux-kernel

http://openbenchmarking.org/result/1204292-BY-SNAPON08835

So I'm no longer concerned that maybe I goofed on this box. I didn't
realize, going in, just how fast the SSD could be, though.

I could probably do a bit better with a dual-socket xeon gulftown, or
maybe with a 48-core opteron, but that drives the price way up.

I used to use things like ccache and distcc for stuff like this, too,
but not for embedded; having to have a shared filesystem usually
killed things.
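For the curious, the usual (non-embedded) setup for those is along
these lines; the hostnames are illustrative:

  # ccache: Debian/Ubuntu ship compiler masquerade links in /usr/lib/ccache
  export PATH=/usr/lib/ccache:$PATH
  # distcc: farm compile jobs out to other machines
  export DISTCC_HOSTS="localhost buildbox1 buildbox2"
  make -j24 CC="distcc gcc"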
--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://www.bufferbloat.net