From: David Collier-Brown <davec-b@rogers.com>
To: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Hardware upticks
Date: Tue, 5 Jan 2016 15:13:06 -0500
Message-ID: <568C23D2.8060708@rogers.com>
In-Reply-To: <568C1B5E.1070008@taht.net>
The SPARC T5 is surprisingly good here, with a very short path to cache
and a moderate number of threads with hot cache lines. Cache
performance was one of the surprises when the slowish early T-machines
came out: it caught a smarter colleague and me off guard when our apps
turned out to be bottlenecking on cold cache lines on what were
nominally much faster processors.
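(Purely to illustrate the effect, not the original apps: a minimal C
sketch that walks a working set small enough to stay hot in L1 and one
that spills far past it, touching one byte per cache line in each. The
64-byte line and 32 KB L1 figures are assumptions; adjust for your part.)

/* cold_vs_hot.c -- illustrative only; the line and cache sizes below
 * are assumptions, not measurements of any particular machine. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define LINE  64                      /* assumed cache line size          */
#define HOT   (16 * 1024)             /* comfortably inside a 32 KB L1    */
#define COLD  (16 * 1024 * 1024)      /* well past L2 on most parts       */

static double walk(volatile char *buf, size_t len, long passes)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long p = 0; p < passes; p++)
        for (size_t off = 0; off < len; off += LINE)
            buf[off]++;               /* one touch per cache line         */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    char *hot  = calloc(HOT, 1);
    char *cold = calloc(COLD, 1);
    if (!hot || !cold) return 1;
    long touches = 64L * 1024 * 1024; /* equal line touches for both runs */
    double th = walk(hot,  HOT,  touches / (HOT  / LINE));
    double tc = walk(cold, COLD, touches / (COLD / LINE));
    printf("hot: %.2f s   cold: %.2f s   ratio: %.1fx\n", th, tc, tc / th);
    free(hot);
    free(cold);
    return 0;
}

Build with plain cc -O2; the volatile keeps the compiler from eliding
the touches, so the hot/cold ratio is the whole story it tells.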
I'd love to have a T5-1 on an experimenter board, or perhaps even in my
laptop (I used to own a SPARC laptop), but that's not where Snoracle is
going.
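On the context-switch cost question in the quoted thread below, the
quickest way I know to put a rough number on it is the old
lmbench-style pipe ping-pong: two processes bounce a byte back and
forth, so each round trip buys you at least two switches plus whatever
cache refilling the switch drags in, which is the cost Steinar is
pointing at. A rough sketch, assuming Linux; the figure it prints is an
upper bound, since pipe overhead is folded in.

/* ctx_pingpong.c -- rough sketch of an lmbench-style context-switch
 * measurement (names and iteration counts are mine, not anything from
 * the thread): parent and child bounce one byte over a pair of pipes. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int p2c[2], c2p[2];          /* parent->child and child->parent pipes */
    const long iters = 100000;
    char b = 0;

    if (pipe(p2c) || pipe(c2p)) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {              /* child: echo every byte straight back */
        for (long i = 0; i < iters; i++) {
            if (read(p2c[0], &b, 1) != 1) break;
            if (write(c2p[1], &b, 1) != 1) break;
        }
        _exit(0);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++) {   /* parent: ping, wait for pong */
        if (write(p2c[1], &b, 1) != 1 || read(c2p[0], &b, 1) != 1) break;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    wait(NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* Pipe overhead is included, so this is an upper bound on the
     * per-switch cost, not a pure context-switch time. */
    printf("%.0f ns per round trip (~%.0f ns per switch, pipe cost included)\n",
           ns / iters, ns / iters / 2);
    return 0;
}

If the per-switch figure comes out in microseconds rather than a
handful of cycles, that is rather the point being made below.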
--dave
On 05/01/16 02:37 PM, Dave Täht wrote:
>
> On 1/5/16 11:29 AM, Steinar H. Gunderson wrote:
>> On Tue, Jan 05, 2016 at 10:57:13AM -0800, Dave Täht wrote:
>>> Context switch time is probably one of the biggest hidden nightmares in
>>> modern OoO CPU architectures - they only go fast in a straight line. I'd
>>> love to see a 1 GHz processor that could context switch in 5 cycles.
>> It's called hyperthreading? ;-)
>>
>> Anyway, the biggest cost of a context switch isn't necessarily the time used
>> to set up registers and such. It's increased L1 pressure; your CPU is now
>> running different code and looking at (largely) different data.
> +10.
>
> An L1/L2 Icache dedicated to interrupt processing code could make a great
> deal of difference, if only CPU makers and benchmarkers made
> context-switch time something we valued.
>
> Dcache, not so much, except for the Intel architectures which are now
> doing DMA direct to cache. (Any ARMs doing that?)
>
>> /* Steinar */
>>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
David Collier-Brown, | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
davecb@spamcop.net | -- Mark Twain