
Re: Current state of multi-core awareness



On Fri, 2008-12-05 at 17:42 +1930, Patrick O'Callaghan wrote:
On Fri, Dec 5, 2008 at 4:41 PM,  <dsavage peaknet net> wrote:

> Of the thousands of 64-bit F10 applications/tools/utilities, I wonder how
> many are aware of and can scale across multiple cores. Has anyone done a
> recent survey to see which packages are [not] multi-core aware?

I may be way off-base here, but I would expect very few apps, if any,
are "multi-core aware". Multiple cores get you better performance when
more than one process needs the CPU, but a single I/O-limited process
isn't going to go any faster. Likewise, single-threaded apps can't do
anything with multiple cores even if they aren't I/O limited.
Specialized parallel-programming apps are a different matter, but how
many of those do we typically see on a desktop?

poc
Depending on the app, you may be able to use extra cores (or async I/O) to overlap CPU-intensive processing with the I/O wait. Even if, say, 80% of the run time is spent waiting on I/O, fully overlapping the 20% compute time with that wait could cut total run time by up to 20%.

And if you issue those I/O requests in parallel, you might get a speedup from the OS parallelising RAID requests. Or maybe the disk driver can order the raw disk requests so they're handled by a single sweep across the disk rather than lots of back-and-forth seeks. All of these could improve the run time.
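To make that concrete, here's a minimal sketch (in Python, with throwaway temp files standing in for real data) of issuing several blocking reads from a thread pool so the kernel can service them concurrently:

```python
import concurrent.futures
import os
import tempfile

# Create a few files to stand in for real I/O-bound work (hypothetical data).
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(4):
    p = os.path.join(tmpdir, "part%d.dat" % i)
    with open(p, "wb") as f:
        f.write(bytes([i]) * 1024)
    paths.append(p)

def read_file(path):
    # Each read blocks on I/O; issuing them from a thread pool lets the
    # kernel (and any RAID layer underneath) service several at once.
    with open(path, "rb") as f:
        return f.read()

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    chunks = list(pool.map(read_file, paths))

total = sum(len(c) for c in chunks)
```

On a single spindle this may win little, but across a RAID set or with a good elevator in the driver, the overlapped requests can come back noticeably faster than four sequential reads.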

You refer to "specialized parallel-programming apps". I think the point here is that we developers should be moving toward the idea that parallel programming is no longer a specialized thing, and we shouldn't think of it as such. It's been over a decade since I've considered multiple threads to be a rare, specialized thing.

A word processor is a desktop app, but spelling and grammar checking could reasonably be done by background threads. So could some sort of global analysis of the document.
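A background checker like that is just a worker thread pulling text off a queue while the editing loop stays responsive. A toy sketch (the four-word dictionary is obviously made up):

```python
import queue
import threading

WORDS = {"the", "quick", "brown", "fox"}  # toy dictionary, purely illustrative

jobs = queue.Queue()
misspelled = []

def checker():
    # Background thread: checks queued text without ever blocking the
    # (imaginary) editing UI. A None job is the shutdown signal.
    while True:
        text = jobs.get()
        if text is None:
            break
        for word in text.split():
            if word not in WORDS:
                misspelled.append(word)

t = threading.Thread(target=checker, daemon=True)
t.start()
jobs.put("the quick brwon fox")  # user typed a typo
jobs.put(None)
t.join()
```

The editor thread only ever does a cheap `put()`; all the dictionary work happens on the other core.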

Your IDE could analyze your code in the background, discover that you've just written a piece of code for the nth time, and suggest refactoring it into a utility function. Or it could be precompiling code as you type, function by function, so you don't have to wait for things to compile when you're ready to actually run the program. Or maybe whenever you change a function, it goes ahead and automatically runs the regression tests for that function and lets you know if you've introduced a bug.

Searching through files for something of interest could reasonably be done by a set of I/O threads that feed a set of CPU bound search threads. Ditto indexing of files.
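That's a classic producer/consumer pipeline: I/O threads push file contents onto a queue, CPU-bound threads pull and scan. A sketch with an in-memory dict standing in for the filesystem:

```python
import queue
import threading

# Toy "files" standing in for real on-disk content (hypothetical).
FILES = {"a.txt": "needle in a haystack",
         "b.txt": "nothing here",
         "c.txt": "another needle"}

raw = queue.Queue()
hits = []
hits_lock = threading.Lock()

def reader(names):
    # I/O thread: fetches content and feeds the searchers.
    for name in names:
        raw.put((name, FILES[name]))  # stands in for a disk read
    raw.put(None)  # end-of-work marker

def searcher():
    # CPU-bound thread: scans content for the search term.
    while True:
        item = raw.get()
        if item is None:
            raw.put(None)  # re-post the marker so sibling searchers exit too
            break
        name, text = item
        if "needle" in text:
            with hits_lock:
                hits.append(name)

threads = [threading.Thread(target=reader, args=(list(FILES),))]
threads += [threading.Thread(target=searcher) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The same shape works for indexing: swap the `"needle" in text` test for tokenising and updating an index.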

When you open a directory in your file browser and 100 thumbnail images need to be generated, you could fetch the images in parallel (potential speed ups described above), and then generate the thumbnails in parallel.
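A sketch of that two-stage fan-out, using lists of numbers as stand-in "images" and every-other-pixel downsampling as the stand-in thumbnail step:

```python
import concurrent.futures

# Toy "images": lists of pixel values (hypothetical stand-ins for files).
images = {"img%d" % i: list(range(i, i + 8)) for i in range(100)}

def fetch(name):
    # I/O stage: a real file browser would read the image off disk here.
    return name, images[name]

def thumbnail(item):
    # CPU stage: "downsample" by keeping every other pixel.
    name, pixels = item
    return name, pixels[::2]

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    fetched = list(pool.map(fetch, images))   # 100 fetches in parallel
    thumbs = dict(pool.map(thumbnail, fetched))  # then scale in parallel
```

With real images you'd likely use processes rather than threads for the scaling stage, but the structure is the same.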

All of these things are appropriate for a desktop. The key is for us developers to think about what our programs could be doing in the background, both the work we're already doing and new work we could be doing. There's a lot of time between keystrokes. Make use of it.

Wayne.

