
Re: file locking...



On Sat, 28 Feb 2009 21:47:39 -0800
bruce wrote:

> However, the issue with the approach is that it's somewhat synchronous. I'm
> looking for something that might be more asynchronous/parallel, in that I'd
> like to have multiple processes each access a unique group of files from the
> given dir as fast as possible.

Then just do that.  You don't need to do anything special to have process A
access files B and C, and process D access files E and F.   Coordination is
irrelevant for this task.  Why would you think otherwise?
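As a sketch of that idea (Python's standard multiprocessing pool; the byte-counting task is just a stand-in for whatever the real job is):

```python
import os
from multiprocessing import Pool

def process_file(path):
    # Placeholder task: report the size of one file.
    with open(path, "rb") as f:
        return path, len(f.read())

def process_directory(dirpath, workers=4):
    # The pool hands each worker a distinct path, so no two
    # processes ever touch the same file and no locking is needed.
    paths = [os.path.join(dirpath, name)
             for name in sorted(os.listdir(dirpath))
             if os.path.isfile(os.path.join(dirpath, name))]
    with Pool(workers) as pool:
        return dict(pool.map(process_file, paths))
```

Call process_directory("/some/dir") and each worker gets its own unique files from the directory, as fast as the pool can hand them out.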

If two or more long-running processes need the same file, and copying the file
takes less time than the task you intend to run on it, then give each process
its own copy.  That way every process still works on a separate file.
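A minimal sketch of that copy-then-work pattern (shutil and tempfile are standard library; the task argument is whatever long-running job you have in mind):

```python
import os
import shutil
import tempfile

def run_on_private_copy(path, task):
    # Give this process its own throwaway copy of the shared file,
    # so the long-running task never contends with other processes.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp_path = tmp.name
    shutil.copyfile(path, tmp_path)
    try:
        return task(tmp_path)
    finally:
        os.remove(tmp_path)
```

Each process calls run_on_private_copy() on the shared file and works on its own temporary copy; the original is left alone.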

Depending on file sizes and the nature of the job, it may be useful to create a
ramdisk to hold the temporary datafile(s), if any.  Or just write the task to
load the data into memory on startup and work with it from there.
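On Linux, a tmpfs mount is one way to get such a ramdisk (the size and mount point here are arbitrary examples; this needs root):

```shell
# Create a 512 MB RAM-backed filesystem for the temporary datafiles.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
```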

-- 
MELVILLE THEATRE ~ Melville Sask ~ http://www.melvilletheatre.com

