[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

RE: file locking...

hi bruno.

for my situation: i have a bunch of files being created by an upfront
process, and on the backend, i have a number of client/child processes that
get created, which have to process the files. no file is processed by more
than a single client app.

i'm considering a web kind of app, where the webservice returns a group of
files back to the requesting client. the webservice would have to quickly
generate the list of files from the local dir, which is why i'm trying to
figure out a "fast/efficient" approach for this scenario.

using any kind of file locking process requires that i essentially have a
gatekeeper, allowing only a single process at a time to enter and access the
files.

i can easily set up a file read/write lock process where a client app
gets/locks a file, and then copies/moves the required files from the initial
dir to a tmp dir. after the move/copy, the lock is released, and the client
can go ahead and do whatever with the files in the tmp dir.. this process
allows multiple clients to operate in a pseudo parallel manner...
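the lock/move/release pattern above could be sketched roughly like this (a
minimal sketch, assuming linux/python and made-up dir names "incoming" and
"tmp" — the point is that the exclusive lock is held only for the move, not
for the processing):

```python
import fcntl
import os
import shutil

INCOMING = "incoming"            # hypothetical shared dir the upfront process fills
WORKDIR = "tmp"                  # hypothetical per-client working dir
LOCKFILE = os.path.join(INCOMING, ".lock")

def claim_files(batch_size=10):
    """Hold an exclusive flock only while moving a batch of files out of
    the shared dir; release it before doing any real processing, so other
    clients can grab their own batches in the meantime."""
    claimed = []
    with open(LOCKFILE, "w") as lf:
        fcntl.flock(lf, fcntl.LOCK_EX)       # gatekeeper: one client at a time
        try:
            for name in os.listdir(INCOMING):
                if name.startswith("."):     # skip the lock file itself
                    continue
                shutil.move(os.path.join(INCOMING, name),
                            os.path.join(WORKDIR, name))
                claimed.append(name)
                if len(claimed) >= batch_size:
                    break
        finally:
            fcntl.flock(lf, fcntl.LOCK_UN)   # lock held only for the move
    return claimed
```

since the critical section is just a handful of rename/move calls, the time
each client spends blocking the others should stay small even with lots of
files.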

i'm trying to figure out if there's a much better/faster approach that might
be available.. which is where the academic/research issue was raised..

the issue that i'm looking at is analogous to a FIFO, where i have lots of
files being shoved in a dir from different processes.. on the other end, i
want to allow multiple client processes to access unique groups of these
files as fast as possible.. access being fetch/gather/process/delete the
files. each file is only handled by a single client process.
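one lock-free pattern worth mentioning for the "each file handled by exactly
one client" requirement: on a single filesystem, rename() is atomic, so a
client can claim a file by renaming it into its own dir and simply skip it
if the rename fails because someone else got there first. a minimal sketch
(dir names "spool" and "client-N" are made up for illustration):

```python
import os

SPOOL = "spool"  # hypothetical shared dir the producers write into

def try_claim(name, client_id):
    """Atomically claim one file by renaming it into this client's
    private dir. os.rename on one filesystem is atomic, so exactly one
    client's rename succeeds; the losers get FileNotFoundError and
    just move on to the next file."""
    src = os.path.join(SPOOL, name)
    dst = os.path.join("client-%d" % client_id, name)
    try:
        os.rename(src, dst)
        return True
    except FileNotFoundError:
        return False   # another client claimed this file first
```

this avoids the gatekeeper entirely: clients race on each file instead of
queueing behind a single lock, and the kernel guarantees only one winner per
file.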


-----Original Message-----
From: Bruno Wolff III [mailto:bruno wolff to]
Sent: Saturday, February 28, 2009 10:05 PM
To: bruce
Cc: 'Community assistance, encouragement, and advice for using Fedora.'
Subject: Re: file locking...

On Sat, Feb 28, 2009 at 21:47:39 -0800,
  bruce <bedouglas earthlink net> wrote:
> However, the issue with the approach is that it's somewhat synchronous. i'm
> looking for something that might be more asynchronous/parallel, in that i'd
> like to have multiple processes each access a unique group of files from a
> given dir as fast as possible.

If each process is accessing a unique group of files why do you need locking?
Can you explain a bit more about what is going on?
