
RE: [Linux-cluster] Directories with >100K files



> -----Original Message-----
> From: linux-cluster-bounces redhat com 
> [mailto:linux-cluster-bounces redhat com] On Behalf Of 
> nick javacat f2s com
> Sent: Wednesday, January 21, 2009 8:29 AM
> To: linux clustering
> Subject: RE: [Linux-cluster] Directories with >100K files
> 
> What is the way forward now? I've got users complaining left,
> right and centre. Should I ditch GFS and use NFS?

You've hit an area where GFS doesn't work so well.  I don't know if NFS
will be much better--others with more experience may know.  (For our
application we chose GFS over other shared-filesystem technologies
solely because we require strict POSIX locking.)

Your options seem to be:

A) Limit FS activity to as few nodes as possible.  (Does it perform
suitably when mounted on only a single node?  See the mount sketch
after this list.)

B) Crank up demote_secs to an hour or more, until it either relieves
your problem or cripples the system because too many locks are held too
long.  (I have a filesystem here with demote_secs=86400 so we generally
get good rsync performance on a tree of over 50,000 file/directory
entries; see the settune example after this list.)

C) Use some alternative to GFS.
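
If you want to benchmark option A, one way to take cluster locking out
of the picture entirely is to unmount the filesystem on every node and
remount it on a single node with the no-op lock module.  A minimal
sketch, assuming a hypothetical device /dev/vg0/gfslv, mount point
/mnt/gfs, and test directory bigdir:

    # Unmount on ALL nodes first -- mounting with lock_nolock while
    # another node still has the filesystem mounted will corrupt it.
    umount /mnt/gfs

    # Remount on one node only, overriding the lock protocol so no
    # cluster locking is done.
    mount -t gfs /dev/vg0/gfslv /mnt/gfs -o lockproto=lock_nolock

    # Re-run the slow workload and compare against the clustered mount.
    time ls -l /mnt/gfs/bigdir > /dev/null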
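
For option B, demote_secs is a runtime tunable you can read and set per
mount point with gfs_tool; it does not persist across remounts, so
you'd reapply it after each mount (an init script is the usual place).
Again assuming /mnt/gfs as the mount point:

    # Show the current value (the default is 300 seconds).
    gfs_tool gettune /mnt/gfs | grep demote_secs

    # Hold cached glocks for 24 hours before demoting them, per the
    # demote_secs=86400 example above.
    gfs_tool settune /mnt/gfs demote_secs 86400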

Sorry if there's not a better answer.

Jeff


