[Linux-cluster] Directories with >100K files

nick at javacat.f2s.com
Mon Jan 26 09:24:20 UTC 2009


Hi Jeff

Quoting Jeff Sturm <jeff.sturm at eprize.com>:

> > -----Original Message-----
> > From: linux-cluster-bounces at redhat.com
> > [mailto:linux-cluster-bounces at redhat.com] On Behalf Of
> > nick at javacat.f2s.com
> > Sent: Wednesday, January 21, 2009 8:29 AM
> > To: linux clustering
> > Subject: RE: [Linux-cluster] Directories with >100K files
> >
> > What is the way forward now? I've got users complaining left,
> > right and centre. Should I ditch GFS and use NFS?
>
> You've hit an area where GFS doesn't work so well.  I don't know if NFS
> will be much better--others with more experience may know.  (For our
> application we chose GFS over other shared filesystem technologies
> solely because we require strict POSIX locking.)
>
> Your options seem to be:
>
> A) Limit FS activity to as few nodes as possible.  (Does it perform
> suitably when mounted on only a single node?)
>
> B) Crank up demote_secs to an hour or more, until it either relieves your
> problem or cripples the system because too many locks are held too
> long.  (I have a filesystem here with demote_secs=86400 so we get
> generally good rsync performance with over 50,000 file/directory
> entries.)
>
> C) Use some alternative to GFS.
>
> Sorry if there's not a better answer.
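
Regarding option B, and for the benefit of the archives: on a GFS1 mount I
believe the tunable can be checked and raised with gfs_tool, roughly as
below (the mount point is just a placeholder, and as far as I know the
value does not persist across a remount, so it would need to go into an
init script):

  # show the current demote_secs value for the mount
  gfs_tool gettune /mnt/gfs | grep demote_secs

  # raise it to 24 hours, as suggested above
  gfs_tool settune /mnt/gfs demote_secs 86400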

I'm just going to have to keep working at this and see what we can do.
If we find a fix I'll post back.

Thanks for your help.

Nick.
