[Linux-cluster] slow GFS2 stat() performance

Sven Karlsson karlesven at gmail.com
Sun Mar 14 23:54:21 UTC 2010


Hello,

We have a newly set up three-node GFS2 cluster on shared FC storage.
After setting up the backup software we noticed that it ran very
slowly, and investigated. It turns out that we have about a million
files of varying sizes, and that stat()'ing them takes a long time.

For example, a "find" command on the GFS share, limited to 4000 files,
takes 0.021s. A "find -ls" command in the same circumstances takes 17
seconds...!

And that is only to fetch permissions, ownership and so on; no file data is read at all.
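
In case it is useful, below is a rough sketch of the kind of loop that
should isolate the raw stat() cost from find itself (the paths and the
name statbench.c are just examples, and older glibc needs -lrt for
clock_gettime). Running it twice back to back also shows cold-cache
versus warm-cache numbers:

/* statbench.c -- rough probe of per-call stat() latency.
 * Reads one path per line on stdin, stat()s each path, and prints
 * the total and average time.
 */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <time.h>

int main(void)
{
    char path[4096];
    struct stat st;
    struct timespec t0, t1;
    long n = 0, failed = 0;
    double secs;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (fgets(path, sizeof(path), stdin) != NULL) {
        path[strcspn(path, "\n")] = '\0';   /* strip trailing newline */
        if (stat(path, &st) != 0)
            failed++;
        n++;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%ld stat() calls (%ld failed) in %.3f s, %.3f ms per call\n",
           n, failed, secs, n ? secs * 1000.0 / n : 0.0);
    return 0;
}

Compile and feed it the same 4000 files, e.g.:

gcc -o statbench statbench.c -lrt
find /mnt/gfs2 -type f | head -n 4000 | ./statbench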

We've looked for GFS2 tuning tips, and the filesystem is mounted with
noatime, nodiratime, statfs_slow=0, etc. ping_pong gives about 3200
locks/s for both read and write. We also tried mounting from only one
node with the local lock manager, which performed somewhat better but
still not satisfactorily. A non-cluster filesystem on the same SAN
handles "find -ls" in a fraction of a second, which suggests that it
is GFS2 itself that is the bottleneck.
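
For what it's worth, here is the sort of loop one could use to
sanity-check the ping_pong figure from a single node (the lock file
path is only an example, and this is no substitute for running
ping_pong from several nodes at once):

/* lockbench.c -- rough fcntl() byte-range lock/unlock rate probe,
 * as a single-node sanity check of the ping_pong locks/s figure.
 */
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "/mnt/gfs2/lockfile";
    int iterations = 10000;
    int fd, i;
    struct flock fl;
    struct timespec t0, t1;
    double secs;

    fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < iterations; i++) {
        fl.l_type = F_WRLCK;                 /* take a write lock on byte 0 */
        if (fcntl(fd, F_SETLKW, &fl) != 0) {
            perror("lock");
            return 1;
        }
        fl.l_type = F_UNLCK;                 /* and release it again */
        if (fcntl(fd, F_SETLK, &fl) != 0) {
            perror("unlock");
            return 1;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d lock/unlock cycles in %.3f s (about %.0f locks/s)\n",
           iterations, secs, iterations / secs);
    close(fd);
    return 0;
}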

Should it be this slow? Any hints for improving performance or for debugging this further?

Regards
 SK



