
[Linux-cluster] slow GFS2 stat() performance


We have a newly set up three-node GFS2 cluster on shared FC storage.
After setting up the backup software we noticed that it ran very
slowly, and investigated. It turns out that we have about a million
files of varying sizes, and that stat()'ing them takes a long time.

For example, a "find" command on the GFS2 share, limited to 4000 files,
takes 0.021s. A "find -ls" command under the same circumstances takes 17 seconds.

And that is only to read the permissions, ownership, etc.; no file data is touched.
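For reference, the comparison can be reproduced with a small script. This is a sketch: on the real cluster you would point DIR at the GFS2 mount point; here a throwaway temporary directory stands in just to show the commands.

```shell
#!/bin/sh
# Sketch of the timing comparison. On the real cluster, set DIR to the
# GFS2 mount point; the temp dir here is only a stand-in.
DIR=$(mktemp -d)
for i in $(seq 1 4000); do : > "$DIR/file$i"; done

# Plain find: mostly just reads directory entries.
time find "$DIR" -type f | head -n 4000 > /dev/null

# find -ls: stat()s every file to print permissions/ownership, which on
# GFS2 means taking a cluster-wide glock per inode.
time find "$DIR" -type f -ls | head -n 4000 > /dev/null

rm -rf "$DIR"
```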

We've looked for GFS2 tuning tips, and the filesystem is mounted with
noatime,nodiratime, statfs_slow=0, etc. ping_pong gives about 3200
locks/s for both read and write. We also tried mounting the filesystem
on a single node with the local lock manager, which gave somewhat better
performance, but still not satisfactory. A non-cluster filesystem on the
same SAN manages the "find -ls" in a fraction of a second, which
suggests that it's GFS2 that is the bottleneck.
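For reference, the mount options mentioned above correspond to an fstab line along these lines (the device path and mount point are placeholders, and availability of statfs_slow/statfs_quantum depends on the GFS2 version in use):

```
# /etc/fstab sketch -- device and mount point are placeholders.
# noatime/nodiratime avoid an inode update (and the lock it needs) on reads;
# statfs_slow=0 keeps statfs() from having to poll every node.
/dev/mapper/clustervg-gfslv  /gfs  gfs2  noatime,nodiratime,statfs_slow=0  0 0
```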

Should it be this slow? Any hints on improving performance, or on how to debug this further?

