
Re: [Linux-cluster] GFS RG size (and tuning)

I had the same problems this week on a GFS filesystem with only about 800 GB of mostly small files. My application did not require it to be mounted on the whole cluster concurrently, so I went ahead and switched to ext3, with much better performance.

Support said I could probably expect significant improvement with more RGs, but we went with the other filesystem before we tried that.


On Oct 27, 2007, at 8:57 AM, Jos Vos wrote:

On Fri, Oct 26, 2007 at 07:57:18PM -0400, Wendy Cheng wrote:

1. 3TB is not "average size". A smaller RG can help with the "df" command -
but if your system is congested, it won't help much.

The df also takes ages on an almost idle system. Also, the system often
needs to do rsyncs on large trees, and this takes a very long time too.

In <http://sourceware.org/cluster/faq.html#gfs_tuning> it is suggested
that you should then make the RGs larger (i.e. fewer RGs). As this requires
shuffling around TBs of data before recreating a GFS fs, I want to
have some idea of what my chances are that this is useful.
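For reference, RG size is fixed when the filesystem is created, so changing it means a rebuild. A sketch of what that might look like (all names and paths below are placeholders, and gfs_mkfs destroys everything on the device, so the data must be copied off first):

```shell
# HYPOTHETICAL rebuild with 2048 MB resource groups instead of the
# default 256 MB (-r is the RG size in megabytes). Cluster name,
# fs name, journal count, and device are placeholders.
gfs_mkfs -p lock_dlm -t mycluster:mygfs -j 3 -r 2048 /dev/vg0/gfslv
```

For a 3 TB filesystem, 2048 MB RGs would mean roughly 1500 RGs instead of roughly 12000, which is the kind of reduction the FAQ entry is after.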

2. The gfs_scand issue has more to do with the glock count. One way to tune this is via the purge_glock tunable. There is an old write-up at http://people.redhat.com/wcheng/Patches/GFS/readme.gfs_glock_trimming.R4 . It is for RHEL4 but should work the same way for RHEL5.
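With the trimming patch applied, the tunable can reportedly be set per mount with gfs_tool. A sketch (the mount point is a placeholder, and the exact tunable name and semantics are as described in the write-up above, not verified here):

```shell
# HYPOTHETICAL: ask GFS to trim roughly 50% of unused glocks on each
# scan pass; /mnt/gfs is a placeholder mount point.
gfs_tool settune /mnt/gfs glock_purge 50
# Confirm the value took effect:
gfs_tool gettune /mnt/gfs | grep glock_purge
```

Note the setting is per mount, so it would need to be reapplied after every remount (e.g. from an init script).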

I'll try.  I assume I can do this per system (so that I don't have to
bring the whole cluster down, only stop the cluster services and unmount
the GFS volumes per node)?
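The per-node procedure described above might be sketched like this (an assumption-laden outline: it presumes rgmanager runs the cluster services, the patched gfs.ko comes from kmod-gfs, and /mnt/gfs is a placeholder mount point):

```shell
# HYPOTHETICAL rolling update, one node at a time; the other nodes
# keep the GFS volume mounted throughout.
service rgmanager stop   # stop/relocate cluster services on this node
umount /mnt/gfs          # release the GFS mount on this node only
rmmod gfs                # unload the old gfs.ko
modprobe gfs             # load the patched module from kmod-gfs
mount /mnt/gfs           # remount the volume
service rgmanager start  # rejoin cluster service management
```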

Any chance this patch will make it into the standard RHEL package?
I want to avoid maintaining my own patched packages, although as long
as gfs.ko is in the separate kmod-gfs package that's doable.

--    Jos Vos <jos xos nl>
--    X/OS Experts in Open Systems BV   |   Phone: +31 20 6938364
--    Amsterdam, The Netherlands        |     Fax: +31 20 6948204

Linux-cluster mailing list
Linux-cluster redhat com
