[Linux-cluster] managing GFS corruption on large FS

Robert Peterson rpeterso at redhat.com
Wed Nov 29 20:35:17 UTC 2006


Patton, Matthew F, CTR, OSD-PA&E wrote:
>> 3. gfs_fsck takes a lot of memory to run, and when it runs out of
>>    memory, it will start swapping to disk, and that will slow it down
>>    considerably.
>>    So be sure to run it on a system with lots of memory.
>>     
> define "lots" please.
>   
If I did the math correctly (and that's a leap), gfs_fsck needs approximately
1GB of memory (plus swap) for every 4TB of file system.
So a 40TB fs requires approximately 10GB of memory plus swap, etc.
I'm looking into ways I can reduce this requirement.
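
For anyone who wants to plug in their own numbers, here's a trivial
back-of-the-envelope calculation.  This is just an illustration of the rule
of thumb above (1GB per 4TB), not anything taken from the gfs_fsck source:

#include <stdio.h>

/* Rule of thumb from above: roughly 1GB of memory (plus swap)
 * for every 4TB of file system.  Rough estimate only. */
static double fsck_mem_estimate_gb(double fs_size_tb)
{
        return fs_size_tb / 4.0;
}

int main(void)
{
        double sizes[] = { 4.0, 40.0, 100.0 };  /* fs sizes in TB */

        for (int i = 0; i < 3; i++)
                printf("%6.1f TB fs -> ~%5.1f GB of memory\n",
                       sizes[i], fsck_mem_estimate_gb(sizes[i]));
        return 0;
}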
> RG structures of 4GB or 8GB seem reasonable to me. Granted, I don't know
> what the RGs do and what all is involved in the housekeeping. 256MB
> structures probably make sense up to, say, 1/4TB volumes. <1/2TB would
> take 512MB structs and <1TB would be 1GB structures. Some quick math and
> I think you'll see where I'm going with this.
>   
Unfortunately, there's a 2GB size limit for each RG in a GFS fs.  I need to
investigate whether that 2GB restriction is artificial or whether we can go
bigger if we need to.  Right now, my new RHEL5 gfs_mkfs just tries to keep
the number of RGs under 10000 and adjusts the RG size accordingly, up to the
2GB maximum.  I just committed this to the HEAD branch of CVS today.
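
To give a feel for the heuristic, here's a simplified sketch of the sizing
logic described above.  It is not the actual gfs_mkfs code; the function
name, the minimum RG size, and the rounding are all made up for illustration:

#include <stdio.h>
#include <stdint.h>

#define MAX_RGS        10000                 /* try to stay under this count */
#define MAX_RG_BYTES   (2048ULL << 20)       /* 2GB per-RG limit */
#define MIN_RG_BYTES   (32ULL << 20)         /* assumed sane minimum */

/* Pick an RG size that keeps the RG count under MAX_RGS,
 * but never exceeds the 2GB per-RG limit. */
static uint64_t pick_rg_size(uint64_t fs_bytes)
{
        uint64_t rg_size = fs_bytes / MAX_RGS;

        if (rg_size < MIN_RG_BYTES)
                rg_size = MIN_RG_BYTES;
        if (rg_size > MAX_RG_BYTES)          /* capped at 2GB, so a very   */
                rg_size = MAX_RG_BYTES;      /* large fs ends up with more */
        return rg_size;                      /* than 10000 RGs             */
}

int main(void)
{
        uint64_t fs_bytes = 40ULL << 40;     /* 40TB example */
        uint64_t rg = pick_rg_size(fs_bytes);

        printf("RG size: %llu MB, RG count: ~%llu\n",
               (unsigned long long)(rg >> 20),
               (unsigned long long)(fs_bytes / rg));
        return 0;
}

Note that with the 2GB cap, a 40TB fs still ends up with roughly 20000 RGs;
the "under 10000" goal only holds until the cap kicks in.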

Regards,

Bob Peterson
Red Hat Cluster Suite



