[Linux-cluster] managing GFS corruption on large FS

Patton, Matthew F, CTR, OSD-PA&E Matthew.Patton.ctr at osd.mil
Wed Nov 29 19:41:33 UTC 2006


Classification: UNCLASSIFIED


> 3. gfs_fsck takes a lot of memory to run, and when it runs out of
>    memory, it will start swapping to disk, and that will slow it down
>    considerably.  So be sure to run it on a system with lots of memory.

define "lots" please.
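
For my own planning, here is the strawman arithmetic I would use, with the
big caveat that the per-block figure is a pure guess on my part and not
anything pulled from the gfs_fsck source -- it's exactly the number I'm
hoping someone can supply:

    # Strawman sketch, not gospel: bytes_per_block below is my guess at how
    # much in-core state gfs_fsck keeps per file system block.
    def fsck_ram_estimate(fs_size_tb, block_size=4096, bytes_per_block=4):
        blocks = fs_size_tb * 2**40 // block_size
        return blocks * bytes_per_block / 2**30   # GB of RAM

    for tb in (1, 10, 38):
        print("%3d TB -> ~%.0f GB RAM before it starts swapping"
              % (tb, fsck_ram_estimate(tb)))

Even at a few bytes of state per block, a multi-terabyte file system lands
in the tens of gigabytes, so a real number from somebody who has read the
code would be welcome.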

 
> 6. I recently discovered an issue that impacts GFS performance for large
>    [...] requires approximately 156438 RGs of 256MB each.  Whenever GFS
>    has to run that linked list, it takes a long time.

>     For RHEL5, I'm changing gfs_mkfs so that it picks a more intelligent
>     RG size based on the file system size,

RG structures of 4GB or 8GB seem reasonable to me. Granted, I don't know what the RGs do or what all is involved in the housekeeping. 256MB structures probably make sense up to, say, 1/4TB volumes; volumes under 1/2TB would take 512MB structures, and volumes under 1TB would take 1GB structures. Do some quick math and I think you'll see where I'm going with this: scale the RG size with the volume and the RG count stays at roughly a thousand instead of climbing into the hundreds of thousands.
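
To spell the quick math out (the ~38TB is just back-figured from the 156438
RGs quoted above, so treat it as approximate):

    # 156438 RGs x 256MB works out to roughly 38TB, so use that as the example.
    # The point: grow the RG size with the volume and the RG count -- i.e. the
    # linked list GFS has to walk -- stays flat instead of exploding.
    FS_TB = 38
    for rg_mb in (256, 1024, 4096, 8192):
        rg_count = FS_TB * 2**40 // (rg_mb * 2**20)
        print("%5d MB RGs on a %d TB volume -> %6d RGs" % (rg_mb, FS_TB, rg_count))

256MB RGs keep the count near 1024 only up to about a 1/4TB volume; at tens
of terabytes you're into six figures, while 4G or 8G structures bring it back
down into the thousands. And if I remember the man page right, gfs_mkfs
already takes an RG size in megabytes via -r, so picking a saner value per
volume could be scripted today -- it just can't help a file system that
already exists.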

>     Unfortunately, there's no way to change the RG size once a file
>     system has been made.  It only happens at gfs_mkfs time.

Much like the physical extent size with vgcreate.



