
Re: [Linux-cluster] Ondisk and fsck bitmaps differ at block XXXXXX



|-----Original Message-----
|From: linux-cluster-bounces redhat com [mailto:linux-cluster-
|bounces redhat com] On Behalf Of Bob Peterson
|Sent: Monday, April 19, 2010 2:53 PM
|To: linux clustering
|Subject: Re: [Linux-cluster] Ondisk and fsck bitmaps differ at block
|XXXXXX
|
|----- "de Jong, Mark-Jan" <deJongm teoco com> wrote:
|| Hello,
||
|| After running our two node cluster concurrently for a number of days,
|| I wanted to take a look at the health of the underlying GFS2 file
|| system.
||
||
||
|| And although we didn’t run into any problems during our testing, I
|| was surprised to still see hundreds of thousands, if not millions,
|| of the following messages when running gfs2_fsck on the file system:
||
||
||
|| Ondisk and fsck bitmaps differ at block 86432030 (0x526d91e)
||
|| Ondisk status is 1 (Data) but FSCK thinks it should be 0 (Free)
||
|| Metadata type is 0 (free)
||
|| Succeeded.
||
||
||
|| I assume this is not good, although processes reading and writing
|| to/from the filesystem seem to be running smoothly. I ran the fsck
|| days after the last write operation on the cluster.
||
||
||
|| I’m currently running the following on Centos 5.4:
||
||
||
|| kernel-2.6.18-164.15.1.el5
||
|| cman-2.0.115-1.el5_4.9
||
|| lvm2-cluster-2.02.46-8.el5_4.1
||
||
||
|| The GFS2 file system is running on top of a clustered LVM partition.
||
||
||
|| Any input would be greatly appreciated.
||
||
||
|| Thanks,
||
|| Mark de Jong
|
|Hi Mark,
|
|These messages might be due to (1) bugs in fsck.gfs2, (2) bugs in the
|gfs2 kernel module, or (3) corruption left over from older versions
|of gfs2.  I have a few recommendations:

As for point 2, are you saying it may be a bug in the latest gfs2 kernel module shipped with 5.4? And as for point 3, this GFS2 file system was created with, and has only ever been used by, the latest gfs2 utils/kernel module.
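For context (my own illustration, not part of the thread): the "Ondisk status is 1 (Data) but FSCK thinks it should be 0 (Free)" wording comes from GFS2's resource-group bitmaps, which record each block's allocation state in 2 bits, four blocks per byte. The state values and the low-bits-first ordering below are taken from my reading of the kernel's gfs2_ondisk.h and gfs2_testbit(); treat them as an assumption:

```python
# Sketch of how GFS2 packs block allocation states into resource-group
# bitmaps: 2 bits per block, four blocks per byte, lowest-order bit
# pair first. State values (assumed, per gfs2_ondisk.h):
# 0 = free, 1 = data (used), 2 = unlinked, 3 = dinode.

STATES = {0: "Free", 1: "Data", 2: "Unlinked", 3: "Dinode"}

def decode_bitmap_byte(b):
    """Return the four 2-bit block states packed in one bitmap byte,
    lowest-order pair first."""
    return [(b >> shift) & 0x3 for shift in (0, 2, 4, 6)]

# Example byte holding the states Data, Free, Dinode, Free:
byte = (1 << 0) | (0 << 2) | (3 << 4) | (0 << 6)   # == 0x31
print([STATES[s] for s in decode_bitmap_byte(byte)])
# -> ['Data', 'Free', 'Dinode', 'Free']
```

A bitmap mismatch like the one quoted above means the on-disk 2-bit state for a block disagrees with the state fsck recomputed by walking the metadata.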

|First, try running my latest and greatest "experimental" fsck.gfs2.
|It can be found on my people page at this location:

I just tried this, and although I had let the previous fsck.gfs2 from gfs2-utils-0.1.62 run to completion, I got the following output from your latest version:

./fsck.gfs2 -y /dev/store01/data01_shared 
Initializing fsck
Validating Resource Group index.
Level 1 RG check.
(level 1 passed)
RGs: Consistent: 9444   Inconsistent: 239   Fixed: 239   Total: 9683
Starting pass1
Pass1 complete      
Starting pass1b
Pass1b complete
Starting pass1c
Pass1c complete
Starting pass2
Pass2 complete      
Starting pass3
Pass3 complete      
Starting pass4
Pass4 complete      
Starting pass5
Pass5 complete      
The statfs file is wrong:

Current statfs values:
blocks:  3172545020 (0xbd1931fc)
free:    3146073083 (0xbb8543fb)
dinodes: 27575 (0x6bb7)

Calculated statfs values:
blocks:  3172545020 (0xbd1931fc)
free:    3163737148 (0xbc92cc3c)
dinodes: 9985 (0x2701)
The statfs file was fixed.
Writing changes to disk
gfs2_fsck complete

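As a side note (my own quick check, not something from the thread): the decimal and hex statfs values in the output above are internally consistent, and the drift that fsck corrected can be computed directly from them:

```python
# Sanity check of the statfs values reported by fsck.gfs2 above:
# confirm each decimal figure matches its hex form, then compute how
# far the stored counters had drifted from the recomputed ones.

current    = {"blocks": 0xbd1931fc, "free": 0xbb8543fb, "dinodes": 0x6bb7}
calculated = {"blocks": 0xbd1931fc, "free": 0xbc92cc3c, "dinodes": 0x2701}

# Hex forms match the decimal values printed by fsck:
assert current["blocks"] == 3172545020
assert current["free"] == 3146073083 and current["dinodes"] == 27575
assert calculated["free"] == 3163737148 and calculated["dinodes"] == 9985

# Drift between stored and recomputed counters:
print("free blocks off by:", calculated["free"] - current["free"])        # 17664065
print("dinodes off by:    ", current["dinodes"] - calculated["dinodes"])  # 17590
```

So the statfs file was under-reporting free space by about 17.6 million blocks and over-counting dinodes by 17590, which matches the scale of the bitmap inconsistencies reported earlier.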

|http://people.redhat.com/rpeterso/Experimental/RHEL5.x/gfs2/fsck.gfs2
|
|If you want to wait a day or two, I'll be posting another version
|there, because I've got an even better version I'm testing now.
|
|As for gfs2, we've fixed several bugs in 5.5 so you might want to
|look into moving up to 5.5 as well.
|
|Regards,
|
|Bob Peterson
|Red Hat File Systems
|
Thanks,
Mark

