[linux-lvm] vgdisplay - checksum error - what does it mean?

Tomasz Chmielewski mangoo at wpkg.org
Fri Feb 16 09:44:54 UTC 2007


Luca Berra schrieb:
> On Thu, Feb 15, 2007 at 04:23:33PM +0100, Tomasz Chmielewski wrote:
>> Tomasz Chmielewski schrieb:
>>> Recently, I used "vgdisplay", and noticed that it gives a "checksum 
>>> error":
>>>
>>> # vgdisplay
>>>   /dev/sda2: Checksum error
> ....
>>>
>>> Should I be scared? What does it mean? What should I do about it? I 
>>> wouldn't like to lose the data.
>>>
>>> If it helps, my setup looks like that:
>>>
>>> HDD1-sda2-\
>>> HDD2-sdb2-|__RAID-10--LVM-2
>>> HDD3-sdc2-|
>>> HDD4-sdd2-/
>>>
>>> I'm running 2.6.17.8 kernel.
>>
> ...
>>
>> So this basically means that LVM was set up on /dev/sda2 some time 
>> ago, but it was never removed from there - instead, RAID-10 was set up 
>> on that partition?
> 
> I don't think so. If sda2 is part of a raid10 md array, the beginning
> sector of the md device probably maps to the beginning sector of the
> real device, hence LVM will find an LVM signature on /dev/sda2.

Is there a way to check if it's really the case?

There's something wrong with /dev/sda2 - lvmdiskscan claims it's a 
371.58 GB LVM physical volume, while /dev/md2 is the physical volume I use.

   /dev/sda2                     [      371.58 GB] LVM physical volume
   /dev/md2                      [      743.16 GB] LVM physical volume
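
If Luca's explanation is right and the md superblock sits at the end of 
each member (the 0.90 default, with the usual "near" raid10 layout), then 
the first sectors of /dev/sda2 and /dev/md2 should be identical. Something 
like this might confirm it - just a guess on my part, I haven't tried it yet:

   # dd if=/dev/sda2 bs=512 count=8 2>/dev/null | md5sum
   # dd if=/dev/md2 bs=512 count=8 2>/dev/null | md5sum

Identical checksums would mean the signature seen on sda2 is simply md2 
showing through, and "mdadm --examine /dev/sda2" should additionally 
confirm that the partition really is an md member.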


>> Should I do something to fix the things? What?
> Yes, re-enable md_component_detection in lvm.conf. Why did you disable
> that?

I certainly didn't touch anything in /etc/lvm/*.
If I look into /etc/lvm/lvm.conf, it says:

devices {
(...)
     # By default, LVM2 will ignore devices used as components of
     # software RAID (md) devices by looking for md superblocks.
     # 1 enables; 0 disables.
     md_component_detection = 1
}

It's enabled.

So the problem is somewhere else. Where?
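
In the meantime, would an explicit filter in lvm.conf be a reasonable 
workaround? Something like this - untested on my side, and it assumes 
that every sd* partition here is only ever used as an md member:

devices {
     # scan md devices, ignore the raw sd* partitions underneath them
     filter = [ "a|^/dev/md|", "r|^/dev/sd|" ]
}

That should make LVM look only at the md devices and stop complaining 
about /dev/sda2, although it hides the symptom rather than explaining it.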


BTW, the machine is running Debian etch (ARM port).

"smartctl" says all four disks are fine (they are quite new, too), so 
it's definitely not a hardware problem.


I guess one way to fix it would be to mark all partitions on /dev/sda 
faulty and then recreate the RAIDs.
But I'm curious to know how I could handle such a situation if I didn't 
have RAID.
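
Roughly, if I went that route, I suppose it would look like this for each 
sda partition (an untested sketch that fails and re-adds the member so md 
rebuilds it, rather than recreating the array from scratch):

   # mdadm /dev/md2 --fail /dev/sda2
   # mdadm /dev/md2 --remove /dev/sda2
   # mdadm /dev/md2 --add /dev/sda2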



-- 
Tomasz Chmielewski
http://wpkg.org



