Disk problem with LVM

Nick Geovanis <n-geovanis@northwestern.edu> nickgeo at merle.it.northwestern.edu
Wed Oct 5 21:42:27 UTC 2005


On Wed, 5 Oct 2005, Luc MAIGNAN wrote:

>  --- Volume group ---
>   VG Name               vg_home
>   System ID            
>   Format                lvm2

I note first that you are running lvm2; so far I have only used lvm1 on Linux,
and the original versions of LVM on AIX (where it was quite reliable).
Just in case, here is a quote from the RedHat LVM2 page: "To use LVM2 you
need 3 things: device-mapper in your kernel, the userspace device-mapper
support library (libdevmapper) and the userspace LVM2 tools." So I would
check that the device-mapper kernel module is loaded, though I don't know
its exact name. lvm1 used a kernel module with a different name and had no
"device mapper" per se.
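Something along these lines should show whether device-mapper is present
(a rough sketch; dm_mod is only my guess at the module name for a 2.6
kernel, and it may differ on your system):

  # is a device-mapper module loaded? (exact name may vary)
  lsmod | grep -i dm
  # does the userspace tool see a device-mapper driver in the kernel?
  dmsetup version
  # activated LVs normally appear here
  ls -l /dev/mapper/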

>   Metadata Areas        1
>   Metadata Sequence No  2
>   VG Access             read/write
>   VG Status             resizable

So the VG is not "available"; we need to find out why. You could simply
attempt to mark it "available"; probably the worst that could happen is
that it refuses, which it has already done. You do this by issuing
"vgchange --available y vg_home" or "vgchange -a y vg_home". Any
additional messages issued when you do this may provide more information.
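For example (assuming the VG really is named vg_home, as your output shows):

  # try to activate the volume group, then re-check its status
  vgchange -a y vg_home
  vgdisplay vg_home | grep -i status

If vgchange prints an error instead of activating the VG, that error
message is the next clue.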

>   MAX LV                0

This worries me. Current LVs is 1 and open LVs is 1, but maximum LVs is 0,
which makes no sense. The same applies to Max PV below. If I remember
correctly, this once happened to me on AIX some years ago, and the "fix"
was simply to increase the maximum values (with "vgchange", I think; see
the sketch after the example below). Compare with this lvm1 VG on RHEL3
with kernel 2.4.21-20.ELsmp:

>>>[root@ansel root]# vgdisplay -v content
>>>--- Volume group ---
>>>VG Name               content
>>>VG Access             read/write
>>>VG Status             available/resizable
>>>VG #                  1
>>>MAX LV                256
>>>Cur LV                1
>>>Open LV               1


>   Cur LV                1
>   Open LV               1
>   Max PV                0
>   Cur PV                1
>   Act PV                1
>   VG Size               9,54 GB
<.....cut.....>   
>   --- Physical volumes ---
>   PV Name               /dev/hdd1    
>   PV UUID               j94Puu-ZLCj-gIv4-8Hs9-W6v2-9Hea-f6S484
>   PV Status             allocatable
>   Total PE / Free PE    2441 / 1191

So the single PV is also not "available", and in the same sense we need to
make it "available". It may be that the physical volume is truly damaged.
Are there other, non-LVM slices on it? If yes, are they mountable? Is
there any evidence in the logs that hardware damage has occurred? Maybe a
power outage to your machine? It may be time to get familiar with the LVM
doc and email lists. See the LVM Howto at tldp:
http://www.tldp.org/HOWTO/LVM-HOWTO/index.html
Is there anything suspicious in the boot log/dmesg from the last boot?
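A few quick checks for that (a sketch; the device names come straight from
your vgdisplay output):

  # any IDE errors reported for the disk holding the PV?
  dmesg | grep -i hdd
  grep -i hdd /var/log/messages | tail -20
  # what does LVM itself think of the physical volume?
  pvdisplay /dev/hdd1
  pvscan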

* Nick Geovanis
| IT Computing Svcs
| Northwestern Univ
| n-geovanis@
|   northwestern.edu
+------------------->




