Re: [linux-lvm] removing a failed disk from VG

Kai Iskratsch kai at stella.at
Tue Jan 28 05:08:02 UTC 2003


Would the first method also work for me? I should be getting an identical
hard disk from the vendor, since it is still under warranty.

I have already tried the second one, and I was lucky enough to successfully get
the data out of 2 of my 3 LVs, since they didn't use the broken disk.
The last LV, which did use the broken hard disk, will not mount without a
filesystem check, and since that is not an option on a read-only FS, I still
couldn't even try to get any data from it.

regards

Kai


> -----Original Message-----
> From: linux-lvm-admin at sistina.com [mailto:linux-lvm-admin at sistina.com]
> On Behalf Of Heinz J. Mauelshagen
> Sent: Tuesday, 28 January 2003 11:47
> To: linux-lvm at sistina.com
> Subject: Re: [linux-lvm] removing a failed disk from VG
>
>
> On Mon, Jan 27, 2003 at 09:28:08PM +0100, Alois Schneider wrote:
> > I had a VG scsi_vg consisting of 4 PVs (sda1, sdb1, hdb1, hdc1).
> > Now /dev/sdb1 failed and was no longer recognized by the SCSI
> > controller. I had to remove it.
> >
>
> Alois,
>
> then you probably lost the data that was on the failed disk. Backup time :-(
> Please make sure that you've got backups of /etc/lvmconf/* and /etc/lvmtab*
> before you proceed.
>
> There are two options:
>
> a. you get a replacement drive (which can be a loop device)
>    of the same size (4 GB), run pvcreate on it, restore the metadata onto
>    it (vgcfgrestore), and run "vgscan ; vgchange -ay"
>
> b. you implement LVM2/device-mapper and run "vgscan ; vgchange -P -ay",
>    which will activate your VG without the missing drive. This solution
>    enables you to retrieve the data that is still accessible _but_ won't
>    let you change your volume group configuration. We are working on an
>    enhancement supporting such changes.
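>
> Spelled out as commands, that is roughly the following (a sketch only; it
> assumes the replacement drive shows up as /dev/sdb1 again and uses the
> LVM1-style vgcfgrestore syntax, so check the man pages of your tool
> version before running anything):
>
>    # option a: same-sized replacement drive, then restore the old metadata
>    pvcreate /dev/sdb1
>    vgcfgrestore -n scsi_vg /dev/sdb1
>    vgscan
>    vgchange -ay scsi_vg
>
>    # option b: LVM2/device-mapper, partial activation without the lost PV
>    # (data on the surviving PVs stays readable, but no VG config changes)
>    vgscan
>    vgchange -P -ay scsi_vg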
>
> Regards,
> Heinz    -- The LVM Guy --
>
>
> > vgscan now fails with the following error:
> >
> > vgscan -- ERROR "vg_read_with_pv_and_lv(): current PV" can't get data
> > of volume group "scsi_vg" from physical volume(s)
> > vgscan -- reading all physical volumes (this may take a while...)
> > vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
> > vgscan -- WARNING: This program does not do a VGDA backup of your
> > volume group
> >
> > vgchange -ay gives
> > -- no volume group found
> >
> > The remaining PVs seem to be OK, and the failed disk was only 4 GB
> > in size.
> >
> > Here is the output of pvdisplay:
> >
> > --- Physical volume ---
> > PV Name               /dev/sda1
> > VG Name               scsi_vg
> > PV Size               4.04 GB [8466192 secs] / NOT usable 1.88 MB
> > [LVM: 125 KB]
> > PV#                   1
> > PV Status             available
> > Allocatable           yes (but full)
> > Cur LV                1
> > PE Size (KByte)       4096
> > Total PE              1033
> > Free PE               0
> > Allocated PE          1033
> > PV UUID               pJsMFv-ci9Q-orNK-3cRr-2aTl-w1N9-TEHDI1
> >
> >
> > --- Physical volume ---
> > PV Name               /dev/hdb1
> > VG Name               scsi_vg
> > PV Size               76.69 GB [160826652 secs] / NOT usable 4.25 MB
> > [LVM: 200 KB]
> > PV#                   3
> > PV Status             available
> > Allocatable           yes (but full)
> > Cur LV                2
> > PE Size (KByte)       4096
> > Total PE              19631
> > Free PE               0
> > Allocated PE          19631
> > PV UUID               u5I40y-Op34-nnVV-aInj-Jekc-VJT7-s2q8OK
> >
> >
> > --- Physical volume ---
> > PV Name               /dev/hdc1
> > VG Name               scsi_vg
> > PV Size               19.01 GB [39873267 secs] / NOT usable 4.18 MB
> > [LVM: 143 KB]
> > PV#                   5
> > PV Status             available
> > Allocatable           yes
> > Cur LV                1
> > PE Size (KByte)       4096
> > Total PE              4866
> > Free PE               2040
> > Allocated PE          2826
> > PV UUID               3PCGsY-hUr6-QEUt-MkoJ-fKlX-24vG-pS0HJY
> >
> > What can I do? How can I remove the now-missing PV sdb1 from
> > scsi_vg?
> >
> > Any help will be appreciated.
> > Regards
> > Alois
> >
> >
> > _______________________________________________
> > linux-lvm mailing list
> > linux-lvm at sistina.com
> > http://lists.sistina.com/mailman/listinfo/linux-lvm
> > read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
> *** Software bugs are stupid.
>     Nevertheless it needs not so stupid people to solve them ***
>
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>
> Heinz Mauelshagen                                 Sistina Software Inc.
> Senior Consultant/Developer                       Am Sonnenhang 11
>                                                   56242 Marienrachdorf
>                                                   Germany
> Mauelshagen at Sistina.com                           +49 2626 141200
>                                                        FAX 924446
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm at sistina.com
> http://lists.sistina.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
-------------- next part --------------
An embedded message was scrubbed...
From: "Heinz J . Mauelshagen" <mauelshagen at sistina.com>
Subject: Re: [linux-lvm] Broken Harddisk in LVM
Date: Wed, 22 Jan 2003 12:14:46 +0100
Size: 3016
URL: <http://listman.redhat.com/archives/linux-lvm/attachments/20030128/55a2207d/attachment.eml>

