[linux-lvm] pvmove killed vg

gunther.kuhlmann at web.de gunther.kuhlmann at web.de
Wed Dec 12 06:26:07 UTC 2001


linux-lvm at sistina.com wrote on 12.12.01:

Heinz,

thank you for your answer! I've run pvcreate -ff on all
PVs. One came up with ...volume group "<somegarbage>" where it
should have said volume group "vg0". So that fixed something,
I presume.

But how do I run vgcfgrestore on _all_ PVs? When I specify
one PV, it complains "can't restore part of active volume
group". When I specify all PVs, it complains "please enter
physical volume name". How do I do this? Does this work with
lvm 1.0.1?

Commands like vgdisplay complain the VG does not exist.

As to your second suggestion: I have vg0.conf and vg0.<digit>.old
in /etc/lvmconf. I assume you mean the first?

Thanks for your input!

Regards,

Gunther

> you need to run vgcfgrestore on *all* PVs which were in your
> vg0 *after* running "pvcreate -ff" on them. You can find out
> which these were by "vgcfgrestore -ll -n vg0 -f
> /etc/lvmconf/WhateverYourRecentBackupFileIs".
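Since vgcfgrestore apparently takes only one PV per invocation, a shell loop may be what "all PVs" amounts to in practice. A sketch only: the PV list below is my assumption for this setup, and the real list must come from the vgcfgrestore -ll output; the loop just prints the commands as a dry run.

```shell
# Sketch -- PV list is an assumption; take the real one from
# "vgcfgrestore -ll -n vg0 -f /etc/lvmconf/vg0.conf".
# LVM1's vgcfgrestore restores one PV at a time, so loop over them.
PVS="/dev/hda6 /dev/hda7 /dev/hda8 /dev/hda10"
for pv in $PVS; do
    # printed as a dry run; remove the echo to actually restore
    echo vgcfgrestore -n vg0 "$pv"
done
```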
> 
> If that doesn't work, a hack to activate it anyway without
> running vgscan is to copy /etc/lvmconf/WhateverYourRecentBackupFileIs
> (I assume /etc/lvmconf/vg0.conf.cd in your case) to /etc/lvmtab.d/vg0,
> "echo -en 'vg0\0vg1\0' > /etc/lvmtab" and "vgchange -ay".
> Create a dummy LV with 1 PE afterwards and your
> metadata on all PVs of the VG should be ok again.
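To make the echo step of the hack above less cryptic: /etc/lvmtab in LVM1 is just the VG names, each terminated by a NUL byte. A safe-to-try sketch that writes the same bytes to a scratch path instead of /etc/lvmtab:

```shell
# Sketch of the lvmtab part of the hack, written to a scratch file;
# the real target would be /etc/lvmtab (and the conf file must also
# be copied to /etc/lvmtab.d/vg0 as described above).
LVMTAB=/tmp/lvmtab.demo
printf 'vg0\0vg1\0' > "$LVMTAB"   # same bytes as echo -en 'vg0\0vg1\0'
wc -c < "$LVMTAB"                 # 8 bytes: v g 0 NUL v g 1 NUL
```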
> 
> Remember to back up /etc/lvmconf/ regularly when you change
> your LVM configuration!
> 
> Regards,
> Heinz    -- The LVM Guy --
> 
> 
> We recommend upgrading to LVM 1.0.1 because a couple of bugs
> (some related to pvmove) have been fixed.
> On Tue, Dec 11, 2001 at 10:34:24PM +0100, gunther.kuhlmann at web.de wrote:
> > Hi,
> > 
> > I've got a broken lvm vg as a result of an unsuccessful pvmove
> > command. My system is a SuSE 7.0 with kernel 2.4.4 and
> > lvm-0.9.1_beta7-10. 
> > 
> > The vg spans several partitions on one disk, one of which I
> > wanted to evacuate. I used the pvmove command with chunks of
> > 32 PE (= 1 GB); 10 GB total. The first 8 chunks worked okay,
> > but the 9th (pvmove -v /dev/hda8:256-287 /dev/hda10) fell over
> > with error code 23 while moving the fifth PE. So I checked using
> > pvdisplay that the four PE had indeed been moved. A further
> > attempt with pvmove -v /dev/hda8:260-287 /dev/hda10 fell over as
> > well, again error code 23. (That's "error moving physical
> > extent(s)".)
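For reference, the chunk arithmetic described above can be sketched as a dry-run generator (device names and PE counts are taken from the message; this only prints the commands, it moves nothing):

```shell
# Sketch: regenerate the chunked pvmove commands described above
# (32 PE = 1 GB per chunk, 320 PE = 10 GB total).
# Remove the echo to execute for real.
SRC=/dev/hda8
DST=/dev/hda10
start=0
while [ "$start" -lt 320 ]; do
    end=$((start + 31))
    echo pvmove -v "$SRC:$start-$end" "$DST"
    start=$((start + 32))
done
```

The failing 9th chunk in the report corresponds to the range 256-287 that this loop produces.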
> > 
> > I then rebooted the machine which killed the complete volume
> > group. Which in turn did not quite impress me. :-((
> > 
> > During my attempts at recovering I think I did a vgscan, which
> > found the vg "vg1" on /dev/hdb, but not the vg "vg0" on /dev/hda.
> > 
> > I tried the following commands unsuccessfully:
> > - vgcfgrestore -v -n vg0: "please enter physical volume name"
> >   (the synopsis of vgcfgrestore did not state it as a mandatory
> >   parameter)
> > - vgcfgrestore -v -n vg0 /dev/hda10: "can't restore part of
> >   active volume group vg0"
> > - vgcfgrestore -v -n vg0 /dev/hda{6,7,8,10}: "please enter
> >   physical volume name"
> > - vgchange -a n vg0: "volume group vg0 does not exist"
> > - vgchange -a y vg0: "volume group vg0 does not exist"
> > 
> > I still have the file /etc/lvmconf/vg0.conf.cd as well as the
> > devices /dev/vg0/lv0{0,1,2,3} and /dev/vg0/group. (And
> > /dev/vg1/..., but that is working.)
> > 
> > Any suggestions on how I can recover the data would be highly
> > appreciated. I found a reference to a program called
> > uuid_fixer/uuid_editor, but usage was discouraged. Do I have
> > to try it, or is there a better way? Does upgrading to a newer
> > version of lvm help? Or do I have to update the kernel as well
> > then? And I definitely do _not_ want to kill my other vg as well.
> > 
> > Regards,
> > 
> > Gunther
> > 
> > 
> > 
> > _______________________________________________
> > linux-lvm mailing list
> > linux-lvm at sistina.com
> > http://lists.sistina.com/mailman/listinfo/linux-lvm
> > read the LVM HOW-TO at http://www.sistina.com/lvm/Pages/howto.html
> 
> *** Software bugs are stupid.
>     Nevertheless it needs not so stupid people to solve them ***
> 
> 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> 
> Heinz Mauelshagen                                 Sistina Software Inc.
> Senior Consultant/Developer                       Am Sonnenhang 11
>                                                   56242 Marienrachdorf
>                                                   Germany
> Mauelshagen at Sistina.com                           +49 2626 141200
>                                                        FAX 924446
> 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

