[linux-lvm] Recovering from a hard crash

Rechenberg, Andrew ARechenberg at shermanfinancialgroup.com
Mon Feb 24 13:16:01 UTC 2003


Would I then use LVM striping across md[0-9] to get the same effect?
This box is configured that way because we want the redundancy of
RAID1 and the speed of RAID0 (hence RAID10).  Will LVM striping give
the same performance lift as using Linux RAID0?
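
Something along these lines is what I have in mind (just a sketch,
assuming four RAID1 mirrors /dev/md0 through /dev/md3; the VG/LV
names and sizes are placeholders):

    # make each RAID1 mirror a physical volume and group them
    pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3
    vgcreate vg_data /dev/md0 /dev/md1 /dev/md2 /dev/md3

    # stripe a logical volume across all four PVs, RAID0-style on
    # top of the mirrors, i.e. roughly RAID10
    lvcreate -n lv_data -L 100G -i 4 -I 64 vg_data

My understanding is that -i 4 / -I 64 stripes the LV's data across
the four PVs in 64K chunks, which is where the RAID0-like parallelism
would come from.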

Also, will the device mapper and LVM2 patches work against a Red Hat
kernel, and are they stable enough to run in a production environment?

Thanks for your help,
Andy.

-----Original Message-----
From: Christian Limpach [mailto:chris at pin.lu] 
Sent: Monday, February 24, 2003 12:49 PM
To: linux-lvm at sistina.com; Rechenberg, Andrew
Subject: Re: [linux-lvm] Recovering from a hard crash


"Rechenberg, Andrew" <ARechenberg at shermanfinancialgroup.com> wrote:
> Well, unless I'm reading this wrong, it looks as if /dev/md0 and
> /dev/md10 have the same pvdata for some reason.  /dev/md0 is the first
> part of /dev/md10.  Any ideas as to what's going on and how to resolve
> this issue?

md0 is the beginning of md10, and the LVM metadata is located at the
start of the PVs.  This is why vgscan/pvscan sees the same PV on md0
and md10.  I think (untested...) that the ugly quick fix is to
"mv /dev/md0 /dev/notmd0"; this should work because vgscan/pvscan
then won't see the device node.  The less ugly fix is to use LVM2
with a device-name filter which excludes md0.
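
Something like this in /etc/lvm/lvm.conf should do it (the exact
regex is only an example; adjust it to your device naming):

    # reject /dev/md0, accept everything else
    devices {
        filter = [ "r|^/dev/md0$|", "a|.*|" ]
    }
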
In the long run (and since this is a test system), I'd suggest that
you recreate your VG from PVs on each /dev/md[0-9] instead of
creating and using /dev/md10.  This also gives you better control of
where your snapshot copy-on-write space will be located (best not on
the same disks...).
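
For the snapshot placement, one way (untested, with made-up names:
a VG vg_data built from the individual mirrors, an origin LV lv_data
on it, and a spare mirror /dev/md4) would be to keep that spare
mirror out of the striped LV and point the snapshot's copy-on-write
space at it:

    # extra RAID1 mirror, added to the VG only to hold snapshot
    # copy-on-write space
    pvcreate /dev/md4
    vgextend vg_data /dev/md4

    # listing the PV at the end restricts allocation to it, so the
    # snapshot space doesn't end up on the same disks as the origin
    lvcreate -s -n lv_data_snap -L 10G /dev/vg_data/lv_data /dev/md4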

    christian




