[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

RE: [linux-lvm] Recovering from a hard crash

Would I then use LVM striping across md[0-9] to get the same effect?
The reason that this box is configured in such a way is because we want
the redundancy of RAID1 and the speed of RAID0 (hence RAID10).  Will
LVM striping give the same performance lift as using Linux RAID0?

Also, will the device mapper and LVM2 patches work against a Red Hat
kernel and are they stable enough to run in a production environment?

Thanks for your help,

-----Original Message-----
From: Christian Limpach [mailto:chris pin lu] 
Sent: Monday, February 24, 2003 12:49 PM
To: linux-lvm sistina com; Rechenberg, Andrew
Subject: Re: [linux-lvm] Recovering from a hard crash

"Rechenberg, Andrew" <ARechenberg shermanfinancialgroup com> wrote:
> Well, unless I'm reading this wrong, it looks as if /dev/md0 and
> /dev/md10 have the same pvdata for some reason.  /dev/md0 is the first
> part of /dev/md10.  Any ideas as to what's going on and how to resolve
> this issue?

md0 is the beginning of md10, and the LVM metadata is located at the
start of the PVs.  This is why vgscan/pvscan sees the same PV on md0
and md10.  I think (untested...) that the ugly quick fix is to
"mv /dev/md0" out of the way; this should work because vgscan/pvscan
then won't see the device node.
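As a concrete illustration of that quick fix (a sketch only; the
destination name /dev/md0.hidden is made up, any path that LVM does not
scan will do):

```
# Hide the duplicate device node so vgscan/pvscan skip it
mv /dev/md0 /dev/md0.hidden
vgscan
# Restore the node afterwards if other tools need it
mv /dev/md0.hidden /dev/md0
```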
A less ugly fix is to use LVM2 with a device-name filter which excludes
/dev/md0.
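With LVM2 that filter lives in lvm.conf; a minimal sketch, assuming the
duplicate PV label is the one seen on /dev/md0 (adjust the pattern to
your setup):

```
# /etc/lvm/lvm.conf -- devices section
devices {
    # Reject the RAID1 member md0 (its start carries the same PV
    # label as md10); accept every other device
    filter = [ "r|^/dev/md0$|", "a|.*|" ]
}
```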
In the long run (and since this is a test system), I'd suggest that you
recreate your VG from PVs on each /dev/md[0-9] instead of creating and
using /dev/md10.  This also gives you better control of where your
snapshot copy-on-write space will be located (best not on the same PV
as the volume being snapshotted).
