
Re: LVM not fit for Fedora Core



On Friday 22 December 2006 09:55am, Daniel Yek wrote:
> At 08:11 AM 12/22/2006, Gilboa Davara wrote:
> >On Fri, 2006-12-22 at 07:50 -0800, John Reiser wrote:
> > > Gilboa Davara wrote:
> > > > Use LVM.
> > > > Trust me.
> > > > You won't be sorry.
> > >
> > > I've been there, done that, and regretted it deeply.
>
> I used LVM for a few years to double up hard drives (in stripes) to
> increase performance. I think it worked well that way, even though I don't
> exactly need that kind of performance most of the time. It was just a
> casual way of trying out new technologies.

Software RAID would have been a little better for that.  Personally, I 
subscribe to the "use the right tool for the job" theory.  RAID for RAID 
things, LVM on top of RAID for manageability.

Still, it does work well for that situation.

> I did stop using it after one of my hard drives started to fail and I plugged
> both hard drives into another Fedora Core machine, also configured with LVM,
> and I couldn't find a way to boot up that machine with both sets of LVM
> volumes. IIRC, it complained about 2 logical partitions with the same names
> (a collision), or something like that. With the extra LVM volume removed, I
> could boot; but with them plugged in, I couldn't boot (even the otherwise
> healthy LVM volumes) at all.

Been there.  The problem is that you had two volume groups (VG) with the same 
name.
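When this happens, a rescue shell makes the duplicate easy to confirm.  Something 
along these lines (the output itself will warn about the duplicate VG name; 
exact wording varies by LVM2 version):

```shell
# From a rescue shell, scan all attached disks for LVM metadata.
# If two VGs share a name, the tools print a duplicate-VG warning.
lvm pvscan
lvm vgscan
lvm vgs
```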

> Is there a solution for such situation? I admit that I never emailed this
> mailing list for help and perhaps didn't read up enough documentation, even
> though I read up a lot years before that.

Yup.  Really simple fix.  Rename one of the VGs with "vgrename".  Some ways of 
doing this:

1.  Give each machine's VG the same name as the box at installation time.  That 
way, when you later want to mount up the disks from one box in another, there 
will not be a "conflict" to begin with.
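A sketch of that convention, building a VG by hand on a box named "herold" 
(device names and sizes here are placeholders):

```shell
# Initialize the disk partition as a physical volume (PV).
pvcreate /dev/sdb1
# Name the volume group after the host, not "vg0" or "VolGroup00".
vgcreate herold /dev/sdb1
# LVs then carry the host's name in their device paths:
lvcreate -L 8G -n root herold    # becomes /dev/herold/root
```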

2.  Rename the VG on the hosting system (i.e., the box you're putting the disks 
into).  This requires a number of simple steps to complete successfully.

To use vgrename, the entire VG must be offline.  So, boot a rescue environment 
(via CD, PXE, however), skip trying to mount up the disks (or use 
the "nomount" option on the "rescue" boot: line), and run (assuming the old 
name is "vg0" and the new name should be "herold"):

# lvm
lvm> vgscan
. . . output omitted . . .
lvm> vgchange -a n vg0
. . . output omitted . . .
lvm> vgrename vg0 herold
. . . output omitted . . .
lvm> vgchange -a y herold
. . . output omitted . . .
lvm> exit

Then, mount up the root LV, the /boot/ partition, and things like /usr/ 
and /dev/:

# mkdir /mnt/sysimage
# mount /dev/herold/root /mnt/sysimage
# mount /dev/sda1 /mnt/sysimage/boot
# mount /dev/herold/usr /mnt/sysimage/usr

Of course, change the names of the devices appropriately.

You can then "chroot" and run "mkinitrd" to fix up the name of the root device 
(because the VG name changed).  Also, don't forget to change the "root=" 
value(s) in grub.conf (menu.lst on other distributions).  I usually just 
snag the mkinitrd command out of the "/sbin/new-kernel-pkg" script 
(use "rpm -q --scripts kernel-`uname -r`" to see that that's what the kernel 
RPMs run).
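A sketch of that last step, using the mounts from above (the kernel version and 
exact mkinitrd invocation are placeholders; pull the real command out 
of /sbin/new-kernel-pkg as described):

```shell
# Work inside the installed system.
chroot /mnt/sysimage
# Rebuild the initrd so it activates the renamed VG at boot
# (-f overwrites the existing image).
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
# Then edit /boot/grub/grub.conf and change, e.g.:
#   root=/dev/vg0/root   ->   root=/dev/herold/root
exit
```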

3.  Rename the VG of the system that you are moving the drive(s) from.  Just 
use a rescue environment, like I just showed.  However, in this case, if you 
are not planning on returning the disks to the other machine (i.e., you're 
replacing them with new ones, or rebuilding the system after getting some data 
off the disks, etc.), then you don't need to run mkinitrd or edit the 
grub.conf (or menu.lst, for those using these instructions on other distros) 
file.

4.  Disconnect the host system's drive(s), boot with a rescue CD (or PXE, 
etc.), and do what you need to do to the disks.

> I gave up using LVM, partly because even after reading up on it so much, I
> was still having trouble rescuing data from my failing hard drives. I think
> with improved tools, it need not be so difficult.
>
> I'm having a lot of problems remembering just which partitions belong to
> which LVM volumes and which I can format to free some partitions out.
> Partition labelling support should be added to be practical. Or maybe I
> should not choose a potentially hard-to-remember partitioning scheme, but
> should use only one LVM per disk and not striped.

The default LV names that anaconda suggests/uses are stupid.  I say that 
because the LV names are meaningless and lead to exactly this problem.  I have 
*always* named my LVs to indicate which part of the filesystem is on 
them.  For example, when I create an LV for /var/log/, I run (use whatever 
size you like; it's just a placeholder in this example):

# lvcreate -L 256M -n varlog vg_name

Entries in /etc/fstab are extremely easy to read like this.
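For example, the matching /etc/fstab line might read (filesystem type and 
options here are just placeholders):

```
/dev/vg_name/varlog  /var/log  ext3  defaults  1 2
```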

> Hope I'm not choosing any side here. I'm just hoping the experience with
> LVM, especially when working at the partition level can be improved.

I don't think you are.

LVM is one of the coolest things there is.  Many people don't understand the 
basics of how & why LVM works, simply because they don't know where to get an 
education about it.  There is good documentation on how to use LVM's commands 
to get things done, but not much on how to really benefit from LVM, or how to 
organize and make decisions about using it at the system level.

There are only a small handful of the available LVM commands that are needed 
for regular stuff.  They all have common, consistent command-line switches and 
are very easy to use.
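For the record, the handful I mean is roughly this (my own shortlist, not an 
exhaustive one; names like "herold" are placeholders):

```shell
pvs; vgs; lvs                        # summarize PVs, VGs, and LVs
pvcreate /dev/sdc1                   # turn a partition into a PV
vgextend herold /dev/sdc1            # add the PV to a VG
lvcreate -L 1G -n scratch herold     # make a new LV
lvextend -L +2G /dev/herold/varlog   # grow an LV
vgrename oldname newname             # rename a VG (while offline)
```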
-- 
Lamont Peterson <lamont gurulabs com>
Senior Instructor
Guru Labs, L.C. [ http://www.GuruLabs.com/ ]

NOTE:  All messages from this email address should be digitally signed with my
       0xDC0DD409 GPG key. It is available on the pgp.mit.edu keyserver as
       well as other keyservers that sync with MIT's.

