[linux-lvm] Misleading documentation (was: HDD Failure)

Fabien Jakimowicz fabien at jakimowicz.com
Wed Sep 20 11:27:23 UTC 2006


On Wed, 2006-09-20 at 03:22 +0000, Mark Krenz wrote:
>   Personally I like it when documentation is kept simple and uses simple
> examples.  There is nothing worse than when you are trying to learn
> something and it tells you how to instantiate a variable, and then
> it immediately goes on to show you how to make some complicated reference
> to it using some code.
> 
>   I agree with you though, it's probably a good idea to steer newcomers
> in the right direction on disk management, and a few notes about doing
> LVM on top of RAID being a good idea couldn't hurt.  This is especially
> so since I've heard three mentions of people using LVM on a server
> without doing RAID this week alone. :-/
We should add something to the FAQ page
( http://tldp.org/HOWTO/LVM-HOWTO/lvm2faq.html ), like "I've lost one of
my hard drives and I can't mount my LV, did I lose everything?" followed
by a quick explanation: LVM is NOT fault tolerant like RAID1/5; if you
lose a PV, you lose every LV which was (even partially) on it.
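
The entry could also show how to check which PVs a given LV sits on; a
minimal sketch, assuming LVM2 tools (the extra 'devices' column lists
the PV(s) backing each LV):

# lvs -o +devices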

Adding a recipe with RAID couldn't hurt either.

A. Setting up LVM over software RAID on four disks

For this recipe, the setup has four disks that will be put into two RAID
arrays, which will be used as PVs.
The main goal of this configuration is to avoid any data loss if one of
the hard drives fails.

A.1 RAID

A.1.1 Preparing the disks

You must partition your disks and set the partition type to Linux raid
autodetect (type FD), if your system supports it. I recommend making
only one partition per hard drive. You can use cfdisk to do it.

# cfdisk /dev/sda

If your drives are identical, you can save time using sfdisk:

# sfdisk -d /dev/sda | sfdisk /dev/sdb
# sfdisk -d /dev/sda | sfdisk /dev/sdc
# sfdisk -d /dev/sda | sfdisk /dev/sdd

This will partition sdb, sdc and sdd using sda's partition table scheme.
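
If you want to double-check that all four drives ended up with the same
layout, listing the partition tables is enough (fdisk -l with no
argument prints every disk it finds):

# fdisk -l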

A.1.2 Creating arrays

You can check whether your system can handle RAID1 by typing the following:

# cat /proc/mdstat
Personalities : [raid1]

If not (file not found, or no raid1 in the list), then load the raid1 module:

# modprobe raid1

You can now create the RAID arrays, assuming you have only one RAID
partition per drive:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

You should wait for the RAID arrays to be fully synchronized; check it
with:

# cat /proc/mdstat
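
Optionally, you can record the arrays in mdadm's configuration file so
they are assembled automatically at boot. A minimal sketch, assuming
your distribution reads /etc/mdadm.conf (Debian and friends use
/etc/mdadm/mdadm.conf instead):

# echo 'DEVICE /dev/sd[abcd]1' > /etc/mdadm.conf
# mdadm --detail --scan >> /etc/mdadm.conf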

A.2 LVM

A.2.1 Create Physical Volumes

Run pvcreate on each RAID array:

# pvcreate /dev/md0
# pvcreate /dev/md1

This creates a volume group descriptor area (VGDA) at the start of the
raid arrays.
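
You can verify the result with pvscan (or pvdisplay for the long
version); both /dev/md0 and /dev/md1 should show up as new physical
volumes not yet assigned to any VG:

# pvscan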

A.2.2 Setup a Volume Group

# vgcreate my_volume_group /dev/md0 /dev/md1

You should now see something like this:

# vgdisplay
  --- Volume group ---
  VG Name               my_volume_group
  System ID             
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  37
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               1.18 TB
  PE Size               4.00 MB
  Total PE              310014
  Alloc PE / Size       0 / 0 TB
  Free  PE / Size       310014 / 1.18 TB
  VG UUID               LI3k9v-MnIA-lfY6-kdAB-nmpW-adjX-A5yKiF

You should check that 'VG Size' matches your hard drive sizes (RAID1
halves the available space, so if you have four 300 GB hard drives, you
will end up with a ~600 GB VG).

A.2.3 Create Logical Volumes

You can now create some LVs on your VG:

# lvcreate -L10G -nmy_logical_volume my_volume_group
  Logical volume "my_logical_volume" created
# lvcreate -L42G -nmy_cool_lv my_volume_group
  Logical volume "my_cool_lv" created
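
A quick look at lvs and vgs (or lvdisplay/vgdisplay for the verbose
output) confirms the new volumes and how much free space is left in the
VG:

# lvs
# vgs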

A.2.4 Create the File System

Create an XFS file system on each logical volume:

# mkfs.xfs /dev/my_volume_group/my_logical_volume
meta-data=/dev/my_volume_group/my_logical_volume isize=256    agcount=16, agsize=163840 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=2560, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

# mkfs.xfs /dev/my_volume_group/my_cool_lv
meta-data=/dev/my_volume_group/my_cool_lv isize=256    agcount=16, agsize=688128 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=11010048, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=5376, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

A.2.5 Test File System

Mount the logical volumes and check that everything is fine:

# mkdir -p /mnt/{my_logical_volume,my_cool_lv}
# mount /dev/my_volume_group/my_logical_volume /mnt/my_logical_volume
# mount /dev/my_volume_group/my_cool_lv /mnt/my_cool_lv
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1             1.9G   78M  1.8G   5% /
/dev/mapper/my_volume_group-my_logical_volume
                       10G     0   10G   0% /mnt/my_logical_volume
/dev/mapper/my_volume_group-my_cool_lv
                       42G     0   42G   0% /mnt/my_cool_lv
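
If you want the file systems mounted at boot, the /etc/fstab entries
would look something like this (using the mount points created above,
adjust to taste):

/dev/my_volume_group/my_logical_volume /mnt/my_logical_volume xfs defaults 0 2
/dev/my_volume_group/my_cool_lv        /mnt/my_cool_lv        xfs defaults 0 2

Finally, to convince yourself that the setup really survives a disk
failure, you can simulate one once the arrays are fully synced (a
sketch only, do not do this on data you care about):

# mdadm /dev/md0 --fail /dev/sda1
# cat /proc/mdstat

The LVs stay mounted and usable while md0 runs degraded. Put the disk
back and let the array resync:

# mdadm /dev/md0 --remove /dev/sda1
# mdadm /dev/md0 --add /dev/sda1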

> 
>   I suppose it could also be said that people who are casually doing
> LVM on their systems using something like a GUI are most likely not
> going to be referencing the man pages or LVM documentation until after
> their system is setup, at which point it is probably too late to put the
> physical volumes on a RAID array.  So I think it's more the
> responsibility of the GUI/ncurses installer to alert you to be using
> RAID.
> 
> Mark
> 
> 
> On Tue, Sep 19, 2006 at 10:40:43PM GMT, Scott Lamb [slamb at slamb.org] said the following:
> > On Sep 18, 2006, at 12:37 PM, Mark Krenz wrote:
> > >  LVM != RAID
> > >
> > >  You should have been doing RAID if you wanted to be able to  
> > >handle the
> > >failure of one drive.
> > 
> > This is my biggest beef with LVM - why doesn't *any* of the  
> > documentation point this out? There are very few good reasons to use  
> > LVM without RAID, and "ignorance" certainly isn't among them. I don't  
> > see any mention of RAID or disk failures in the manual pages or in  
> > the HOWTO.
> > 
> > For example, the recipes chapter [1] of the HOWTO shows a non-trivial  
> > setup with four volume groups split across seven physical drives.  
> > There's no mention of RAID. This is a ridiculously bad idea - if  
> > *any* of those seven drives are lost, at least one volume group will  
> > fail. In some cases, more than one. This document should be showing  
> > best practices, and it's instead showing how to throw away your data.
> > 
> > The "lvcreate" manual page is pretty bad, too. It mentions the  
> > ability to tune stripe size, which on casual read, might suggest that  
> > it uses real RAID. Instead, I think this is just RAID-0.
> > 
> > [1] - http://tldp.org/HOWTO/LVM-HOWTO/recipeadddisk.html
> > 
-- 
Fabien Jakimowicz <fabien at jakimowicz.com>

