I would like to set up RAID devices on top of SAS disks shared by two hosts via an external SAS cable.
One SAS disk is plugged into each host.
Each host has its own disk controller, meaning that each host can independently access both SAS disks (the local one + the remote one).
The SAS disks have independent physical I/O paths, which permits accessing the disks independently of a failure/reset of either host.
I envision using md to set up two RAID-1 devices: one for the Linux root FS and another for the other file systems.
I understand from the RAID HOWTO that it is quite easy with 2.4 to set up a root file
system on top of a RAID device.
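For concreteness, here is the kind of setup I have in mind, as a rough sketch: the device names (/dev/sda, /dev/sdb) and partition numbers below are only placeholders for my actual layout.

```shell
# Create two RAID-1 mirrors across the two shared SAS disks:
# md0 for the root filesystem, md1 for the remaining file systems.
# Device and partition names are placeholders.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Record the arrays so they can be assembled at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```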
I guess that both hosts in my configuration will try to boot from the same primary disk?
What if the external SAS cable is lost, meaning that the RAID-1 device is broken: will each host then try to boot from its own "local" root
file system? And if the RAID device is set up in autodetection mode, will each host declare the other disk as failed?
I understand from the LVM section quoted below that LVM is the convenient way to partition a RAID device into multiple
resizable partitions. However, I wonder whether the md resync latency is strongly dependent on the
size of the disk partition. In other words, what would be the impact on resync latency of setting up one single large RAID-1 device
instead of multiple small ones with different read/write patterns?
Architect GSM/UMTS Platform
Tel. : (33) 1 69 55 59 13 / ESN : 574 5913
Email: plaindav nortel com
7.3 Booting on RAID
There are several ways to set up a system that mounts its root filesystem on a RAID device. Some
distributions allow for RAID setup in the installation process, and this is by far the easiest way to get a nicely
set up RAID system.
Newer LILO distributions can handle RAID-1 devices, and thus the kernel can be loaded at boot-time from a
RAID device. LILO will correctly write boot-records on all disks in the array, to allow booting even if the
primary disk fails.
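A minimal lilo.conf fragment for such a setup might look like the following sketch. It assumes a LILO version new enough to support the raid-extra-boot option, and the array/kernel paths are placeholders:

```
# /etc/lilo.conf -- booting a kernel from a RAID-1 root (sketch)
boot=/dev/md0              # install the boot record on the array
raid-extra-boot=mbr-only   # also write boot records on each member disk
root=/dev/md0

image=/boot/vmlinuz
    label=linux
    read-only
```

With raid-extra-boot, LILO duplicates the boot record onto the individual disks of the mirror, so the machine can still boot if the primary disk fails.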
11.2 LVM on RAID
The solution to the partitioning problem is LVM, Logical Volume Management. LVM has been in the stable
Linux kernel series for a long time now - LVM2 in the 2.6 kernel series is a further improvement over the
older LVM support from the 2.4 kernel series. While LVM has traditionally scared some people away because
of its complexity, it really is something that an administrator could and should consider if he wishes to use
more than a few filesystems on a server.
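As a rough sketch of what LVM on top of an md device looks like (the volume group and logical volume names here are made up, and /dev/md1 stands in for whatever array holds the data):

```shell
# Use the RAID-1 array as an LVM physical volume.
pvcreate /dev/md1
vgcreate vg0 /dev/md1

# Carve out resizable logical volumes instead of fixed partitions.
lvcreate -L 5G -n home vg0
lvcreate -L 2G -n var  vg0

# Grow a volume later without repartitioning the array
# (the filesystem on it must then be resized as well).
lvextend -L +1G /dev/vg0/home
```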
We will not attempt to describe LVM setup in this HOWTO, as there already is a fine HOWTO for exactly
this purpose. A small example of a RAID + LVM setup will be presented though. Consider the df output
below, of such a system: