[rhelv6-list] Issue creating MD volumes on Native Multipath on Redhat 5.x
Amrish Parikh
amrish.parikh at gmail.com
Tue May 31 11:20:48 UTC 2011
Hi,
I am facing a very weird issue while creating MD volumes on native
multipath on Red Hat 5.x.
The issue is this: as soon as I create MD volumes with mdadm --create on
native multipath disks, those disks drop out of the multipath
configuration after the next reboot.
Below is a step-by-step walkthrough with sample output:
I have 3 internal disks, say sda, sdb and sdc, and 4 SAN disks presented
down two paths each: sde, sdf, sdg and sdh, with their corresponding second
paths sdi, sdj, sdk and sdl.
1. I configured multipath on the disks above, blacklisting the internal
disks sda and sdb. The output then looks like this:
Creating multipath.conf
=======================
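The multipath.conf itself was not pasted here; the blacklist described in
step 1 would look something like this minimal sketch (the exact devnode
patterns are an assumption on my part):

```text
# Hypothetical blacklist section of /etc/multipath.conf -- the real file
# was not included in the post; only sda and sdb are meant to be excluded.
blacklist {
        devnode "^sda$"
        devnode "^sdb$"
}
```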
[root@rh16 /]# multipath -v2
create: sdc_internaldisk (360019b90c965f5000eba30a82232eec8) DELL,PERC 5/i
[size=136G][features=0][hwhandler=0][n/a]
\_ round-robin 0 [prio=1][undef]
\_ 0:2:2:0 sdc 8:32 [undef][ready]
create: san_e_i (360060160a9c21200e61a012e5787e011) DGC,RAID 5
[size=9.0G][features=0][hwhandler=0][n/a]
\_ round-robin 0 [prio=2][undef]
\_ 1:0:1:0 sde 8:64 [undef][ready]
\_ 2:0:0:0 sdi 8:128 [undef][ready]
create: san_f_j (360060160a9c21200749a8c3a5787e011) DGC,RAID 5
[size=9.0G][features=0][hwhandler=0][n/a]
\_ round-robin 0 [prio=2][undef]
\_ 1:0:1:1 sdf 8:80 [undef][ready]
\_ 2:0:0:1 sdj 8:144 [undef][ready]
create: san_g_k (360060160a9c21200544657435787e011) DGC,RAID 5
[size=11G][features=0][hwhandler=0][n/a]
\_ round-robin 0 [prio=2][undef]
\_ 1:0:1:2 sdg 8:96 [undef][ready]
\_ 2:0:0:2 sdk 8:160 [undef][ready]
create: mpath4 (360060160a9c21200980d7b4b5787e011) DGC,RAID 5
[size=11G][features=0][hwhandler=0][n/a]
\_ round-robin 0 [prio=2][undef]
\_ 1:0:1:3 sdh 8:112 [undef][ready]
\_ 2:0:0:3 sdl 8:176 [undef][ready]
[root@rh16 /]# multipath -ll
san_g_k (360060160a9c21200544657435787e011) dm-3 DGC,RAID 5
[size=11G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:2 sdg 8:96 [active][ready]
\_ 2:0:0:2 sdk 8:160 [active][ready]
san_f_j (360060160a9c21200749a8c3a5787e011) dm-2 DGC,RAID 5
[size=9.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:1 sdf 8:80 [active][ready]
\_ 2:0:0:1 sdj 8:144 [active][ready]
san_e_i (360060160a9c21200e61a012e5787e011) dm-1 DGC,RAID 5
[size=9.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:0 sde 8:64 [active][ready]
\_ 2:0:0:0 sdi 8:128 [active][ready]
sdc_internaldisk (360019b90c965f5000eba30a82232eec8) dm-0 DELL,PERC 5/i
[size=136G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:2:2:0 sdc 8:32 [active][ready]
mpath4 (360060160a9c21200980d7b4b5787e011) dm-4 DGC,RAID 5
[size=11G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:3 sdh 8:112 [active][ready]
\_ 2:0:0:3 sdl 8:176 [active][ready]
[root@rh16 /]#
2. I reboot the system; afterwards the output looks like this (this
capture is from another test box, rh20):
[root@rh20 /]# multipath -ll
sde: checker msg is "emc_clariion_checker: Logical Unit is unbound or LUNZ"
sdj: checker msg is "emc_clariion_checker: Logical Unit is unbound or LUNZ"
sdd_internaldisk (360024e8080572c00154c076313d985b4) dm-1 DELL,PERC 6/i
[size=136G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:2:3:0 sdd 8:48 [active][ready]
sdc_internaldisk (360024e8080572c00154c0757132ca9bb) dm-0 DELL,PERC 6/i
[size=136G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:2:2:0 sdc 8:32 [active][ready]
san_i_n (36006016059801a005b293544eb82e011) dm-5 DGC,RAID 10
[size=10G][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=2][active]
\_ 3:0:1:3 sdi 8:128 [active][ready]
\_ 4:0:1:3 sdn 8:208 [active][ready]
san_h_m (36006016059801a005a293544eb82e011) dm-4 DGC,RAID 10
[size=10G][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=2][active]
\_ 3:0:1:2 sdh 8:112 [active][ready]
\_ 4:0:1:2 sdm 8:192 [active][ready]
san_g_l (36006016059801a0059293544eb82e011) dm-3 DGC,RAID 10
[size=10G][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=2][active]
\_ 3:0:1:1 sdg 8:96 [active][ready]
\_ 4:0:1:1 sdl 8:176 [active][ready]
san_f_k (36006016059801a0058293544eb82e011) dm-2 DGC,RAID 10
[size=10G][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=2][active]
\_ 3:0:1:0 sdf 8:80 [active][ready]
\_ 4:0:1:0 sdk 8:160 [active][ready]
[root@rh20 /]#
*NO ISSUES SO FAR*
3. Now I create partitions for the MD volumes on san_e_i, san_f_j and san_g_k:
[root@rh16 /]# fdisk /dev/mapper/
control san_e_i san_g_k
mpath4 san_f_j sdc_internaldisk
[root@rh16 /]# fdisk /dev/mapper/san_e_i
The number of cylinders for this disk is set to 1174.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/mapper/san_e_i: 9663 MB, 9663676416 bytes
255 heads, 63 sectors/track, 1174 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1174, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1174, default 1174): +3G
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid
argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@rh16 /]# fdisk /dev/mapper/san_f_j
The number of cylinders for this disk is set to 1174.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/mapper/san_f_j: 9663 MB, 9663676416 bytes
255 heads, 63 sectors/track, 1174 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4):
Value out of range.
Partition number (1-4): 1
First cylinder (1-1174, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1174, default 1174): +3G
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid
argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@rh16 /]# fdisk /dev/mapper/san_
san_e_i san_f_j san_g_k
[root@rh16 /]# fdisk /dev/mapper/san_g_k
The number of cylinders for this disk is set to 1435.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/mapper/san_g_k: 11.8 GB, 11811160064 bytes
255 heads, 63 sectors/track, 1435 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1435, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1435, default 1435): +3G
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid
argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
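The "error 22" warnings above mean the kernel kept the old partition
table, so on these multipath maps the new p1 partitions only appear after
a reboot. As a dry-run sketch, kpartx could register them immediately
instead (the map names are from this session; having kpartx installed on
the box is an assumption):

```shell
# Dry-run sketch (assumes kpartx is available, as on RHEL 5 multipath
# setups). After rewriting a partition table on a multipath map, kpartx
# can create the new partition device-maps without a reboot.
# RUN=echo keeps this a dry run; set RUN= to execute for real (as root).
RUN=echo
for map in san_e_i san_f_j san_g_k; do
    $RUN kpartx -a "/dev/mapper/$map"   # would create /dev/mapper/${map}p1
done
```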
[root@rh16 /]# fdisk -l
Disk /dev/sda: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 255 2048256 83 Linux
/dev/sda2 256 4079 30716280 83 Linux
/dev/sda3 4080 4716 5116702+ 82 Linux swap / Solaris
Disk /dev/sdb: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdc: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Disk /dev/sde: 9663 MB, 9663676416 bytes
64 heads, 32 sectors/track, 9216 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdf: 9663 MB, 9663676416 bytes
64 heads, 32 sectors/track, 9216 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdg: 11.8 GB, 11811160064 bytes
64 heads, 32 sectors/track, 11264 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdh: 11.8 GB, 11811160064 bytes
64 heads, 32 sectors/track, 11264 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdi: 9663 MB, 9663676416 bytes
64 heads, 32 sectors/track, 9216 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdj: 9663 MB, 9663676416 bytes
64 heads, 32 sectors/track, 9216 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdk: 11.8 GB, 11811160064 bytes
64 heads, 32 sectors/track, 11264 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdl: 11.8 GB, 11811160064 bytes
64 heads, 32 sectors/track, 11264 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
Disk /dev/dm-0: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Disk /dev/dm-1: 9663 MB, 9663676416 bytes
255 heads, 63 sectors/track, 1174 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/dm-1p1 1 366 2939863+ fd Linux raid
autodetect
Disk /dev/dm-2: 9663 MB, 9663676416 bytes
255 heads, 63 sectors/track, 1174 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/dm-2p1 1 366 2939863+ fd Linux raid
autodetect
Disk /dev/dm-3: 11.8 GB, 11811160064 bytes
255 heads, 63 sectors/track, 1435 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/dm-3p1 1 366 2939863+ fd Linux raid
autodetect
Disk /dev/dm-4: 11.8 GB, 11811160064 bytes
255 heads, 63 sectors/track, 1435 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
[root@rh16 /]#
Checked the multipath -ll output here as well: *NO ISSUES SO FAR*
[root@rh16 /]# multipath -ll
san_g_k (360060160a9c21200544657435787e011) dm-3 DGC,RAID 5
[size=11G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:2 sdg 8:96 [active][ready]
\_ 2:0:0:2 sdk 8:160 [active][ready]
san_f_j (360060160a9c21200749a8c3a5787e011) dm-2 DGC,RAID 5
[size=9.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:1 sdf 8:80 [active][ready]
\_ 2:0:0:1 sdj 8:144 [active][ready]
san_e_i (360060160a9c21200e61a012e5787e011) dm-1 DGC,RAID 5
[size=9.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:0 sde 8:64 [active][ready]
\_ 2:0:0:0 sdi 8:128 [active][ready]
sdc_internaldisk (360019b90c965f5000eba30a82232eec8) dm-0 DELL,PERC 5/i
[size=136G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:2:2:0 sdc 8:32 [active][ready]
mpath4 (360060160a9c21200980d7b4b5787e011) dm-4 DGC,RAID 5
[size=11G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:3 sdh 8:112 [active][ready]
\_ 2:0:0:3 sdl 8:176 [active][ready]
[root@rh16 /]# dmsetup -ls
dmsetup: invalid option -- l
Couldn't process command line.
[root@rh16 /]# dmsetup ls
san_g_k (253, 3)
san_f_j (253, 2)
san_e_i (253, 1)
sdc_internaldisk (253, 0)
mpath4 (253, 4)
[root@rh16 /]#
4. I reboot the system; multipath -ll and dmsetup ls output afterwards:
[root@rh16 /]# multipath -ll
san_g_k (360060160a9c21200544657435787e011) dm-3 DGC,RAID 5
[size=11G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:2 sdg 8:96 [active][ready]
\_ 2:0:0:2 sdk 8:160 [active][ready]
san_f_j (360060160a9c21200749a8c3a5787e011) dm-2 DGC,RAID 5
[size=9.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:1 sdf 8:80 [active][ready]
\_ 2:0:0:1 sdj 8:144 [active][ready]
san_e_i (360060160a9c21200e61a012e5787e011) dm-1 DGC,RAID 5
[size=9.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:0 sde 8:64 [active][ready]
\_ 2:0:0:0 sdi 8:128 [active][ready]
sdc_internaldisk (360019b90c965f5000eba30a82232eec8) dm-0 DELL,PERC 5/i
[size=136G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:2:2:0 sdc 8:32 [active][ready]
mpath4 (360060160a9c21200980d7b4b5787e011) dm-4 DGC,RAID 5
[size=11G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:3 sdh 8:112 [active][ready]
\_ 2:0:0:3 sdl 8:176 [active][ready]
[root@rh16 /]# dmsetup ls
san_g_k (253, 3)
san_f_j (253, 2)
san_e_i (253, 1)
sdc_internaldisk (253, 0)
san_e_ip1 (253, 7)
san_f_jp1 (253, 6)
mpath4 (253, 4)
san_g_kp1 (253, 5)
[root@rh16 /]#
*NO ISSUES SO FAR*
5. Now I create an MD volume on the partitions created above on the native
multipath maps:
[root@rh16 /]# mdadm --create /dev/md5 --verbose --level=5 --raid-devices=3
/dev/mapper/san_e_i
san_e_i san_e_ip1
[root@rh16 /]# mdadm --create /dev/md5 --verbose --level=5 --raid-devices=3
/dev/mapper/san_e_ip1 /dev/mapper/san_g_kp1
/dev/mapper/san_
san_e_i san_e_ip1 san_f_j san_f_jp1 san_g_k san_g_kp1
[root@rh16 /]# mdadm --create /dev/md5 --verbose --level=5 --raid-devices=3
/dev/mapper/san_e_ip1 /dev/mapper/san_g_kp1
/dev/mapper/san_f_jp1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: size set to 2939776K
mdadm: array /dev/md5 started.
[root@rh16 /]# mdadm --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Tue May 31 05:46:52 2011
Raid Level : raid5
Array Size : 5879552 (5.61 GiB 6.02 GB)
Used Dev Size : 2939776 (2.80 GiB 3.01 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 5
Persistence : Superblock is persistent
Update Time : Tue May 31 05:46:52 2011
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 1% complete
UUID : 232014c0:7996e8d7:e1b2d23a:059eda6f
Events : 0.1
Number Major Minor RaidDevice State
0 253 7 0 active sync /dev/dm-7
1 253 5 1 active sync /dev/dm-5
3 253 6 2 spare rebuilding /dev/dm-6
[root@rh16 /]#
[root@rh16 /]# mdadm --detail /dev/md5
/dev/md5:
Version : 0.90
Creation Time : Tue May 31 05:46:52 2011
Raid Level : raid5
Array Size : 5879552 (5.61 GiB 6.02 GB)
Used Dev Size : 2939776 (2.80 GiB 3.01 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 5
Persistence : Superblock is persistent
Update Time : Tue May 31 05:56:01 2011
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 232014c0:7996e8d7:e1b2d23a:059eda6f
Events : 0.2
Number Major Minor RaidDevice State
0 253 7 0 active sync /dev/dm-7
1 253 5 1 active sync /dev/dm-5
2 253 6 2 active sync /dev/dm-6
[root@rh16 /]# mkdir /r5_mp
[root@rh16 /]# mkfs -t ext3 /dev/md5
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
735840 inodes, 1469888 blocks
73494 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1505755136
45 block groups
32768 blocks per group, 32768 fragments per group
16352 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@rh16 /]#
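As an aside, the array identity printed by mdadm --detail above could be
pinned in /etc/mdadm.conf. This is purely a hypothetical sketch, not
something from this session; the DEVICE line restricting scans to
/dev/mapper is my assumption:

```text
# Hypothetical /etc/mdadm.conf sketch -- not part of the original session.
# Restrict scanning to the device-mapper nodes so the array is never
# assembled from the underlying single-path sd devices.
DEVICE /dev/mapper/*
ARRAY /dev/md5 UUID=232014c0:7996e8d7:e1b2d23a:059eda6f
```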
Now, checking the multipath -ll output again:
[root@rh16 /]# multipath -ll
san_g_k (360060160a9c21200544657435787e011) dm-3 DGC,RAID 5
[size=11G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:2 sdg 8:96 [active][ready]
\_ 2:0:0:2 sdk 8:160 [active][ready]
san_f_j (360060160a9c21200749a8c3a5787e011) dm-2 DGC,RAID 5
[size=9.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:1 sdf 8:80 [active][ready]
\_ 2:0:0:1 sdj 8:144 [active][ready]
san_e_i (360060160a9c21200e61a012e5787e011) dm-1 DGC,RAID 5
[size=9.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:0 sde 8:64 [active][ready]
\_ 2:0:0:0 sdi 8:128 [active][ready]
sdc_internaldisk (360019b90c965f5000eba30a82232eec8) dm-0 DELL,PERC 5/i
[size=136G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:2:2:0 sdc 8:32 [active][ready]
mpath4 (360060160a9c21200980d7b4b5787e011) dm-4 DGC,RAID 5
[size=11G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:3 sdh 8:112 [active][ready]
\_ 2:0:0:3 sdl 8:176 [active][ready]
[root@rh16 /]#
*NOTE: the MD volume has not been mounted at any point.*
6. I reboot the system and check the multipath -ll output:
[root@rh16 /]# multipath -ll
sdc_internaldisk (360019b90c965f5000eba30a82232eec8) dm-0 DELL,PERC 5/i
[size=136G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 0:2:2:0 sdc 8:32 [active][ready]
mpath4 (360060160a9c21200980d7b4b5787e011) dm-1 DGC,RAID 5
[size=11G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:3 sdh 8:112 [active][ready]
\_ 2:0:0:3 sdl 8:176 [active][ready]
[root@rh16 /]#
You can see from the output above that all of the disks on which the MD
volume was created have dropped out of the multipath configuration, even
though the multipath.conf file was never touched.
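Incidentally, the vanished paths are easier to spot by saving the
multipath -ll output before and after a reboot and extracting the sd
devices; a small sketch (the sample lines are copied from the outputs
above, and in practice you would capture the full output with
`multipath -ll > before.txt`):

```shell
# Sketch: pull the path devices (sdX) out of saved `multipath -ll` output
# so captures from before and after a reboot can be diffed.
cat > before.txt <<'EOF'
san_e_i (360060160a9c21200e61a012e5787e011) dm-1 DGC,RAID 5
\_ round-robin 0 [prio=2][active]
 \_ 1:0:1:0 sde 8:64  [active][ready]
 \_ 2:0:0:0 sdi 8:128 [active][ready]
EOF
# Path lines carry a H:C:T:L address in the second field; print the third.
awk '$2 ~ /^[0-9]+:[0-9]+:[0-9]+:[0-9]+$/ { print $3 }' before.txt
# prints:
# sde
# sdi
```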
Any suggestions/inputs would be a great help.
Regards
Amrish