[Linux-cluster] Local Partitions detected as Multipath Device
Elvir Kuric
ekuric at redhat.com
Fri Oct 25 14:50:59 UTC 2013
On 10/25/2013 07:29 AM, Zama Ques wrote:
> As suggested by Elvir Kuric, creating a new ramdisk (initramfs) after
> blacklisting the local disk in multipath.conf resolved the issue.
>
> ====
> $ df
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/sda8 43112964 19708732 21214204 49% /
> tmpfs 32978736 25836 32952900 1% /dev/shm
> /dev/sda1 198337 66006 122091 36% /boot
> /dev/sda6 10079084 840160 8726924 9% /opt
> /dev/mapper/mpathbp1 51606124 184136 48800552 1% /webdata
> /dev/mapper/mpathcp1 209690016 3414728 206275288 2% /sharedweb
> ===
>
> The other option of setting the parameter "find_multipaths yes" on
> its own did not help to resolve the issue.
>
> Thanks all for the replies and suggestions.
>
> Thanks
> Zaman
>
>
> On Tuesday, 22 October 2013 4:58 PM, Elvir Kuric <ekuric at redhat.com>
> wrote:
> On 10/22/2013 12:17 PM, Zama Ques wrote:
>> My local partitions are detected as multipath devices:
>>
>> $ multipath -l
>>
>> mpathb (360014380125d90420000a000003e0000) dm-9 HP,HSV450
>> size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
>> `-+- policy='round-robin 0' prio=0 status=active
>>   `- 3:0:0:1 sdb 8:16 active undef running
>> mpatha (3600508b1001c02143bc59c6862d97353) dm-0 HP,LOGICAL VOLUME
>> size=137G features='1 queue_if_no_path' hwhandler='0' wp=rw
>> `-+- policy='round-robin 0' prio=0 status=active
>>   `- 0:0:0:1 sda 8:0 active undef running
>> I added the following lines to multipath.conf so that local
>> partitions are not treated as multipath devices:
>>
>> blacklist {
>>     wwid 3600508b1001c02143bc59c6862d97353
>> }
>> I executed the following commands after that:
>>
>> # service multipathd reload
>>
>> # multipath -F
>> Oct 22 12:10:42 | mpathb: map in use
>> Oct 22 12:10:42 | mpatha: map in use
>> So I tried rebooting the server, but no luck. Any clues to resolve
>> the issue would be highly appreciated.
>>
>> Thanks
>> Zaman
>>
>>
> Hi,
>
> Which OS (and version) is in use? "service multipathd reload"
> suggests it is something RHEL-based (not 100% sure).
>
> Check the /etc/multipath/* files (if RHEL 6) and see whether the WWID
> from above is recorded there; also check the initramfs (RHEL 6) or
> initrd (RHEL 5) file in /boot.
>
> Do the following:
>
> # mkdir /tmp/initrdtest
> # cp /boot/initramfs-$$$$.img ( or initrd ... ) /tmp/initrdtest
>
> unpack it:
>
> # cd /tmp/initrdtest
>
> # zcat initramfs-$$$$.img | cpio -i
>
> and check what is recorded there for multipath devices.
>
> In short, I think you will need to ensure there are no records for
> the devices you want to blacklist in /etc/multipath/* ( RHEL 6 ) or
> /var/lib/multipath/* ( RHEL 5 ), and then rebuild the initramfs
> ( RHEL 6 ) or initrd ( RHEL 5 ) once those records are gone.
> Rebuilding the initrd/initramfs will pick up values from
> /etc/multipath/* ( RHEL 6 ) or /var/lib/multipath/* ( RHEL 5 ).
>
> To rebuild the initrd/initramfs, check the docs out there, but in
> short the process is ( after you remove the WWID records from the
> wwids and bindings files - see above ):
>
>
> RHEL 5:
>
> # cd /boot
> # cp initrd-$(uname -r).img initrd-$(uname -r).img.backup [ make backup ]
> # mkinitrd -v -f initrd-$(uname -r).img
> # reboot ( to boot the new initrd )
>
> RHEL 6:
>
> same as above with small corrections ;)
>
>
> # cd /boot
> # cp initramfs-$(uname -r).img initramfs-$(uname -r).img.backup [ make
> backup ]
> # dracut -v -f initramfs-$(uname -r).img
> # reboot ( to boot to new initramfs )
>
>
> Good reading:
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/DM_Multipath/config_file_blacklist.html
>
>
> hope this helps
>
> Kind regards,
>
> --
> Elvir Kuric, Senior Technical Support Engineer / Red Hat / GSS EMEA /
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>
>
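For reference, a sketch of how the two fixes discussed above would look together in /etc/multipath.conf. This is not from the original messages; it assumes RHEL 6-era multipath syntax, and the WWID shown is the local disk's WWID quoted in the thread:

```
# /etc/multipath.conf -- sketch, not a complete configuration
defaults {
    # only build multipath maps for devices actually seen on more
    # than one path (not sufficient by itself in this thread)
    find_multipaths yes
}

blacklist {
    # WWID of the local HP LOGICAL VOLUME (sda); replace with your
    # own local disk's WWID as shown by `multipath -l`
    wwid 3600508b1001c02143bc59c6862d97353
}
```

After editing, reload multipathd and flush the stale maps; if the maps are reported as in use, clean the stored WWID records and rebuild the initramfs as described in the quoted reply.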
Cool, thank you for the feedback. If you are a Red Hat customer with a
valid contract, make use of your subscription: the next time you have a
problem with RHEL, open a case via the Red Hat customer portal at
https://access.redhat.com
Kind regards,
Elvir Kuric
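The record-cleanup step from the reply above (ensuring the blacklisted WWID is gone from the wwids/bindings records before rebuilding) can be sketched as a small shell script. This is a sketch, not the exact procedure from the thread: it operates on a temporary copy with sample contents so it is safe to run anywhere; on a real system you would point WWIDS_FILE at /etc/multipath/wwids (RHEL 6) or the bindings file under /var/lib/multipath (RHEL 5), then rebuild the initramfs/initrd as shown in the quoted steps.

```shell
#!/bin/sh
# Sketch: remove a blacklisted WWID from a wwids-style file, keeping a
# backup. Uses a temporary copy with sample contents; point WWIDS_FILE
# at the real file on an actual system.
WWID="3600508b1001c02143bc59c6862d97353"   # local disk WWID from the thread

tmpdir=$(mktemp -d)
WWIDS_FILE="$tmpdir/wwids"

# Sample contents in the /etc/multipath/wwids format
cat > "$WWIDS_FILE" <<EOF
# Multipath wwids, Version : 1.0
/3600508b1001c02143bc59c6862d97353/
/360014380125d90420000a000003e0000/
EOF

cp "$WWIDS_FILE" "$WWIDS_FILE.backup"                # keep a backup first
grep -v "$WWID" "$WWIDS_FILE.backup" > "$WWIDS_FILE" # drop the record

remaining=$(cat "$WWIDS_FILE")
rm -rf "$tmpdir"
echo "$remaining"
```

Once the real file is cleaned, rebuilding with `dracut -v -f initramfs-$(uname -r).img` (RHEL 6) or `mkinitrd -v -f initrd-$(uname -r).img` (RHEL 5) picks up the change, exactly as in the quoted steps.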