
RE: [dm-devel] Problem using dm-multipath on first of 2 LUNs on system



I am not using LVM on the system:

[root@svwdcvispay01 ~]# vgdisplay
  No volume groups found
[root@svwdcvispay01 ~]#

The output of the 2 commands you requested:

[root@svwdcvispay01 ~]# dmsetup ls
mpath1  (253, 0)
mpath1p5        (253, 4)
mpath1p3        (253, 3)
mpath1p2        (253, 2)
mpath1p1        (253, 1)
[root@svwdcvispay01 ~]# dmsetup table
mpath1: 0 104857600 multipath 1 queue_if_no_path 0 2 1 round-robin 0 1 1 8:48 1000 round-robin 0 1 1 8:16 1000
mpath1p5: 0 55697292 linear 253:0 49158963
mpath1p3: 0 8193150 linear 253:0 40965750
mpath1p2: 0 20482875 linear 253:0 20482875
mpath1p1: 0 20482812 linear 253:0 63
[root@svwdcvispay01 ~]#
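For what it's worth, the sector counts in that table look internally consistent; a quick Python sketch (values copied from the dmsetup output above, purely illustrative) decodes the linear targets:

```python
# Decode the "dmsetup table" linear targets shown above. Each partition
# line reads "<name>: <start> <length> linear <origin> <origin-offset>",
# and all values are in 512-byte sectors.
SECTOR = 512

# (origin offset, length) pairs copied from the output above;
# mpath1 itself is the whole 104857600-sector multipath map.
table = {
    "mpath1":   (0, 104857600),
    "mpath1p1": (63, 20482812),
    "mpath1p2": (20482875, 20482875),
    "mpath1p3": (40965750, 8193150),
    "mpath1p5": (49158963, 55697292),
}

def gib(sectors):
    """Convert a sector count to GiB."""
    return sectors * SECTOR / 2**30

for name, (offset, length) in table.items():
    print(f"{name}: offset {offset}, {gib(length):.2f} GiB")

# The map is exactly 50 GiB, matching multipath's "[size=50 GB]",
# and every partition fits inside the map.
assert gib(104857600) == 50.0
assert all(off + length <= 104857600
           for name, (off, length) in table.items() if name != "mpath1")
```

The partitions also lie end-to-end inside the 50 GiB map, which suggests the mpath1 side of things is healthy.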

Unfortunately, the output above references only the mpath1 LUN and makes
no mention of the first LUN (mpath0).  I hope this helps.
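For reference, the alias entry I mention in my original message below, written out as a full /etc/multipath.conf stanza (the "os" alias is just a name I chose):

```
multipaths {
        multipath {
                wwid 36005076801810085c800000000000194
                alias os
        }
}
```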

Joe Little
littlej ae com




-----Original Message-----
From: dm-devel-bounces redhat com [mailto:dm-devel-bounces redhat com]
On Behalf Of Chandra Seetharaman
Sent: Tuesday, January 29, 2008 1:57 PM
To: device-mapper development
Subject: RE: [dm-devel] Problem using dm-multipath on first of 2 LUNs on
system

Hi,

Do you use LVM on this machine? If yes, that would clarify why dm-0
through dm-5 are used.

Can you send the output of "dmsetup ls" and "dmsetup table"?

regards,

chandra
On Tue, 2008-01-29 at 12:17 -0500, LittleJ AE com wrote:
> I am not specifically blacklisting the /dev/dm-X devices.  The
> 5 /dev/dm-X devices on my system correspond to the 5 device special
> files in /dev/mapper on the second LUN that dm-multipath is currently
> managing.
> 
>  
> 
> [root@svwdcvispay01 ~]# cd /dev/mapper
> [root@svwdcvispay01 mapper]# ls -la
> total 0
> drwxr-xr-x  2 root root     160 Jan 29 11:47 .
> drwxr-xr-x  9 root root    4900 Jan 29 11:44 ..
> crw-------  1 root root  10, 63 Jan 29 11:44 control
> brw-rw----  1 root disk 253,  0 Jan 29 11:44 mpath1
> brw-rw----  1 root disk 253,  1 Jan 29 11:44 mpath1p1
> brw-rw----  1 root disk 253,  2 Jan 29 11:44 mpath1p2
> brw-rw----  1 root disk 253,  3 Jan 29 11:44 mpath1p3
> brw-rw----  1 root disk 253,  4 Jan 29 11:44 mpath1p5
> [root@svwdcvispay01 mapper]# ls -la /dev/dm*
> brw-r-----  1 root root 253, 0 Jan 29 11:44 /dev/dm-0
> brw-r-----  1 root root 253, 1 Jan 29 11:44 /dev/dm-1
> brw-r-----  1 root root 253, 2 Jan 29 11:44 /dev/dm-2
> brw-r-----  1 root root 253, 3 Jan 29 11:44 /dev/dm-3
> brw-r-----  1 root root 253, 4 Jan 29 11:44 /dev/dm-4
> [root@svwdcvispay01 mapper]#
> 
> I believe that dm-multipath is internally blacklisting those devices.
> Notice that if I execute `dmsetup remove_all`, which removes all
> the /dev/mapper/mpath1* files, and then recreate them, dm-multipath
> creates 5 new /dev/dm-* files:
> 
>  
> 
> [root@svwdcvispay01 mapper]# dmsetup remove_all
> [root@svwdcvispay01 mapper]# ls -la
> total 0
> drwxr-xr-x  2 root root     60 Jan 29 12:09 .
> drwxr-xr-x  9 root root   4900 Jan 29 11:44 ..
> crw-------  1 root root 10, 63 Jan 29 11:44 control
> 
> [root@svwdcvispay01 mapper]# ls -la /dev/dm*
> brw-r-----  1 root root 253, 0 Jan 29 11:44 /dev/dm-0
> brw-r-----  1 root root 253, 1 Jan 29 11:44 /dev/dm-1
> brw-r-----  1 root root 253, 2 Jan 29 11:44 /dev/dm-2
> brw-r-----  1 root root 253, 3 Jan 29 11:44 /dev/dm-3
> brw-r-----  1 root root 253, 4 Jan 29 11:44 /dev/dm-4
> 
> [root@svwdcvispay01 mapper]# multipath
> create: mpath1 (36005076801810085c8000000000001a2)
> [size=50 GB][features="1 queue_if_no_path"][hwhandler="0"]
> \_ round-robin 0 [prio=50]
>  \_ 1:0:1:1 sdd 8:48 [ready]
> \_ round-robin 0 [prio=10]
>  \_ 1:0:0:1 sdb 8:16 [ready]
> 
> [root@svwdcvispay01 mapper]# ls -la
> total 0
> drwxr-xr-x  2 root root     160 Jan 29 12:09 .
> drwxr-xr-x  9 root root    5000 Jan 29 12:09 ..
> crw-------  1 root root  10, 63 Jan 29 11:44 control
> brw-rw----  1 root disk 253,  5 Jan 29 12:09 mpath1
> brw-rw----  1 root disk 253,  6 Jan 29 12:09 mpath1p1
> brw-rw----  1 root disk 253,  7 Jan 29 12:09 mpath1p2
> brw-rw----  1 root disk 253,  8 Jan 29 12:09 mpath1p3
> brw-rw----  1 root disk 253,  9 Jan 29 12:09 mpath1p5
> 
> [root@svwdcvispay01 mapper]# ls -la /dev/dm*
> brw-r-----  1 root root 253, 0 Jan 29 11:44 /dev/dm-0
> brw-r-----  1 root root 253, 1 Jan 29 11:44 /dev/dm-1
> brw-r-----  1 root root 253, 2 Jan 29 11:44 /dev/dm-2
> brw-r-----  1 root root 253, 3 Jan 29 11:44 /dev/dm-3
> brw-r-----  1 root root 253, 4 Jan 29 11:44 /dev/dm-4
> brw-r-----  1 root root 253, 5 Jan 29 12:09 /dev/dm-5
> brw-r-----  1 root root 253, 6 Jan 29 12:09 /dev/dm-6
> brw-r-----  1 root root 253, 7 Jan 29 12:09 /dev/dm-7
> brw-r-----  1 root root 253, 8 Jan 29 12:09 /dev/dm-8
> brw-r-----  1 root root 253, 9 Jan 29 12:09 /dev/dm-9
> [root@svwdcvispay01 mapper]#
> 
>  
> 
> Notice also that the blacklist has been automatically extended to
> cover the 5 new /dev/dm-X devices:
> 
>  
> 
> [root@svwdcvispay01 mapper]# multipath -v3 | more
> load path identifiers cache
> #
> # all paths in cache :
> #
> 36005076801810085c800000000000194  1:0:0:0 sda 8:0   IBM     /2145
> 36005076801810085c8000000000001a2  1:0:0:1 sdb 8:16 10 [active] IBM     /2145
> 36005076801810085c800000000000194  1:0:1:0 sdc 8:32   IBM     /2145
> 36005076801810085c8000000000001a2  1:0:1:1 sdd 8:48 50 [active] IBM     /2145
> dm-0 blacklisted
> dm-1 blacklisted
> dm-2 blacklisted
> dm-3 blacklisted
> dm-4 blacklisted
> dm-5 blacklisted
> dm-6 blacklisted
> dm-7 blacklisted
> dm-8 blacklisted
> dm-9 blacklisted
> ...
> 
>  
> 
> When I reboot the server, dm-5 through dm-9 are gone and it is again
> using dm-0 through dm-4.
> 
>  
> 
> Joe Little
> 
> littlej ae com
> 
> ______________________________________________________________________
> 
> From: dm-devel-bounces redhat com [mailto:dm-devel-bounces redhat com]
> On Behalf Of Gerald Nowitzky
> Sent: Tuesday, January 29, 2008 11:59 AM
> To: device-mapper development
> Subject: Re: [dm-devel] Problem using dm-multipath on first of 2 LUNs
> on system
> 
> 
>  
> 
> Try removing dm-0 through dm-4 from the blacklist. The devices
> in /dev/mapper are links to /dev/dm-X.
> 
> 
> (Gerald)
> 
> 
>         ----- Original Message -----
>         From: LittleJ AE com
>         To: dm-devel redhat com
>         Sent: Tuesday, January 29, 2008 4:42 PM
>         Subject: [dm-devel] Problem using dm-multipath on first of 2
>         LUNs on system
> 
>         I am attempting to implement dm-multipath on a RHEL4U6 server.
>         I have been successful in enabling it on the second of the 2
>         LUNs on the server, but for whatever reason it will not create
>         the device special files in /dev/mapper for the first LUN.
>         `multipath -d` shows that it identifies the first LUN and
>         wants to create the mpath0 files:
>         
>          
>         
>         [root@svwdcvispay01 etc]# multipath -d
>         create: mpath0 (36005076801810085c800000000000194)
>         [size=12 GB][features="1 queue_if_no_path"][hwhandler="0"]
>         \_ round-robin 0 [prio=50]
>          \_ 1:0:0:0 sda 8:0  [ready]
>         \_ round-robin 0 [prio=10]
>          \_ 1:0:1:0 sdc 8:32 [ready]
> 
>         switchpg: mpath1 (36005076801810085c8000000000001a2)
>         [size=50 GB][features="1 queue_if_no_path"][hwhandler="0"]
>         \_ round-robin 0 [prio=50]
>          \_ 1:0:1:1 sdd 8:48 [active][ready]
>         \_ round-robin 0 [prio=10]
>          \_ 1:0:0:1 sdb 8:16 [active][ready]
>         [root@svwdcvispay01 etc]#
>         
>          
>         
>          
>         
>         However `multipath -v3` generates a "set ACT_CREATE: map does
>         not exists" error in the mpath0 section:
>         
>          
>         
>         [root@svwdcvispay01 etc]# multipath -v3
>         load path identifiers cache
>         #
>         # all paths in cache :
>         #
>         36005076801810085c800000000000194  1:0:0:0 sda 8:0 IBM     /2145
>         36005076801810085c8000000000001a2  1:0:0:1 sdb 8:16 10 [active] IBM     /2145
>         36005076801810085c800000000000194  1:0:1:0 sdc 8:32 IBM     /2145
>         36005076801810085c8000000000001a2  1:0:1:1 sdd 8:48 50 [active] IBM     /2145
>         dm-0 blacklisted
>         dm-1 blacklisted
>         dm-2 blacklisted
>         dm-3 blacklisted
>         dm-4 blacklisted
>         md0 blacklisted
>         ram0 blacklisted
>         ram10 blacklisted
>         ram11 blacklisted
>         ram12 blacklisted
>         ram13 blacklisted
>         ram14 blacklisted
>         ram15 blacklisted
>         ram1 blacklisted
>         ram2 blacklisted
>         ram3 blacklisted
>         ram4 blacklisted
>         ram5 blacklisted
>         ram6 blacklisted
>         ram7 blacklisted
>         ram8 blacklisted
>         ram9 blacklisted
>         
>         ===== path info sda (mask 0x1f) =====
>         bus = 1
>         dev_t = 8:0
>         size = 25165824
>         vendor = IBM
>         product = 2145
>         rev = 0000
>         h:b:t:l = 1:0:0:0
>         tgt_node_name = 0x50050768010010b9
>         serial = 020060402172XX00
>         path checker = tur (controler setting)
>         state = 2
>         getprio = /sbin/mpath_prio_alua /dev/%n (controler setting)
>         prio = 50
>         uid = 36005076801810085c800000000000194 (cache)
>         ===== path info sdb (mask 0x1f) =====
>         bus = 1
>         dev_t = 8:16
>         size = 104857600
>         vendor = IBM
>         product = 2145
>         rev = 0000
>         h:b:t:l = 1:0:0:1
>         tgt_node_name = 0x50050768010010b9
>         serial = 020060402172XX00
>         path checker = tur (controler setting)
>         state = 2
>         getprio = /sbin/mpath_prio_alua /dev/%n (controler setting)
>         prio = 10
>         uid = 36005076801810085c8000000000001a2 (cache)
>         ===== path info sdc (mask 0x1f) =====
>         bus = 1
>         dev_t = 8:32
>         size = 25165824
>         vendor = IBM
>         product = 2145
>         rev = 0000
>         h:b:t:l = 1:0:1:0
>         tgt_node_name = 0x5005076801001082
>         serial = 020060402172XX00
>         path checker = tur (controler setting)
>         state = 2
>         getprio = /sbin/mpath_prio_alua /dev/%n (controler setting)
>         prio = 10
>         uid = 36005076801810085c800000000000194 (cache)
>         ===== path info sdd (mask 0x1f) =====
>         bus = 1
>         dev_t = 8:48
>         size = 104857600
>         vendor = IBM
>         product = 2145
>         rev = 0000
>         h:b:t:l = 1:0:1:1
>         tgt_node_name = 0x5005076801001082
>         serial = 020060402172XX00
>         path checker = tur (controler setting)
>         state = 2
>         getprio = /sbin/mpath_prio_alua /dev/%n (controler setting)
>         prio = 50
>         uid = 36005076801810085c8000000000001a2 (cache)
>         
>         #
>         # all paths :
>         #
>         36005076801810085c800000000000194  1:0:0:0 sda 8:0 50 [ready] IBM     /2145
>         36005076801810085c8000000000001a2  1:0:0:1 sdb 8:16 10 [active][ready] IBM
>         36005076801810085c800000000000194  1:0:1:0 sdc 8:32 10 [ready] IBM     /2145
>         36005076801810085c8000000000001a2  1:0:1:1 sdd 8:48 50 [active][ready] IBM
>         params = 1 queue_if_no_path 0 2 1 round-robin 0 1 1 8:48 1000 round-robin 0 1 1 8:16 1000
>         status = 1 0 0 2 1 A 0 1 0 8:48 A 0 E 0 1 0 8:16 A 0
>         Found matching wwid [36005076801810085c800000000000194] in bindings file.
>         Setting alias to mpath0
>         pgpolicy = group_by_prio (controler setting)
>         selector = round-robin 0 (internal default)
>         features = 1 queue_if_no_path (controler setting)
>         hwhandler = 0 (controler setting)
>         rr_weight = 1 (internal default)
>         rr_min_io = 1000 (config file default)
>         no_path_retry = NONE (internal default)
>         pg_timeout = NONE (internal default)
>         0 25165824 multipath 1 queue_if_no_path 0 2 1 round-robin 0 1 1 8:0 1000 round-robin 0 1 1 8:32 1000
>         set ACT_CREATE: map does not exists
>         Found matching wwid [36005076801810085c8000000000001a2] in bindings file.
>         Setting alias to mpath1
>         pgpolicy = group_by_prio (controler setting)
>         selector = round-robin 0 (internal default)
>         features = 1 queue_if_no_path (controler setting)
>         hwhandler = 0 (controler setting)
>         rr_weight = 1 (internal default)
>         rr_min_io = 1000 (config file default)
>         no_path_retry = NONE (internal default)
>         pg_timeout = NONE (internal default)
>         0 104857600 multipath 1 queue_if_no_path 0 2 1 round-robin 0 1 1 8:48 1000 round-robin 0 1 1 8:16 1000
>         set ACT_NOTHING: map unchanged
>         [root@svwdcvispay01 etc]#
>         
>          
>         
>          
>         
>         And notice that no /dev/mapper/mpath0 device special files
>         were created:
>         
>          
>         
>         [root@svwdcvispay01 etc]# ls -la /dev/mapper
>         total 0
>         drwxr-xr-x  2 root root     160 Jan 29 09:53 .
>         drwxr-xr-x  9 root root    4900 Jan 29 09:51 ..
>         crw-------  1 root root  10, 63 Jan 21 16:15 control
>         brw-rw----  1 root disk 253,  0 Jan 21 16:15 mpath1
>         brw-rw----  1 root disk 253,  1 Jan 21 16:15 mpath1p1
>         brw-rw----  1 root disk 253,  2 Jan 21 16:15 mpath1p2
>         brw-rw----  1 root disk 253,  3 Jan 21 16:15 mpath1p3
>         brw-rw----  1 root disk 253,  4 Jan 21 16:15 mpath1p5
>         [root@svwdcvispay01 etc]#
>         
>          
>         
>         My /etc/multipath.conf file is basic, but I have tried
>         creating an alias entry for the first LUN that looks like
>         "multipaths {multipath {wwid 36005076801810085c800000000000194
>         alias os}}".  The results are the same: it still won't create
>         any device special files for the first LUN.
>         
>          
>         
>         [root@svwdcvispay01 mapper]# cat /etc/multipath.conf | grep -v "#"
> 
>         defaults {
>                 user_friendly_names yes
>         }
> 
>         [root@svwdcvispay01 mapper]#
>         
>          
>         
>         I have been working on this issue for over a week with IBM and
>         Red Hat tech support, and they have been unable to help.  I
>         need to get this functional because the server in question has
>         been unstable using a single path to the SAN, due to issues we
>         have had periodically with one of the SAN Volume Controllers.
>         If additional information is needed to troubleshoot this,
>         please let me know.
>         
>          
>         
>         Joe Little
>         
>         littlej ae com
>         
>         ______________________________________________________________
>         
>         --
>         dm-devel mailing list
>         dm-devel redhat com
>         https://www.redhat.com/mailman/listinfo/dm-devel
>         
-- 

----------------------------------------------------------------------
    Chandra Seetharaman               | Be careful what you choose....
              - sekharan us ibm com   |      .......you may get it.
----------------------------------------------------------------------




