Configuring Multiple LUN Support

Thomas.vonSteiger at swisscom.com Thomas.vonSteiger at swisscom.com
Tue Aug 3 09:09:36 UTC 2004


Hello,

Are you not using multipathing with the 2 x qla2300 FC adapters?
Or are you using PowerPath from EMC, or something else?

regards
Thomas


-----Original Message-----
From: redhat-install-list-bounces at redhat.com
[mailto:redhat-install-list-bounces at redhat.com] On Behalf Of Rick
Stevens
Sent: Wednesday, July 28, 2004 7:14 PM
To: Getting started with Red Hat Linux
Subject: Re: Configuring Multiple LUN Support

Adiel Kader wrote:
> Hi,
> 
>  
> 
> I am having problems with the following:
> 
>  
> 
> We have installed AS 3 on a 4-way IBM x445 server. The server attaches
> to a FAStT600 disk array via 2 qla2300 HBAs. The problem we are
> experiencing is that we cannot see any disk beyond LUN 0 even though
> we have enabled multiple LUN support.
> 
>  
> 
> I would greatly appreciate it if you guys can help me out.

I'll try.  I'm not familiar with the FAStT600 arrays, but generally you
must split the drives in your array into separate RAID sets, and the
SAN unit assigns each RAID set a LUN.  Those LUNs should appear as
separate SCSI devices (/dev/sda, /dev/sdb, etc.) under the kernel, which
must then be partitioned (and formatted, if they're to be used as
filesystems) just like any other block-addressable storage device (disk,
FLASH, etc.).
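One thing worth checking first: on the 2.4 kernels AS 3 ships with, the
SCSI midlayer only probes LUN 0 by default, so "multiple LUN support"
also has to make it into the module options and the initrd.  A rough
sketch (the kernel version below is just a placeholder; adjust to your
installed kernel):

```shell
# Tell the 2.4 SCSI midlayer to probe beyond LUN 0
echo 'options scsi_mod max_scsi_luns=128' >> /etc/modules.conf

# Rebuild the initrd so the option takes effect at boot
# ("2.4.21-x.EL" is a placeholder for your actual kernel version)
mkinitrd -f /boot/initrd-2.4.21-x.EL.img 2.4.21-x.EL

# After a reboot (or rescan), partition, format and mount the new LUNs
fdisk /dev/sdb          # create a partition table on the new device
mke2fs -j /dev/sdb1     # make an ext3 filesystem
mount /dev/sdb1 /vol1
```

The device name /dev/sdb and mount point /vol1 are only examples; check
dmesg to see what your LUNs actually came up as.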

We are doing something similar using qla2300 HBAs connected to an EMC
FC4700 SAN and that's the way we see them.  Try "dmesg" to see if you
see anything like this:

scsi2 : QLogic QLA2300 PCI to Fibre Channel Host Adapter: bus 19 device 
1 irq 29        Firmware version:  3.01.18, Driver version 6.05.00

scsi3 : QLogic QLA2300 PCI to Fibre Channel Host Adapter: bus 24 device 
1 irq 27        Firmware version:  3.01.18, Driver version 6.05.00

blk: queue efa85e18, I/O limit 4294967295Mb (mask 0xffffffffffffffff)
   Vendor: DGC       Model: RAID 5            Rev: 0849
   Type:   Direct-Access                      ANSI SCSI revision: 04
blk: queue efa85c18, I/O limit 4294967295Mb (mask 0xffffffffffffffff)
   Vendor: DGC       Model: RAID 1            Rev: 0849
   Type:   Direct-Access                      ANSI SCSI revision: 04
blk: queue efa85a18, I/O limit 4294967295Mb (mask 0xffffffffffffffff)
   Vendor: DGC       Model: RAID 1            Rev: 0849
   Type:   Direct-Access                      ANSI SCSI revision: 04
blk: queue efa85818, I/O limit 4294967295Mb (mask 0xffffffffffffffff)
   Vendor: DGC       Model: RAID 5            Rev: 0849
   Type:   Direct-Access                      ANSI SCSI revision: 04
blk: queue efa85618, I/O limit 4294967295Mb (mask 0xffffffffffffffff)
scsi(2:0:0:0): Enabled tagged queuing, queue depth 32.
scsi(2:0:0:1): Enabled tagged queuing, queue depth 32.
scsi(2:0:0:2): Enabled tagged queuing, queue depth 32.
scsi(2:0:0:3): Enabled tagged queuing, queue depth 32.
   Vendor: DGC       Model: RAID 5            Rev: 0849
   Type:   Direct-Access                      ANSI SCSI revision: 04
blk: queue efa85018, I/O limit 4294967295Mb (mask 0xffffffffffffffff)
   Vendor: DGC       Model: RAID 1            Rev: 0849
   Type:   Direct-Access                      ANSI SCSI revision: 04
blk: queue ef9b6818, I/O limit 4294967295Mb (mask 0xffffffffffffffff)
   Vendor: DGC       Model: RAID 1            Rev: 0849
   Type:   Direct-Access                      ANSI SCSI revision: 04
blk: queue ef9b6618, I/O limit 4294967295Mb (mask 0xffffffffffffffff)
scsi(3:0:0:0): Enabled tagged queuing, queue depth 32.
scsi(3:0:0:4): Enabled tagged queuing, queue depth 32.
scsi(3:0:0:5): Enabled tagged queuing, queue depth 32.
Attached scsi disk sdb at scsi2, channel 0, id 0, lun 0
Attached scsi disk sdc at scsi2, channel 0, id 0, lun 1
Attached scsi disk sdd at scsi2, channel 0, id 0, lun 2
Attached scsi disk sde at scsi2, channel 0, id 0, lun 3
Attached scsi disk sdf at scsi3, channel 0, id 0, lun 0
Attached scsi disk sdg at scsi3, channel 0, id 0, lun 4
Attached scsi disk sdh at scsi3, channel 0, id 0, lun 5

If you look at the top of the above listing, we have a total of seven
RAID arrays on the EMC (four on one shelf, three on another), and each
of these appears as a LUN.  Also note that one shelf is "owned" by one
HBA, while the second shelf is "owned" by the other HBA.  This is the
way EMC does things (there are two redundant processors in the FC4700;
you must assign each LUN to a "primary" processor while the other is
used as a failover, and each processor is connected to the host via
a separate HBA).

In the last seven lines you'll see how the LUNs have been set up by the
kernel.  In particular, note the "lun n" bit at the end of each line.
If we look at how they're mounted:

[root at db1 root]# mount
(irrelevant data removed)
/dev/sdf1 on /vol1 type ext3 (rw)
/dev/sdc1 on /vol2 type ext3 (rw)
/dev/sdd1 on /vol3 type ext3 (rw)

you can see that each LUN has been set up with a single partition and
we've mounted them as /vol1, /vol2 and /vol3, with /vol1 on the second
shelf and /vol2 and /vol3 on the first shelf.  The other LUNs aren't
mounted per se, as they're used for an Oracle database and Oracle uses
the raw partitions (Oracle prefers doing its own "filesystem" work).
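If you just want a quick tally of which sd device landed on which host
and LUN, the "Attached scsi disk" lines from dmesg are easy to parse.
A small sketch (here fed a sample inline; normally you'd pipe in the
real `dmesg` output):

```shell
# Sample dmesg-style lines; in practice use:  dmesg | awk '...'
dmesg_sample='Attached scsi disk sdb at scsi2, channel 0, id 0, lun 0
Attached scsi disk sdc at scsi2, channel 0, id 0, lun 1
Attached scsi disk sdf at scsi3, channel 0, id 0, lun 0'

printf '%s\n' "$dmesg_sample" |
awk '/Attached scsi disk/ {
    # $4 is the device name, $6 the host adapter, last field the LUN
    gsub(",", "", $6)       # strip the trailing comma from "scsi2,"
    printf "/dev/%s -> host %s lun %s\n", $4, $6, $NF
}'
```

If fewer LUNs show up here than you configured on the array, the kernel
never probed them, which points back at the max-LUN module setting
rather than at the array itself.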

Does that help at all?
----------------------------------------------------------------------
- Rick Stevens, Senior Systems Engineer     rstevens at vitalstream.com -
- VitalStream, Inc.                       http://www.vitalstream.com -
-                                                                    -
-    Admitting you have a problem is the first step toward getting   -
-    medicated for it.      -- Jim Evarts (http://www.TopFive.com)   -
----------------------------------------------------------------------


_______________________________________________
Redhat-install-list mailing list
Redhat-install-list at redhat.com
https://www.redhat.com/mailman/listinfo/redhat-install-list
To Unsubscribe Go To ABOVE URL or send a message to:
redhat-install-list-request at redhat.com
Subject: unsubscribe




