
Re: [dm-devel] Shell Scripts or Arbitrary Priority Callouts?



On Mar 22, 2009, at 8:27, Pasi Kärkkäinen <pasik iki fi> wrote:

On Fri, Mar 20, 2009 at 06:01:23AM -0400, John A. Sullivan III wrote:

John:

Thanks for the reply.

I ended up writing a small C program to do the priority computation for me.

I have two sets of FC-AL shelves attached to two dual-channel Qlogic
cards. That gives me two paths to each disk. I have about 56 spindles
in the current configuration, and am tying them together with md
software raid.

Now, even though each disk says it handles concurrent I/O on each
port, my testing indicates that throughput drops by about half when
using multibus (from ~60 MB/sec sustained I/O with failover to ~35
MB/sec with multibus).

However, with failover, I am effectively using only one channel on
each card. With my custom priority callout, I more or less match the
disks with even numbers to the even numbered scsi channels with a
higher priority. Same with the odd numbered disks and odd numbered
channels. The odd disks are secondary on the even channels and vice
versa. It seems to work rather well, and appears to spread the load
nicely.
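For anyone curious what such a callout can look like, here is a minimal
shell sketch of that parity-matching scheme (my actual program is in C,
and taking the disk and channel numbers directly as arguments is a
simplification for illustration; a real callout would derive them from
the device path multipathd hands it):

```shell
#!/bin/sh
# Minimal sketch of a priority callout implementing the even/odd
# matching described above. multipathd reads the integer printed on
# stdout as the path's priority; higher wins.

# prio DISK_NUM CHANNEL_NUM
# Prints 50 (preferred) when the disk and channel parities match,
# 10 (secondary) otherwise.
prio() {
    disk=$1
    chan=$2
    if [ $((disk % 2)) -eq $((chan % 2)) ]; then
        echo 50
    else
        echo 10
    fi
}

# A real callout would parse these out of the device path instead
# of taking them on the command line.
if [ "$#" -eq 2 ]; then
    prio "$1" "$2"
fi
```

Even disk on even channel (or odd on odd) gets priority 50, the other
path gets 10, so each path group still fails over cleanly while the
preferred paths alternate across the two channels.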

Thanks again for your help!

I'm really glad you brought up the performance problem. I had posted
about it a few days ago but it seems to have gotten lost. We are really
struggling with performance when attempting to combine multiple paths
(in the case of multipath to one big target) or multiple targets (in
the case of software RAID0 across several targets) rather than using
them, in effect, as JBODs. In our case, we are using iSCSI.

Like you, we found that using multibus caused an almost linear drop in
performance: round-robin across two paths gave half the aggregate
throughput of two separate disks, and four paths gave one fourth.

We also tried striping across the targets with software RAID0 combined
with failover multipath - roughly the same effect.

We really don't want to be forced to treat SAN-attached disks as
JBODs. Has anyone cracked this problem of using them in either multibus
or RAID0, so we can present them as a single device to the OS and still
load balance across multiple paths? This is a HUGE problem for us, so
any help is greatly appreciated.  Thanks - John

Hello.

Hmm.. just a guess, but could this be related to the fact that your
paths to the storage are different iSCSI sessions (open-iscsi _doesn't_
support multiple connections per session, aka MC/S)? That means there
is a separate SCSI command queue per path, and if SCSI requests are
split across those queues they can get out of order, which causes the
performance drop.
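If that reordering is the culprit, one knob sometimes suggested (an
untested assumption on my part, not something verified in this thread)
is rr_min_io in multipath.conf, which controls how many I/Os go down
one path before round-robin rotates to the next; larger values keep
longer runs of requests in order on a single session, at the cost of
coarser balancing:

```
defaults {
    # Number of I/Os sent down a path before switching to the next.
    # Larger values reduce interleaving across per-session queues;
    # the value here is illustrative, not a tested recommendation.
    rr_min_io 1000
}
```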

See:
http://www.nabble.com/round-robin-with-vmware-initiator-and-iscsi-target-td21958346.html

Especially the reply from Ross (CC). Maybe he has some comments :)

-- Pasi

--
dm-devel mailing list
dm-devel redhat com
https://www.redhat.com/mailman/listinfo/dm-devel

My problem with dm-multipath multibus running at half the speed of failover is not with iSCSI but with some Fibre Channel disk shelves I'm treating as a JBOD. I have two loops, and each FC drive is capable of doing concurrent I/O through both ports.

I am exporting them via iSCSI, but the performance dropoff I'm seeing is on the local host with the HBAs.

cc

