[Consult-list] Re: [dm-devel] dm-multipath has great throughput but we'd like more!

Ed Wilts ewilts at ewilts.org
Mon May 22 15:31:00 UTC 2006


On Thu, May 18, 2006 at 11:42:36AM +0200, Nicholas C. Strugnell wrote:
> write throughput to EVA 8000 (8GB write cache), host DL380 with 2x2Gb/s
> HBAs, 2GB RAM
> 
> testing 4GB files:
> 
> on filesystems: bonnie++ -d /mnt/tmp -s 4g -f -n 0 -u root
> 
> ext3: 129MB/s sd=0.43
> 
> ext2: 202MB/s sd=21.34
> 
> on raw: 216MB/s sd=3.93  (dd if=/dev/zero of=/dev/mpath/3600508b4001048ba0000b00001400000 bs=4k count=1048576)
> 
> 
> NB I did not have exclusive access to the SAN or this particular storage
> array - this is a big corp. SAN network under quite heavy load and disk
> array under moderate load - not even sure if I had exclusive access to
> the disks. All values averaged over 20 runs. 

Since I manage a half-dozen EVAs, I'll pretend I actually know something
about them :-).  First, there are multiple ways of setting up the LUNs
on the frame - anywhere from a small RAID 5 LUN to a large RAID 0 LUN.
The performance difference between those setups should be significant.
A small RAID 5 LUN will give you very limited balancing across physical
disks.  Because of the virtualization of the disks within the frame, you
most definitely do not have exclusive access to the physical disks.  It's
quite possible that your RAID 5 partition is on the same physical disks
as a very busy database.  The EVA spreads a LUN across multiple spindles -
the larger the LUN, the more spindles you get working for you.

If you can, get the storage group to assign you a large RAID 0 LUN and
redo your tests.  You should see different results.
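
While you're at it, it's worth taking the page cache out of the raw-device
comparison.  Something along these lines should do it (again untested here -
same device name as in your dd run, adjust to taste):

    # raw write with direct I/O, 4GB in 1MB chunks
    dd if=/dev/zero of=/dev/mpath/3600508b4001048ba0000b00001400000 \
       bs=1M count=4096 oflag=direct

    # same bonnie++ run as before for the filesystem numbers
    bonnie++ -d /mnt/tmp -s 4g -f -n 0 -u root

With oflag=direct the dd figure reflects what actually hits the array rather
than how fast the page cache can absorb dirty pages.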

        .../Ed

-- 
Ed Wilts, RHCE
Mounds View, MN, USA
mailto:ewilts at ewilts.org
Member #1, Red Hat Community Ambassador Program
