
RE: [Consult-list] Re: [dm-devel] dm-multipath has great throughput but we'd like more!



Interesting discussion!

If you are running in a big enterprise SAN, it's possible that your
server shares its HDS port with 30 other servers.

I ran
"bonnie++ -d /iotest -s 6g -f -n 0 -u root" on an AMD LS20 IBM Blade
(2 x 2Gb/s QLogic (qla) HBAs, 3GB RAM)
and
"bonnie++ -d /iotest -s 8g -f -n 0 -u root" on an Intel HS20 IBM Blade
(2 x 2Gb/s QLogic (qla) HBAs, 4GB RAM),
against SAN storage (HDS USP100) with dm-multipath (failover and multibus),
on ext3 and ext2.
The OS is RHEL4 U3 in both cases.
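In case anyone wants to repeat this, a rough sketch of how the ext2/ext3
runs can be scripted (the multipath device name and mount point below are
illustrative placeholders, not the actual ones; -f skips the per-character
tests, -n 0 skips the file-creation tests):

   MPDEV=/dev/mpath/mpath0                    # placeholder multipath device
   MNT=/iotest
   for FS in ext2 ext3; do
       mkfs -t $FS $MPDEV                     # recreate the filesystem under test
       mount -t $FS $MPDEV $MNT
       bonnie++ -d $MNT -s 8g -f -n 0 -u root # file size roughly twice RAM
       umount $MNT
   done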

Results are in the attached bonnue1.html.

Defaults from /etc/multipath.conf:
defaults {
   udev_dir                /dev
   polling_interval        10
   selector                "round-robin 0"
   default_path_grouping_policy   multibus
   getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
   prio_callout            /bin/true
   path_checker            readsector0
   rr_min_io               100
   rr_weight               priorities
   failback                immediate
   no_path_retry           20
   user_friendly_names     yes
}
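If it helps anyone reproducing this, a minimal sketch (standard
multipath-tools commands, not part of the original setup notes) for
applying a changed /etc/multipath.conf and checking which path grouping
policy is in effect:

   service multipathd restart   # pick up the new defaults
   multipath -v2                # rebuild the multipath maps
   multipath -ll                # multibus shows all paths in one path group,
                                # failover shows one path group per path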

Thomas



-----Original Message-----
From: dm-devel-bounces@redhat.com [mailto:dm-devel-bounces@redhat.com]
On Behalf Of Nicholas C. Strugnell
Sent: Thursday, May 18, 2006 11:43 AM
To: rgautier@redhat.com
Cc: device-mapper development; consult-list@redhat.com
Subject: Re: [Consult-list] Re: [dm-devel] dm-multipath has great
throughput but we'd like more!

On Thu, 2006-05-18 at 10:04 +0200, Nicholas C. Strugnell wrote: 
> On Thu, 2006-05-18 at 08:44 +0100, Bob Gautier wrote:
> > On Thu, 2006-05-18 at 02:25 -0500, Jonathan E Brassow wrote:
> > > The system bus isn't a limiting factor, is it?  64-bit PCI-X will get
> > > 8.5 Gb/s (plenty), but 32-bit PCI at 33MHz got 133MB/s.
> > > 
> > > Can your disks sustain that much bandwidth? 10 striped drives 
> > > might get better than 200MB/s if done right, I suppose.
> > > 
> 

> It might make sense to test raw writes to a device with dd and see if 
> that gets comparable performance figures - I'll just try that myself 
> actually.

Write throughput to an EVA 8000 (8GB write cache); the host is a DL380
with 2 x 2Gb/s HBAs and 2GB RAM.

testing 4GB files:

on filesystems: bonnie++ -d /mnt/tmp -s 4g -f -n 0 -u root

ext3: 129MB/s sd=0.43

ext2: 202MB/s sd=21.34
on raw: 216MB/s sd=3.93  (dd if=/dev/zero
of=/dev/mpath/3600508b4001048ba0000b00001400000 bs=4k count=1048576)
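For reference, a rough sketch of how the repeated raw-device runs can be
timed and converted to MB/s (the loop and helpers are illustrative, not
the exact script used; the device path is the one above):

   DEV=/dev/mpath/3600508b4001048ba0000b00001400000
   for i in `seq 1 20`; do
       START=`date +%s`
       dd if=/dev/zero of=$DEV bs=4k count=1048576   # 4 GiB sequential write
       sync                                          # flush before stopping the clock
       END=`date +%s`
       awk "BEGIN{printf \"run $i: %.1f MB/s\n\", 4096/($END-$START)}"
   done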


NB: I did not have exclusive access to the SAN or to this particular
storage array - this is a big corporate SAN under quite heavy load, with
the disk array under moderate load - and I am not even sure I had
exclusive access to the disks. All values are averaged over 20 runs.

The very low deviation of write speed on ext3 vs. ext2 or raw is
interesting - not sure if it means anything.

In any case, we don't manage to get very close to the theoretical
throughput of the two HBAs, 512MB/s.
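(As a rough sanity check, assuming 2Gb/s links: 2 x 2Gb/s = 4Gb/s, about
500MB/s of raw link bandwidth; Fibre Channel's 8b/10b encoding brings the
usable figure down to roughly 400MB/s across the two HBAs.)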

Nick



-- 
M: +44 (0)7736 665171           Skype: nstrug
http://europe.redhat.com
GPG FPR: 9C6C 093C 756A 6C57 49A1  E211 BBBA F5F5 C440 5DE0

Bonnie++ V1.03 Benchmark results (per-char tests skipped: bonnie++ was run with -f)

                                          --Sequential Output--    -Sequential Input-   --Random Seeks--
Configuration                      Size    Block        Rewrite         Block
                                           K/sec  %CPU  K/sec  %CPU     K/sec  %CPU       /sec  %CPU
i686/ext3/dm-multipath/failover     8G    147138    67  57894    18    122365    14     1939.0     5
i686/ext2/dm-multipath/failover     8G    192642    38  54684    14     66696     8      346.6     0
i686/ext3/dm-multipath/multibus     8G    203769    92  56409    18     85827    11      678.3     1
i686/ext2/dm-multipath/multibus     8G    325148    68  60266    16    101498    12      709.1     1
x86_64/ext3/dm-multipath/failover   6G    182716    56  58931    12    117738    11      971.3     1
x86_64/ext2/dm-multipath/failover   6G    200090    27  68081    13    114219    12      903.5     1
x86_64/ext3/dm-multipath/multibus   6G    250334    88  68861    16    117787    12      916.3     1
x86_64/ext2/dm-multipath/multibus   6G    363323    48  69560    13    108879    11      828.8     1
