[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

[Linux-cluster] Poor LVM performance.



Hi,

I have been asking around about this for a while. I got the same results with CLVM on an iSCSI box I had on loan.

I have been doing some testing with KVM and Virtuozzo (container-based virtualisation) against various storage devices, and I have some results I would like help analysing. I have a nice big ZFS box from Oracle (yes, evil, but Solaris NFS is amazing), connected to my cluster over 10GbE and InfiniBand. The cluster is four HP servers (E5-2670, 144GB RAM), each with a RAID10 of 600k SAS drives.

Please open these pictures side by side.

https://dl.dropbox.com/u/98200887/Screen%20Shot%202012-12-04%20at%202.50.33%20PM.png
https://dl.dropbox.com/u/98200887/Screen%20Shot%202012-12-04%20at%203.18.03%20PM.png

You will notice that KVM/LVM on the local RAID10 (and CLVM on iSCSI) completely destroys performance, whereas the container-based virtualisation is excellent and as fast as the NFS.

The 4, 8, 12, 16... VM figures show the aggregate benchmark performance across that number of VMs: 4 = 1 VM on each node, 8 = 2 VMs on each node. "TPCC warehouses" is the number of TPC-C warehouses the benchmark used. One warehouse is about 150MB, so 10 warehouses means roughly 1.5GB of data held in the InnoDB buffer pool.
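For reference, the warehouse-to-working-set arithmetic above can be sketched as follows (a minimal sketch using the ~150MB-per-warehouse figure stated in this post; the exact per-warehouse size varies with the TPC-C implementation):

```python
# Rough working-set sizing for the TPC-C runs described above.
# Assumption: ~150 MB of InnoDB data per warehouse, per the post.
MB_PER_WAREHOUSE = 150

def dataset_mb(warehouses: int) -> int:
    """Approximate InnoDB data size in MB for a given warehouse count."""
    return warehouses * MB_PER_WAREHOUSE

for w in (1, 10, 20):
    print(f"{w} warehouses -> ~{dataset_mb(w) / 1024:.1f} GB")
```

So at 10 warehouses the whole dataset (~1.5GB) fits comfortably in the buffer pool, and the benchmark stresses write-back to storage rather than read I/O.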

Why does LVM performance suck so hard compared to a single-filesystem approach? What am I doing wrong?

Thanks,

Andrew

