[Linux-cluster] newbie question about RHCS/GFS stability and performance

Gary Romo garromo at us.ibm.com
Wed Dec 19 16:00:49 UTC 2007


I will be using iozone at a later time.  Can you provide some syntax 
examples of what you are running?
Thanks.
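For reference, a typical iozone invocation for this kind of local-disk vs. NFS vs. GFS comparison might look like the following. The mount point, file sizes, and output filename are placeholders, not what anyone on this thread actually ran:

```shell
# Automatic mode: sweep record sizes, files up to 1 GB, running the
# write/rewrite (-i 0), read/reread (-i 1), and random read/write
# (-i 2) tests, with an Excel-compatible report written to a file.
# Set -g larger than RAM so the page cache does not mask disk speed.
iozone -a -g 1g -i 0 -i 1 -i 2 -f /mnt/gfs/iozone.tmp -Rb gfs-results.xls

# Single-run variant: fixed 4 KB record size and a 512 MB file,
# sequential write/rewrite plus random read/write only.
iozone -i 0 -i 2 -r 4k -s 512m -f /mnt/gfs/iozone.tmp
```

Running the same command with -f pointed at the local SAS disk, the NFS mount, and the GFS mount gives directly comparable numbers.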

Gary Romo
IBM Global Technology Services
303.458.4415
Email: garromo at us.ibm.com
Pager:1.877.552.9264
Text message: gromo at skytel.com



Kamal Jain <kjain at aurarianetworks.com> 
Sent by: linux-cluster-bounces at redhat.com
12/19/2007 08:11 AM
Please respond to
linux clustering <linux-cluster at redhat.com>


To
"linux-cluster at redhat.com" <linux-cluster at redhat.com>
cc

Subject
[Linux-cluster] newbie question about RHCS/GFS stability and performance






Hi Folks,
 
I just joined this list and am new to Linux clustering.  I've set up 
several RHEL4u5 (AS) clusters in our lab to do some performance testing 
with our own applications, but their underpinning is just Red Hat Cluster 
Services and GFS on top of some iSCSI arrays (StoreVault and EqualLogic).
 
When I run IOZONE throughput tests comparing a single, local SAS disk 
(146GB, 2.5", 10K-RPM) on the onboard Dell PERC 5/i controller versus an 
NFS-mounted volume versus a GFS volume on an iSCSI LUN, the results are not 
particularly surprising, and they actually show the GFS volumes to be the 
best performer in random read and random write, and a strong player 
overall.
 
Our application performance, however, really suffers with GFS.  I have 
seen numerous pointers to GFS performance tuning through the "gfs_tool 
settune" parameters, but no clear guidance on what the parameters are and 
how one might know what direction to move them in based on run data.
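For what it's worth, the tunables can at least be listed and changed per mount point as sketched below. The values shown are purely illustrative, not recommendations, and which tunables exist varies with the GFS version:

```shell
# Dump the current tunable values for a mounted GFS filesystem.
gfs_tool gettune /mnt/gfs

# Example settings only: glock_purge asks GFS to purge a percentage of
# unused glocks per scan, and demote_secs controls how quickly held
# locks are demoted.  Both have been suggested on this list for
# lock-heavy workloads, but appropriate values are workload-dependent.
gfs_tool settune /mnt/gfs glock_purge 50
gfs_tool settune /mnt/gfs demote_secs 200
```

Note that settune values do not persist across a remount, so anything that helps has to be reapplied at mount time (e.g. from an init script).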
 
Another thing we've noticed after running stress tests with our 
application is that cluster nodes, the clustering management 
components themselves (like ricci, clustat), and filesystem tools (like df 
and du) start hanging, and we get general system instability.  Rebooting 
clears it up.
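When df and du hang on a GFS mount, the usual suspect is a process blocked waiting on a cluster lock (glock). One quick check, before rebooting, is to look for processes stuck in uninterruptible sleep:

```shell
# List processes in uninterruptible (D) state with the kernel
# function they are sleeping in -- on GFS these are typically
# blocked waiting on a glock held (or lost) by another node.
ps -eo pid,stat,wchan:30,cmd | awk '$2 ~ /^D/'
```

Alongside that, "cman_tool status" and "cman_tool nodes" will show whether the cluster itself thinks membership is healthy, which helps distinguish a lock-manager problem from a node that was quietly fenced.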
 
Has anyone else experienced this, and do you have any guidance or advice? 
We're running the native iSCSI components over a shared GbE connection, 
which I know is not optimal for performance, but I can see that the 
network ports are not even close to heavily used.  Could the software 
iSCSI initiator be contributing to this?
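To rule the GbE link in or out with actual numbers rather than a port-utilization graph, it is worth sampling interface throughput while a stress run is in progress. A sketch, assuming the sysstat package and RHEL4's linux-iscsi initiator tools are installed:

```shell
# Sample NIC counters once a second for 30 seconds; compare rxbyt/s
# and txbyt/s against GbE line rate (~120 MB/s) to see how close the
# shared link really gets during the test.
sar -n DEV 1 30

# Long listing of active iSCSI sessions on the RHEL4-era
# linux-iscsi software initiator.
iscsi-ls -l
```

Sustained throughput well under line rate during the hangs would point away from raw bandwidth and toward latency, lock traffic, or the initiator itself.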
 
Who is using native iSCSI on a simple GbE port as we are and who has 
experience using iSCSI HBAs and/or TOEs and/or Fibre Channel for 
interconnect rather than iSCSI?
 
Thanks for any help or insight you can offer.
 
Cheers,
- K
 
--
Kamal Jain
kjain at aurarianetworks.com
+1 978.893.1098  (office)
+1 978.726.7098  (mobile)
 
Auraria Networks, Inc.
85 Swanson Road, Suite 120
Boxborough, MA  01719
USA
 
www.aurarianetworks.com
 --
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

