
Re: [Linux-cluster] newbie question about RHCS/GFS stability and performance

I will be using iozone at a later time.  Can you provide some syntax examples of what you are running?

Gary Romo
IBM Global Technology Services
Email: garromo@us.ibm.com
Text message: gromo@skytel.com

Kamal Jain <kjain@aurarianetworks.com>
Sent by: linux-cluster-bounces@redhat.com

12/19/2007 08:11 AM

Please respond to: linux clustering <linux-cluster@redhat.com>
To: "linux-cluster@redhat.com" <linux-cluster@redhat.com>
Subject: [Linux-cluster] newbie question about RHCS/GFS stability and performance

Hi Folks,
I just joined this list and am new to Linux clustering.  I've set up several RHEL4u5 (AS) clusters in our lab to do some performance testing with our own applications, but their underpinning is just Red Hat Cluster Suite and GFS on top of some iSCSI arrays (StoreVault and EqualLogic).
When I run iozone throughput tests comparing a single local SAS disk (146 GB, 2.5", 10K RPM) on the onboard Dell PERC 5/i controller against an NFS-mounted volume and against a GFS volume on an iSCSI LUN, the results are not particularly surprising: the GFS volumes are actually the best performers in random read and random write, and strong overall.
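For comparison, a throughput run of the kind described above might be invoked roughly as follows; the mount point, file sizes, and thread count here are placeholders, not the poster's actual settings:

```shell
# Throughput mode (-t): 4 worker threads, 512 MB per file, 64 KB records,
# running the write (0), read (1) and random read/write (2) tests.
# /mnt/gfs stands in for whichever filesystem is under test; -F takes
# one target file per thread.
iozone -t 4 -s 512m -r 64k -i 0 -i 1 -i 2 \
    -F /mnt/gfs/f1 /mnt/gfs/f2 /mnt/gfs/f3 /mnt/gfs/f4

# The same workload with -I (O_DIRECT) bypasses the page cache and gives
# a clearer picture of the underlying storage path:
iozone -t 4 -s 512m -r 64k -i 0 -i 1 -i 2 -I \
    -F /mnt/gfs/f1 /mnt/gfs/f2 /mnt/gfs/f3 /mnt/gfs/f4
```

With file sizes at or below RAM, the cached run mostly measures the page cache; sizing files well above RAM, or using -I, is what exposes the differences between local disk, NFS, and GFS-over-iSCSI.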
Our application performance, however, really suffers on GFS.  I have seen numerous pointers to GFS performance tuning through the "gfs_tool settune" parameters, but no clear guidance on what the parameters mean or how to decide which direction to move them based on run data.
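For anyone following along, the tunables can at least be listed and set per mount point like this; the values shown are purely illustrative, not recommendations, and which tunables exist depends on the GFS version:

```shell
# Dump the current tunable names and values for a GFS mount:
gfs_tool gettune /mnt/gfs

# Set a tunable (here demote_secs, how long cached glocks are held
# before being demoted; 600 is an arbitrary example value):
gfs_tool settune /mnt/gfs demote_secs 600

# On newer GFS releases, glock_purge trims a percentage of unused
# glocks, which has been reported to help glock-heavy workloads:
gfs_tool settune /mnt/gfs glock_purge 50
```

Settings made with settune do not persist across a remount, so they are typically reapplied from an init script.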
Another thing we've noticed after running stress tests with our application is that the cluster nodes become unstable: the cluster management components (such as ricci and clustat) and filesystem tools (such as df and du) start hanging.  Rebooting clears it up.
Has anyone else experienced this, and do you have any guidance or advice?  We’re running the native iSCSI components over a shared GbE connection, which I know is not optimal for performance, but I can see that the network ports are not even close to heavily used.  Could the software iSCSI initiator be contributing to this?
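One way to back up the "network ports are not heavily used" observation with numbers, assuming the sysstat package is installed, is to sample interface and block-device statistics while the stress test runs:

```shell
# Per-interface throughput (rx/tx kB/s), ten one-second samples;
# shows whether the shared GbE link is anywhere near line rate:
sar -n DEV 1 10

# Extended per-device stats (await, %util) for the iSCSI block
# devices; high await with low %util on the NIC would point at the
# initiator or target rather than the wire:
iostat -x 1 5
```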
Who else is using native iSCSI on a plain GbE port as we are, and who has experience with iSCSI HBAs, TOEs, or Fibre Channel as the interconnect instead of software iSCSI?
Thanks for any help or insight you can offer.
- K
Kamal Jain
kjain aurarianetworks com
+1 978.893.1098  (office)
+1 978.726.7098  (mobile)
Auraria Networks, Inc.
85 Swanson Road, Suite 120
Boxborough, MA  01719
Linux-cluster mailing list
Linux-cluster@redhat.com
