
Re: [Linux-cluster] GFS performance test

Hi Ray,
thanks for your answer.
We are using GFS1 on a Red Hat 5.4 cluster. The GFS filesystem is mounted on /mnt/gfs, and we used the parameter "-p lock_dlm" when we created it. Anyway, look at this output:

[root parmenides ~]# gfs_tool getsb /mnt/gfs
  no_addr = 26
  sb_lockproto = lock_dlm
  sb_locktable = hr-pm:gfs01
  no_formal_ino = 24
  no_addr = 24
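
(For completeness, here is roughly how I double-check which lock protocol is actually in use; the grep patterns and comments are just for illustration:)

  # on-disk superblock, i.e. what gfs_mkfs -p/-t wrote:
  gfs_tool getsb /mnt/gfs | grep -E 'sb_lockproto|sb_locktable'
  # options of the live mount; lockproto=/locktable= only appear here
  # if they were overridden on the mount command line:
  grep /mnt/gfs /proc/mounts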

For your information, my cluster.conf file is:

<?xml version="1.0"?>
<cluster config_version="4" name="hr-pm">
  <fence_daemon post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="zipi" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device modulename="" name="DRAC_heraclito"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="zape" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device modulename="" name="DRAC_parmenides"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_drac" ipaddr="" login="root" name="DRAC_heraclito" passwd="*****"/>
    <fencedevice agent="fence_drac" ipaddr="" login="root" name="DRAC_parmenides" passwd="******"/>
    <fencedevice agent="fence_ipmilan" auth="md5" ipaddr="" login="root" name="IPMILan_heraclito" passwd="*"/>
    <fencedevice agent="fence_ipmilan" auth="md5" ipaddr="" login="root" name="IPMILan_parmenides" passwd="*"/>
  </fencedevices>
</cluster>
The shared disk is a LUN on a Fibre Channel SAN.
The most surprising thing is that we have another, similar cluster, and there we always get 98 locks/sec, whether we start ping_pong on one node or on both. I'm lost! What is happening?


Date: Wed, 2 Dec 2009 06:58:43 -0800
From: Ray Van Dolson <rvandolson esri com>
Subject: Re: [Linux-cluster] GFS performance test
To: linux-cluster redhat com
Message-ID: <20091202145842 GA16292 esri com>
Content-Type: text/plain; charset=us-ascii

On Wed, Dec 02, 2009 at 03:53:46AM -0800, frank wrote:
>  Hi,
>  after seeing some posts related to GFS performance, we have decided to
>  test our two-node GFS filesystem with the ping_pong program.
>  We are worried about the results.
>
>  Running the program on only one node, without parameters, we get between
>  800000 locks/sec and 900000 locks/sec.
>  Running the program on both nodes against the same file on the shared
>  filesystem, the lock rate does not drop and is the same on both nodes!
>  What does this mean? Is there a problem with the locks?
>
>  Just for your info, the GFS filesystem is /mnt/gfs and what I run on both
>  nodes is:
>
>  ./ping_pong /mnt/gfs/tmp/test.dat 3
>
>  Thanks for your help.
Wow, that doesn't sound right at all (or at least not consistent with
results I've gotten:)
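
For anyone following along who hasn't used it: ping_pong (from the ctdb tools) just hammers fcntl byte-range locks on the file you give it. A rough sketch of that kind of loop, written from memory rather than copied from the real source, looks something like this:

/*
 * Minimal sketch (not the real ping_pong source): each iteration takes a
 * blocking write lock on byte i and releases byte (i+1) mod N, so two
 * copies running against the same file force the lock manager to bounce
 * the byte-range locks back and forth between them.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

static void lock_byte(int fd, short type, off_t off)
{
    struct flock fl;
    fl.l_type = type;          /* F_WRLCK to lock, F_UNLCK to release */
    fl.l_whence = SEEK_SET;
    fl.l_start = off;
    fl.l_len = 1;
    if (fcntl(fd, F_SETLKW, &fl) == -1) {
        perror("fcntl");
        exit(1);
    }
}

int main(int argc, char **argv)
{
    int num_locks, fd, i = 0;
    unsigned long count = 0;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <num_locks>\n", argv[0]);
        return 1;
    }
    num_locks = atoi(argv[2]);
    if (num_locks < 1) {
        fprintf(stderr, "num_locks must be >= 1\n");
        return 1;
    }
    fd = open(argv[1], O_CREAT | O_RDWR, 0600);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    for (;;) {
        lock_byte(fd, F_WRLCK, i);
        lock_byte(fd, F_UNLCK, (i + 1) % num_locks);
        i = (i + 1) % num_locks;
        /* the real tool reports locks/sec once per second; this just
         * prints a running count */
        if (++count % 100000 == 0)
            printf("%lu lock/unlock cycles\n", count);
    }
}

Run from both nodes at once against the same file on the GFS mount, the per-node rate should drop well below the single-node number if the locks are really being coordinated through the cluster lock manager; staying at local-filesystem speed is the red flag.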

Can you provide details of your setup, and perhaps your cluster.conf
file?  Have you done any other GFS tuning?  Are we talking GFS1 or GFS2?

I get in the 3000-5000 locks/sec range with my GFS2 filesystem (using
nodiratime,noatime and reducing the lock limit from 100 to 0 in my
cluster.conf file).
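
If memory serves, on RHEL 5 that limit is gfs_controld's plock_rate_limit, which defaults to 100 plocks/sec; a default limit of 100 would also explain the steady ~98 locks/sec on your other cluster. Assuming that is the knob, the change is a one-liner in cluster.conf (remember to bump config_version when you edit it):

  <cluster config_version="5" name="hr-pm">
    ...
    <!-- 0 disables the plock rate limiting; the default is 100 -->
    <gfs_controld plock_rate_limit="0"/>
    ...
  </cluster>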

The numbers you provide are what I'd expect to see on a local filesystem.


