[Linux-cluster] 4 node GFS cluster sanity check

nick at javacat.f2s.com
Tue Oct 21 07:55:41 UTC 2008


Hi,

RHEL 5.2 32bit kernel 2.6.18-92.1.10.el5PAE
kmod-gfs-0.1.23-5.el5
gfs2-utils-0.1.44-1.el5_2.1
gfs-utils-0.1.17-1.el5
cman-2.0.84-2.el5
kmod-gfs2-PAE-1.92-1.1.el5
kmod-gfs2-1.92-1.1.el5
kmod-gfs-PAE-0.1.23-5.el5
rgmanager-2.0.38-2.el5

I have a 4-node cluster. All I want from it is GFS, so that every node can read and write the same directory. I don't want failover, and I want to enable as few cluster daemons as possible.
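
To give an idea of what I mean, this is roughly the boot-time service setup I'm aiming for on each node (just a sketch; I'm assuming the stock RHEL 5 init script names, and that rgmanager can stay off entirely since I don't need failover):

chkconfig cman on         # membership, fencing and the DLM
chkconfig clvmd on        # clustered LVM for the shared volume
chkconfig gfs on          # mount the GFS filesystem at boot
chkconfig rgmanager off   # no failover services wanted

Does that look like the minimal set, or have I missed a daemon that GFS needs?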

Here is my cluster.conf:

<?xml version="1.0"?>
<cluster alias="TEST" config_version="14" name="TEST">
	<fence_daemon post_fail_delay="0" post_join_delay="3"/>
	<clusternodes>
		<clusternode name="fintestapp1" nodeid="1" votes="1">
			<fence>
				<method name="dummy"/>
			</fence>
		</clusternode>
		<clusternode name="fintestapp2" nodeid="2" votes="1">
			<fence>
				<method name="dummy"/>
			</fence>
		</clusternode>
		<clusternode name="fintestapp3" nodeid="3" votes="1">
			<fence>
				<method name="dummy"/>
			</fence>
		</clusternode>
		<clusternode name="fintestapp4" nodeid="4" votes="1">
			<fence>
				<method name="dummy"/>
			</fence>
		</clusternode>
	</clusternodes>
	<cman/>
	<fencedevices>
		<fencedevice agent="fence_manual" name="dummy"/>
	</fencedevices>
	<rm>
		<failoverdomains/>
		<resources/>
	</rm>
</cluster>
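
Since I don't use rgmanager at all, this is the kind of change I was considering for cluster.conf (only a sketch; I'm assuming expected_votes is a valid attribute on <cman> and that the empty <rm> block can simply be dropped when no services or resources are defined):

	<cman expected_votes="4"/>
	<!-- empty <rm> section removed: no failover services are used -->

Would that be sensible, or is it safer to leave <cman/> alone and keep the empty <rm> section?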

Here is the output of cman_tool services:
type             level name       id       state
fence            0     default    00010001 none
[1 3 4]
dlm              1     clvmd      00020001 none
[1 3 4]
dlm              1     GFS1       00040001 none
[1 4]
dlm              1     rgmanager  00010003 none
[1 3 4]

Here is the output of cman_tool status:
Version: 6.1.0
Config Version: 14
Cluster Name: TEST
Cluster Id: 1198
Cluster Member: Yes
Cluster Generation: 496
Membership state: Cluster-Member
Nodes: 7
Expected votes: 6
Total votes: 4
Quorum: 4
Active subsystems: 9
Flags: Dirty
Ports Bound: 0 11 177
Node name: fintestapp4
Node ID: 4
Multicast addresses: 239.192.4.178
Node addresses: 192.168.10.68

As you can see, Expected votes is 6 while Total votes is 4. What's wrong here?
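
Is something like this the right way to check and correct it (assuming I have the cman_tool syntax right)?

cman_tool nodes           # see which nodes cman still has listed
cman_tool expected -e 4   # reset expected votes to the real node count

Or is the root cause something in cluster.conf itself?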

Please could I also get confirmation that my cluster.conf is adequate? After a few reboots last week the expected and total votes gave unexpected results.
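
Also, to double-check the procedure: after bumping config_version in cluster.conf, I assume the change is pushed out from one node with something like the following (please correct me if that is not right on 5.2):

ccs_tool update /etc/cluster/cluster.conf   # distribute the new cluster.conf
cman_tool version -r 15                     # tell cman about the new config_version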

If any more info is needed, please ask.

Many thanks,
Nick.