
Re: [Linux-cluster] Re: Starting up two of three nodes that compose a cluster



David Teigland wrote:
On Thu, Sep 20, 2007 at 11:40:55AM +0200, carlopmart wrote:
Please, any hints??

-------- Original Message --------
Subject: Starting up two of three nodes that compose a cluster
Date: Wed, 19 Sep 2007 14:51:46 +0200
From: carlopmart <carlopmart gmail com>
To: linux clustering <linux-cluster redhat com>

Hi all,

 I have set up a RHEL5-based cluster with three nodes. Sometimes I need
to start only two of these three nodes, but the cluster services I
configured don't start (fenced fails). Is it not possible to start up
only two nodes of a three-node cluster? Maybe I need to adjust the votes
parameter to two instead of three?

Could you be more specific about what you run, where, what happens,
what messages you see, etc.?

Dave


Yes,

First, I have attached my cluster.conf. /etc/init.d/cman starts and returns OK, but when I try to mount my GFS partition I get this error:

[root haldir cluster]# service mountgfs start
Mounting GFS filesystems: /sbin/mount.gfs: lock_dlm_join: gfs_controld join error: -22
/sbin/mount.gfs: error mounting lockproto lock_dlm
                                                           [FAILED]
[root haldir cluster]#
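
For context, error -22 is EINVAL, which gfs_controld returns when it rejects the mount-group join request, for example when the fence domain has not finished starting. A few RHEL5 cluster-suite commands that may help narrow this down (a sketch; the output will differ per cluster):

```shell
# Show quorum state and vote counts as seen by cman
cman_tool status

# List node membership and join state
cman_tool nodes

# Show fence, dlm, and gfs group state; gfs_controld refuses
# mounts until the fence group is clean
group_tool ls
```
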

And of course no service can start... The clustat output is:

[root haldir cluster]# clustat
Member Status: Quorate

  Member Name                        ID   Status
  ------ ----                        ---- ------
  thranduil.hpulabs.org                 1 Online
  haldir.hpulabs.org                    2 Online, Local, rgmanager
  elrond.hpulabs.org                    3 Offline

  Service Name         Owner (Last)                   State
  ------- ----         ----- ------                   -----
  service:rsync-svc    (none)                         stopped
  service:wwwsoft-svc  (none)                         stopped
  service:proxy-svc    (none)                         stopped
  service:mail-svc     (none)                         stopped
[root haldir cluster]#
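
On the votes question from the original message: in cman, quorum is expected_votes divided by two (integer division) plus one, so with three one-vote nodes quorum is 2 and two running nodes are already quorate, which matches the "Quorate" line above. A quick check of that arithmetic (assuming the standard cman formula):

```shell
# Quorum for a three-node, one-vote-per-node cluster
expected_votes=3
quorum=$(( expected_votes / 2 + 1 ))
echo "$quorum"   # → 2
```
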


P.S.: mountgfs is a simple script that mounts GFS partitions, because the gfs init script provided by Red Hat doesn't work with the _netdev parameter in fstab.
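
For reference, the approach described above can be sketched as an fstab entry plus a small wrapper; the device path and mount point here are hypothetical, and the wrapper is only a guess at what a minimal mountgfs might do:

```shell
# Hypothetical /etc/fstab entry (_netdev defers the mount until
# networking is up):
#   /dev/gnbd/shared0  /data  gfs  _netdev,noatime  0 0

# Minimal wrapper in the spirit of mountgfs: mount every fstab
# entry of type gfs, to be run after the cluster stack is up.
mount -a -t gfs
```
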

--
CL Martinez
carlopmart {at} gmail {d0t} com
<?xml version="1.0" ?>
<cluster alias="XenDomUcluster" config_version="56" name="XenDomUcluster">
	<fence_daemon post_fail_delay="0" post_join_delay="3"/>
	<clusternodes>
		<clusternode name="thranduil.hpulabs.org" nodeid="1" votes="1">
			<fence>
				<method name="1">
					<device name="gnbd-fence" nodename="thranduil.hpulabs.org"/>
				</method>
			</fence>
			<multicast addr="239.192.75.55" interface="eth0"/>
		</clusternode>
		<clusternode name="haldir.hpulabs.org" nodeid="2" votes="1">
			<fence>
				<method name="1">
					<device name="gnbd-fence" nodename="haldir.hpulabs.org"/>
				</method>
			</fence>
			<multicast addr="239.192.75.55" interface="eth0"/>
		</clusternode>
		<clusternode name="elrond.hpulabs.org" nodeid="3" votes="1">
			<fence>
				<method name="1">
					<device name="gnbd-fence" nodename="elrond.hpulabs.org"/>
				</method>
			</fence>
			<multicast addr="239.192.75.55" interface="eth0"/>
		</clusternode>
	</clusternodes>
	<cman expected_votes="2">
		<multicast addr="239.192.75.55"/>
	</cman>
	<fencedevices>
		<fencedevice agent="fence_xvm" name="xen-fence"/>
		<fencedevice agent="fence_gnbd" name="gnbd-fence" servers="deagol.hpulabs.org"/>
		<fencedevice agent="fence_manual" name="manual-fence"/>
	</fencedevices>
	<rm log_facility="local4" log_level="7">
		<failoverdomains>
			<failoverdomain name="FullCluster" ordered="1" restricted="1">
				<failoverdomainnode name="thranduil.hpulabs.org" priority="2"/>
				<failoverdomainnode name="haldir.hpulabs.org" priority="3"/>
				<failoverdomainnode name="elrond.hpulabs.org" priority="1"/>
			</failoverdomain>
			<failoverdomain name="PriCluster" ordered="1" restricted="1">
				<failoverdomainnode name="thranduil.hpulabs.org" priority="2"/>
				<failoverdomainnode name="haldir.hpulabs.org" priority="1"/>
			</failoverdomain>
			<failoverdomain name="SecCluster" ordered="1" restricted="1">
				<failoverdomainnode name="haldir.hpulabs.org" priority="2"/>
				<failoverdomainnode name="elrond.hpulabs.org" priority="1"/>
			</failoverdomain>
			<failoverdomain name="ThrCluster" ordered="1" restricted="1">
				<failoverdomainnode name="thranduil.hpulabs.org" priority="1"/>
				<failoverdomainnode name="elrond.hpulabs.org" priority="2"/>
			</failoverdomain>
		</failoverdomains>
		<resources>
			<ip address="172.25.50.11" monitor_link="1"/>
			<ip address="172.25.50.12" monitor_link="1"/>
			<ip address="172.25.50.13" monitor_link="1"/>
			<ip address="172.25.50.14" monitor_link="1"/>
			<ip address="172.25.50.15" monitor_link="1"/>
			<ip address="172.25.50.16" monitor_link="1"/>
			<ip address="172.25.50.17" monitor_link="1"/>
		</resources>
		<service autostart="1" domain="FullCluster" name="rsync-svc" recovery="relocate">
			<ip ref="172.25.50.12">
				<script file="/data/cfgcluster/etc/init.d/rsyncd" name="rsyncd"/>
			</ip>
		</service>
		<service autostart="1" domain="PriCluster" name="wwwsoft-svc">
			<ip ref="172.25.50.13">
				<script file="/data/cfgcluster/etc/init.d/httpd-mirror" name="httpd-mirror"/>
			</ip>
		</service>
		<service autostart="1" domain="ThrCluster" name="proxy-svc" recovery="relocate">
			<ip ref="172.25.50.14">
				<script file="/data/cfgcluster/etc/init.d/squid" name="squid"/>
			</ip>
		</service>
		<service autostart="1" domain="FullCluster" name="mail-svc" recovery="relocate">
			<ip ref="172.25.50.15">
				<script file="/data/cfgcluster/etc/init.d/postfix-cluster" name="postfix-cluster"/>
			</ip>
		</service>
	</rm>
</cluster>
