[Linux-cluster] permanently removing node from running cluster
Martin Waite
Martin.Waite at datacash.com
Mon Jun 21 09:14:17 UTC 2010
Hi,
Is it possible to permanently remove a node from a running cluster?
All my attempts leave the node in the state "Offline, Estranged", and the
node still counts towards the "Nodes:" figure reported by cman_tool status
(though not towards "Expected votes:", so I believe the quorum size is
correct).
It appears that the only way to permanently remove references to a node
is to restart cman on the surviving nodes.
My procedure for removing the node is:
1. relocate any services running on the node
2. edit cluster.conf to remove the node from clusternodes
3. push the config to the cluster with ccs_tool
4. stop rgmanager on the node to be removed
5. stop cman on the node to be removed
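
For reference, the edit in step 2 amounts to deleting the node's
clusternode entry and incrementing config_version so that ccs_tool will
accept the update in step 3. A minimal sketch (votes and the surrounding
attributes here are illustrative, not copied from my real config):

```xml
<!-- /etc/cluster/cluster.conf (illustrative fragment) -->
<!-- config_version must be bumped, e.g. 18 -> 19, before pushing -->
<cluster name="EDISV1DBM" config_version="19">
  <clusternodes>
    <!-- <clusternode name="svXprdclu001" nodeid="1" votes="1"/> deleted -->
    <clusternode name="svXprdclu002" nodeid="2" votes="1"/>
    <clusternode name="svXprdclu003" nodeid="3" votes="1"/>
    <clusternode name="svXprdclu004" nodeid="4" votes="1"/>
    <clusternode name="svXprdclu005" nodeid="5" votes="1"/>
  </clusternodes>
</cluster>
```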
At this point, clustat on a surviving node shows:
Cluster Status for EDISV1DBM @ Mon Jun 21 09:46:45 2010
Member Status: Quorate
Member Name       ID   Status
------ ----       ---- ------
svXprdclu002      2    Online, Local, rgmanager
svXprdclu003      3    Online, rgmanager
svXprdclu004      4    Online, rgmanager
svXprdclu005      5    Online, rgmanager
svXprdclu001      1    Offline, Estranged
Service Name          Owner (Last)      State
------- ----          ----- ------      -----
service:ACTIVESITE    svXprdclu002      started
service:MASTERVIP     svXprdclu002      started
The removed node (svXprdclu001) is still known to the cluster, but is
now "estranged".
The node has been removed from the "Expected votes" count, but not the
"Nodes" count:
sudo /usr/sbin/cman_tool status
Version: 6.2.0
Config Version: 19
Cluster Name: EDISV1DBM
Cluster Id: 35945
Cluster Member: Yes
Cluster Generation: 1008
Membership state: Cluster-Member
Nodes: 5
Expected votes: 4
Total votes: 4
Quorum: 3
Active subsystems: 8
Flags: Dirty
Ports Bound: 0 177
Node name: svXprdclu004
Node ID: 4
Multicast addresses: 239.192.0.1
Node addresses: 10.3.18.24
If I then choose a node (one not running the services) and restart cman on
it, that node no longer sees the removed node:
[martin at cp1edidbm003 ~]$ sudo /usr/sbin/clustat
Cluster Status for EDISV1DBM @ Mon Jun 21 09:53:34 2010
Member Status: Quorate
Member Name       ID   Status
------ ----       ---- ------
svXprdclu002      2    Online
svXprdclu003      3    Online, Local
svXprdclu004      4    Online
svXprdclu005      5    Online
[martin at cp1edidbm003 ~]$ sudo /usr/sbin/cman_tool status
Version: 6.2.0
Config Version: 19
Cluster Name: EDISV1DBM
Cluster Id: 35945
Cluster Member: Yes
Cluster Generation: 1008
Membership state: Cluster-Member
Nodes: 4
Expected votes: 4
Total votes: 4
Quorum: 3
Active subsystems: 7
Flags: Dirty
Ports Bound: 0
Node name: svXprdclu003
Node ID: 3
Multicast addresses: 239.192.0.1
Node addresses: 10.3.18.23
However, I would prefer not to have to relocate my services just so that I
can restart cman on every node.