
[Linux-cluster] updating cluster.conf on one node, when the other is down



Hi,
taking node2 down, updating the cluster configuration on node1, running
"cman_tool version -r 7" on node1, and then booting node2 gives the
errors below:

corosync[1790]:   [QUORUM] This node is within the primary component and will provide service.
corosync[1790]:   [QUORUM] Members[1]: 
corosync[1790]:   [QUORUM]     2 
corosync[1790]:   [CLM   ] CLM CONFIGURATION CHANGE
corosync[1790]:   [CLM   ] New Configuration:
corosync[1790]:   [CLM   ] 	r(0) ip(192.168.122.228) 
corosync[1790]:   [CLM   ] Members Left:
corosync[1790]:   [CLM   ] Members Joined:
corosync[1790]:   [CLM   ] CLM CONFIGURATION CHANGE
corosync[1790]:   [CLM   ] New Configuration:
corosync[1790]:   [CLM   ] 	r(0) ip(192.168.122.82) 
corosync[1790]:   [CLM   ] 	r(0) ip(192.168.122.228) 
corosync[1790]:   [CLM   ] Members Left:
corosync[1790]:   [CLM   ] Members Joined:
corosync[1790]:   [CLM   ] 	r(0) ip(192.168.122.82) 
corosync[1790]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
corosync[1790]:   [CMAN  ] Can't get updated config version 7, config file is version 5.
corosync[1790]:   [QUORUM] This node is within the primary component and will provide service.
corosync[1790]:   [QUORUM] Members[1]: 
corosync[1790]:   [QUORUM]     2 
corosync[1790]:   [CMAN  ] Node 1 conflict, remote config version id=7, local=5
corosync[1790]:   [MAIN  ] Completed service synchronization, ready to provide service.
corosync[1790]:   [CMAN  ] Can't get updated config version 7, config file is version 5.

Afterwards, corosync spins at 100% CPU usage. This is cluster 3.0.0
with corosync/openais 1.0.0; cluster.conf is attached. Any ideas?
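The number cman complains about is the config_version attribute at the top of cluster.conf, so the mismatch can be confirmed by reading that attribute on each node. A minimal check, shown here against a throwaway copy rather than the live /etc/cluster/cluster.conf:

```shell
# Extract the config_version that cman compares at join time.
# (Throwaway copy of the file; on a real node this would read
# /etc/cluster/cluster.conf instead.)
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
<?xml version="1.0"?>
<cluster config_version="7" name="agx">
</cluster>
EOF
sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' "$tmp"   # prints: 7
rm -f "$tmp"
```

Running this on both nodes here shows 7 on node1 and 5 on node2, which matches the "remote config version id=7, local=5" line in the log.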
Cheers,
 -- Guido
<?xml version="1.0"?>
<cluster config_version="7" name="agx">
  <cman two_node="1" expected_votes="2"/>
  <dlm log_debug="1"/>
  <clusternodes>
    <clusternode name="node1.foo.bar" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="fence1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.foo.bar" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="fence2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>

  <fencedevices>
    <fencedevice agent="fence_xvm" domain="node1" name="fence1"/>
    <fencedevice agent="fence_xvm" domain="node2" name="fence2"/>
  </fencedevices>

  <rm log_level="7">
    <failoverdomains>
      <failoverdomain name="kvm-hosts" ordered="1">
        <failoverdomainnode name="node1.foo.bar"/>
        <failoverdomainnode name="node2.foo.bar"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <virt name="test11"/>
      <virt name="test12"/>
    </resources>
    <service name="test11">
      <virt ref="test11"/>
    </service>
    <service name="test12">
      <virt ref="test12"/>
    </service>
  </rm>
</cluster>
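For reference, the workaround I would try next (an assumption, not something I have confirmed works): after bumping config_version on node1, push the file to node2 by hand before cman starts there, so both sides agree on the version. The bump itself, demonstrated on a throwaway copy:

```shell
# Bump config_version in a throwaway copy of cluster.conf.
# On the real cluster the edit happens in /etc/cluster/cluster.conf
# and the file would then need to reach node2 before cman starts
# there, e.g. (untested assumption):
#   scp /etc/cluster/cluster.conf node2.foo.bar:/etc/cluster/
tmp=$(mktemp)
printf '<cluster config_version="7" name="agx"/>\n' > "$tmp"
sed -i 's/config_version="7"/config_version="8"/' "$tmp"
grep -o 'config_version="[0-9]*"' "$tmp"   # prints: config_version="8"
rm -f "$tmp"
```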
