
Re: [Linux-cluster] CMAN nodes in different LANs



On 10/17/2012 03:12 PM, Terance Dias wrote:
> Hi,
> 
> We're trying to create a cluster in which the nodes lie in 2 different
> LANs. Since the nodes lie in different networks, they cannot resolve the
> other node by their internal IP. So in my cluster.conf file, I've
> provided their external IPs. But now when I start CMAN service, I get
> the following error.
> 

First of all, we have never tested nodes on different LANs, so you may
hit issues there that we are not aware of (besides that, latency between
nodes *MUST* be < 2ms).

As for using an IP address as the node name, that should work, but I
recall fixing something related to it not too long ago.
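
For the "Cannot find node name" error specifically: cman takes the local
node name from uname -n and looks for a matching
<clusternode name="..."/> entry in cluster.conf. One common way to set
this up across networks is to use hostnames that resolve to the
reachable (external) addresses. A sketch only; the names node1/node2 and
the addresses below are placeholders, not taken from your config:

-------------------------------------

  # /etc/hosts on both nodes (example addresses)
  203.0.113.10   node1
  198.51.100.20  node2

  <!-- cluster.conf: node names now resolve to the external IPs -->
  <clusternodes>
    <clusternode name="node1" nodeid="1"/>
    <clusternode name="node2" nodeid="2"/>
  </clusternodes>

-------------------------------------

Then check that uname -n on each node prints node1 / node2
respectively, so the name cman derives locally matches an entry in
cluster.conf.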

What version of cman did you install and which distribution/OS?

Fabio

> -----------------------------------
> 
> Starting cluster:
>    Checking Network Manager... [  OK  ]
>    Global setup... [  OK  ]
>    Loading kernel modules... [  OK  ]
>    Mounting configfs... [  OK  ]
>    Starting cman... Cannot find node name in cluster.conf
> Unable to get the configuration
> Cannot find node name in cluster.conf
> cman_tool: corosync daemon didn't start
> [FAILED]
> 
> -------------------------------------
> 
> My cluster.conf file is as below
> 
> -------------------------------------
> 
> <?xml version="1.0"?>
> <!--
> This is an example of a cluster.conf file to run qpidd HA under rgmanager.
> 
> NOTE: fencing is not shown, you must configure fencing appropriately for
> your cluster.
> -->
> 
> <cluster name="test-cluster" config_version="18">
>   <!-- The cluster has 2 nodes. Each has a unique nodeid and one vote
>        for quorum. -->
>   <clusternodes>
>     <clusternode name="/external-ip-1/" nodeid="1"/>
>     <clusternode name="/external-ip-2/" nodeid="2"/>
>   </clusternodes>
>   <cman two_node="1" expected_votes="1" transport="udpu">
>   </cman>
>   <!-- Resource Manager configuration. -->
>   <rm>
>     <!--
>         There is a failoverdomain for each node containing just that node.
>         This lets us stipulate that the qpidd service should always run
> on each node.
>     -->
>     <failoverdomains>
>       <failoverdomain name="east-domain" restricted="1">
>         <failoverdomainnode name="/external-ip-1/"/>
>       </failoverdomain>
>       <failoverdomain name="west-domain" restricted="1">
>         <failoverdomainnode name="/external-ip-2/"/>
>       </failoverdomain>
>     </failoverdomains>
> 
>     <resources>
>       <!-- This script starts a qpidd broker acting as a backup. -->
>       <script file="/usr/local/etc/init.d/qpidd" name="qpidd"/>
> 
>       <!-- This script promotes the qpidd broker on this node to
> primary. -->
>       <script file="/usr/local/etc/init.d/qpidd-primary"
> name="qpidd-primary"/>
>     </resources>
> 
>     <!-- There is a qpidd service on each node, it should be restarted
> if it fails. -->
>     <service name="east-qpidd-service" domain="east-domain"
> recovery="restart">
>       <script ref="qpidd"/>
>     </service>
>     <service name="west-qpidd-service" domain="west-domain"
> recovery="restart">
>       <script ref="qpidd"/>
>     </service>
> 
>     <!-- There should always be a single qpidd-primary service, it can
> run on any node. -->
>     <service name="qpidd-primary-service" autostart="1" exclusive="0"
> recovery="relocate">
>       <script ref="qpidd-primary"/>
>     </service>
>   </rm>
> </cluster>
> ------------------------------------------------
> 
> Thanks,
> Terance
> 
> 
> 

