[Linux-cluster] CMAN nodes in different LANs

Terance Dias terance at socialtwist.com
Mon Oct 29 05:24:38 UTC 2012


Thanks for your reply, Fabio. I think the problem may be at our end. Our
infrastructure is on Amazon EC2, and it turns out that the interfaces file
of an EC2 node has no reference to its public IP address (the public
address is NAT'ed by Amazon rather than bound to a local interface).
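For anyone hitting the same problem, the sketch below shows one possible workaround, under the assumption that the cluster.conf node names should resolve to the peers' public addresses. The hostnames and IPs are placeholders, not values from this thread. It uses the standard EC2 instance metadata endpoint to read a node's own public IPv4, and builds an /etc/hosts entry for the peer.

```shell
#!/bin/sh
# Sketch of a workaround (placeholder names/IPs): since an EC2 node's
# public IP is NAT'ed and never appears on a local interface, map the
# node names used in cluster.conf to public IPs in /etc/hosts by hand.

# Format one /etc/hosts entry: "<ip><TAB><name>"
hosts_entry() {
    printf '%s\t%s\n' "$1" "$2"
}

# On a real instance, the node's own public IP can be read from the
# standard EC2 metadata endpoint (commented out here; works only on EC2):
# MY_PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Placeholder peer data -- substitute the real values from cluster.conf:
PEER_IP=203.0.113.20
PEER_NAME=node2.example.com

# Print the entry; on each node, append it to /etc/hosts as root, e.g.:
#   hosts_entry "$PEER_IP" "$PEER_NAME" >> /etc/hosts
hosts_entry "$PEER_IP" "$PEER_NAME"
```

With an entry like this on each node, cman can resolve the peer's name even though the public address is not visible on any local interface.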

On Mon, Oct 22, 2012 at 1:03 PM, Fabio M. Di Nitto <fdinitto at redhat.com> wrote:

> On 10/17/2012 03:12 PM, Terance Dias wrote:
> > Hi,
> >
> > We're trying to create a cluster in which the nodes lie in 2 different
> > LANs. Since the nodes lie in different networks, they cannot resolve the
> > other node by their internal IP. So in my cluster.conf file, I've
> > provided their external IPs. But now when I start CMAN service, I get
> > the following error.
> >
>
> First of all, we have never tested nodes on different LANs, so you might
> have issues there that we are not aware of (besides that, latency between
> nodes *MUST* be < 2ms).
>
> As for the IP/name that should work, but I recall fixing something
> related not too long ago.
>
> What version of cman did you install and which distribution/OS?
>
> Fabio
>
> > -----------------------------------
> >
> > Starting cluster:
> >    Checking Network Manager... [  OK  ]
> >    Global setup... [  OK  ]
> >    Loading kernel modules... [  OK  ]
> >    Mounting configfs... [  OK  ]
> >    Starting cman... Cannot find node name in cluster.conf
> > Unable to get the configuration
> > Cannot find node name in cluster.conf
> > cman_tool: corosync daemon didn't start
> > [FAILED]
> >
> > -------------------------------------
> >
> > My cluster.conf file is as below
> >
> > -------------------------------------
> >
> > <?xml version="1.0"?>
> > <!--
> > This is an example of a cluster.conf file to run qpidd HA under
> rgmanager.
> >
> > NOTE: fencing is not shown, you must configure fencing appropriately for
> > your cluster.
> > -->
> >
> > <cluster name="test-cluster" config_version="18">
> >   <!-- The cluster has 2 nodes. Each has a unique nodeid and one vote
> >        for quorum. -->
> >   <clusternodes>
> >     <clusternode name="/external-ip-1/" nodeid="1"/>
> >     <clusternode name="/external-ip-2/" nodeid="2"/>
> >   </clusternodes>
> >   <cman two_node="1" expected_votes="1" transport="udpu">
> >   </cman>
> >   <!-- Resource Manager configuration. -->
> >   <rm>
> >     <!--
> >         There is a failoverdomain for each node containing just that
> node.
> >         This lets us stipulate that the qpidd service should always run
> > on each node.
> >     -->
> >     <failoverdomains>
> >       <failoverdomain name="east-domain" restricted="1">
> >         <failoverdomainnode name="/external-ip-1/"/>
> >       </failoverdomain>
> >       <failoverdomain name="west-domain" restricted="1">
> >         <failoverdomainnode name="/external-ip-2/"/>
> >       </failoverdomain>
> >     </failoverdomains>
> >
> >     <resources>
> >       <!-- This script starts a qpidd broker acting as a backup. -->
> >       <script file="/usr/local/etc/init.d/qpidd" name="qpidd"/>
> >
> >       <!-- This script promotes the qpidd broker on this node to
> > primary. -->
> >       <script file="/usr/local/etc/init.d/qpidd-primary"
> > name="qpidd-primary"/>
> >     </resources>
> >
> >     <!-- There is a qpidd service on each node; it should be restarted
> > if it fails. -->
> >     <service name="east-qpidd-service" domain="east-domain"
> > recovery="restart">
> >       <script ref="qpidd"/>
> >     </service>
> >     <service name="west-qpidd-service" domain="west-domain"
> > recovery="restart">
> >       <script ref="qpidd"/>
> >     </service>
> >
> >     <!-- There should always be a single qpidd-primary service; it can
> > run on any node. -->
> >     <service name="qpidd-primary-service" autostart="1" exclusive="0"
> > recovery="relocate">
> >       <script ref="qpidd-primary"/>
> >     </service>
> >   </rm>
> > </cluster>
> > ------------------------------------------------
> >
> > Thanks,
> > Terance
> >
> >
> >
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>