[Linux-cluster] Two nodes DRBD - Fail-Over Active/Passive Cluster.

Gordan Bobic gordan at bobich.net
Tue Feb 15 21:09:14 UTC 2011


On 02/15/2011 08:50 PM, vincent.blondel at ing.be wrote:

>> below the cluster.conf file ...
>>
>>
>> <?xml version="1.0"?>
>> <cluster name="cluster" config_version="6">
>>    <!-- post_join_delay: number of seconds the daemon will wait before
>>                          fencing any victims after a node joins the domain
>>         post_fail_delay: number of seconds the daemon will wait before
>>                        fencing any victims after a domain member fails
>>         clean_start    : prevent any startup fencing the daemon might do.
>>                        It indicates that the daemon should assume all nodes
>>                        are in a clean state to start. -->
>>    <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
>>    <clusternodes>
>>      <clusternode name="reporter1.lab.intranet" votes="1" nodeid="1">
>>        <fence>
>>          <!-- Handle fencing manually -->
>>          <method name="human">
>>            <device name="human" nodename="reporter1.lab.intranet"/>
>>          </method>
>>        </fence>
>>      </clusternode>
>>      <clusternode name="reporter2.lab.intranet" votes="1" nodeid="2">
>>        <fence>
>>          <!-- Handle fencing manually -->
>>          <method name="human">
>>            <device name="human" nodename="reporter2.lab.intranet"/>
>>          </method>
>>        </fence>
>>      </clusternode>
>>    </clusternodes>
>>    <!-- cman two nodes specification -->
>>    <cman expected_votes="1" two_node="1"/>
>>    <fencedevices>
>>      <!-- Define manual fencing -->
>>      <fencedevice name="human" agent="fence_manual"/>
>>    </fencedevices>
>>    <rm>
>>       <failoverdomains>
>>          <failoverdomain name="example_pri" nofailback="0" ordered="1" restricted="0">
>>             <failoverdomainnode name="reporter1.lab.intranet" priority="1"/>
>>             <failoverdomainnode name="reporter2.lab.intranet" priority="2"/>
>>          </failoverdomain>
>>       </failoverdomains>
>>       <resources>
>>             <ip address="10.30.30.92" monitor_link="on" sleeptime="10"/>
>>             <apache config_file="conf/httpd.conf" name="example_server" server_root="/etc/httpd" shutdown_wait="0"/>
>>        </resources>
>>        <service autostart="1" domain="example_pri" exclusive="0" name="example_apache" recovery="relocate">
>>                  <ip ref="10.30.30.92"/>
>>                  <apache ref="example_server"/>
>>        </service>
>>    </rm>
>> </cluster>
>>
>> and this is the result I get on both servers ...
>>
>> [root at reporter1 ~]# clustat
>> Cluster Status for cluster @ Mon Feb 14 22:22:53 2011
>> Member Status: Quorate
>>
>>   Member Name                                      ID   Status
>>   ------ ----                                      ---- ------
>>   reporter1.lab.intranet                               1 Online, Local, rgmanager
>>   reporter2.lab.intranet                               2 Online, rgmanager
>>
>>   Service Name                            Owner (Last)                            State
>>   ------- ----                            ----- ------                            -----
>>   service:example_apache                  (none)                                  stopped
>>
>> as you can see, everything is stopped or, in other words, nothing runs .. so my questions are:

Having a read through /var/log/messages for possible causes would be a 
good start.
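On a stock RHEL syslog setup rgmanager, clurgmgrd and fenced all log there, so something along these lines on each node should show why the service never started (the exact daemon names vary a bit between releases):

  grep -E 'rgmanager|clurgmgrd|fenced' /var/log/messages | tail -50

With fence_manual in particular, if fenced is sitting there waiting for a manual fence acknowledgement you will see it in those messages, and nothing will start until you run fence_ack_manual for the node in question.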

>> do I have to manually configure the load-balanced IP 10.30.30.92 as an alias IP on both sides, or is it done automatically by the Red Hat cluster ?

RHCS will automatically assign the IP to an interface that is on the 
same subnet. You most definitely shouldn't create the IP manually on any 
of the nodes.
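Once rgmanager actually brings the service up, you can verify it with something like the following (rgmanager adds the address as a secondary IP, so plain ifconfig may not show it):

  ip -4 addr show | grep 10.30.30.92

And since your service is currently sitting in the "stopped" state, you can also try enabling it by hand and watching what happens, e.g.:

  clusvcadm -e example_apache -m reporter1.lab.intranet

then check clustat and the logs again for the result.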

>> I just made a simple try with apache, but I cannot find any reference to the start/stop script for apache in the examples; is that normal ??
>> do you have any best practices regarding this setup ??

I'm not familiar with the <apache> tag in cluster.conf; I usually 
configure most things as init script resources.
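As a rough, untested sketch (assuming httpd's stock init script lives at /etc/init.d/httpd, and "httpd_init" is just a name I picked), that would look something like this in place of the <apache> resource:

  <resources>
     <ip address="10.30.30.92" monitor_link="on" sleeptime="10"/>
     <script file="/etc/init.d/httpd" name="httpd_init"/>
  </resources>
  <service autostart="1" domain="example_pri" exclusive="0" name="example_apache" recovery="relocate">
     <ip ref="10.30.30.92"/>
     <script ref="httpd_init"/>
  </service>

The script agent just calls the init script with start/stop/status, so anything with a sane LSB-style init script can be clustered that way.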

Gordan

