[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] how to handle fence for a simple apache active/passive cluster with virtual ip on 2 virtual machine

Ooooh, I'm not sure what option you have then. I suppose fence_virtd/fence_xvm is your best option, but you're going to need to have the admin configure the fence_virtd side.
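For reference, the host-side piece the admin would need is the fence_virtd configuration. A rough sketch of /etc/fence_virt.conf follows (normally generated interactively with `fence_virtd -c`; the bridge name and key path here are assumptions for a typical libvirt host, not your actual values):

```
# Sketch of /etc/fence_virt.conf on the hypervisor (assumed typical
# libvirt setup; normally generated by running `fence_virtd -c`).
fence_virtd {
        module_path = "/usr/lib64/fence-virt";
        backend = "libvirt";
        listener = "multicast";
}

listeners {
        multicast {
                key_file = "/etc/cluster/fence_xvm.key";  # same key copied to every guest
                address = "225.0.0.12";                   # default multicast group
                port = "1229";
                interface = "virbr0";                     # bridge the guests are attached to
                family = "ipv4";
        }
}

backends {
        libvirt {
                uri = "qemu:///system";
        }
}
```

The key file has to be copied to /etc/cluster/ on every guest as well, which is exactly why the host admin has to be involved.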

On 01/02/14 03:50 PM, nik600 wrote:
My problem is that I don't have root access at the host level.

On 01/Feb/2014 19:49, "Digimer" <lists alteeve ca> wrote:

    On 01/02/14 01:35 PM, nik600 wrote:

        Dear all,

        I need some clarification about clustering with RHEL 6.4.

        I have a cluster with 2 nodes in an active/passive configuration; I
        want a virtual IP that migrates between the 2 nodes.

        I've noticed that if I reboot or manually shut down a node the
        failover works correctly, but if I power off one node the cluster
        doesn't fail over to the other node.

        Another strange situation: if I power off all the nodes and then
        switch on only one, the cluster doesn't start on the active node.

        I've read the manual and documentation at


        and I've understood that the problem is related to fencing. The
        problem is that my 2 nodes are 2 virtual machines; I can't access
        the hardware and can't issue any custom command on the host side.

        I've tried to use fence_xvm, but I'm not sure about it: if my VM is
        powered off, how can it reply to fence_xvm messages?

        Here are my logs when I power off the VM:

        ==> /var/log/cluster/fenced.log <==
        Feb 01 18:50:22 fenced fencing node mynode02
        Feb 01 18:50:53 fenced fence mynode02 dev 0.0 agent fence_xvm
        error from agent
        Feb 01 18:50:53 fenced fence mynode02 failed

        I've tried to force manual fencing with:

        fence_ack_manual mynode02

        and in this case the failover works properly.

        The point is: as I'm not using any shared filesystem, only Apache
        with a virtual IP, I won't have any split-brain, so I don't need
        fencing... or do I?

        So, is there a way to have a simple "dummy" fencing agent?

        Here is my config.xml:

        <?xml version="1.0"?>
        <cluster config_version="20" name="hacluster">
                <fence_daemon clean_start="0" post_fail_delay="0"/>
                <cman expected_votes="1" two_node="1"/>
                <clusternodes>
                        <clusternode name="mynode01" nodeid="1" votes="1">
                                <fence>
                                        <method name="mynode01">
                                                <device domain="mynode01" name="mynode01"/>
                                        </method>
                                </fence>
                        </clusternode>
                        <clusternode name="mynode02" nodeid="2" votes="1">
                                <fence>
                                        <method name="mynode02">
                                                <device domain="mynode02" name="mynode02"/>
                                        </method>
                                </fence>
                        </clusternode>
                </clusternodes>
                <fencedevices>
                        <fencedevice agent="fence_xvm" name="mynode01"/>
                        <fencedevice agent="fence_xvm" name="mynode02"/>
                </fencedevices>
                <rm log_level="7">
                        <failoverdomains>
                                <failoverdomain name="MYSERVICE" ordered="0" restricted="0"/>
                        </failoverdomains>
                        <service autostart="1" exclusive="0" name="MYSERVICE">
                                <ip address=""/>
                                <apache config_file="conf/httpd.conf" name="apache"
                                        server_root="/etc/httpd" shutdown_wait="0"/>
                        </service>
                </rm>
        </cluster>

        Thanks to all in advance.

    The fence_virtd/fence_xvm agent works by using multicast to talk to
    the VM host. So the "off" confirmation comes from the hypervisor,
    not the target.
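    A quick way to verify that multicast path, assuming the shared key has already been distributed to the guests, is to query the hypervisor from inside a guest (the key path shown is the conventional default, an assumption about your setup):

```
# Run from inside one of the guests. If multicast between guest and
# host works and fence_virtd is running on the hypervisor, this prints
# the list of VMs the host knows about; a hang or timeout means the
# multicast path or the key is broken.
fence_xvm -o list -k /etc/cluster/fence_xvm.key

# Then try a status query against the peer node by its domain name:
fence_xvm -o status -H mynode02
```

    If the list call hangs, fencing a dead node will fail the same way, which matches the "error from agent" lines in your fenced.log.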

    Depending on your setup, you might have better luck with fence_virsh
    (I have to use it myself, as there is a known multicast issue on
    Fedora hosts). Can you try, as a test if nothing else, whether
    'fence_virsh' works for you?

    fence_virsh -a <host ip> -l root -p <host root pw> -n <virsh name
    for target vm> -o status

    If this works, it should be trivial to add to cluster.conf, and you
    will have a working fence method. However, I would recommend
    switching back to fence_xvm if you can; the fence_virsh agent
    depends on libvirtd running on the host, which some consider a risk.
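    For what it's worth, the cluster.conf change would look something like this sketch (the host IP, the plain-text password, and the device name "virsh_fence" are placeholders, not values from your setup; fence_virsh's "port" is the virsh domain name of the VM being fenced):

```xml
<!-- Sketch only: one node shown; mynode02 is symmetric.
     Replace ipaddr/login/passwd with your host's real details. -->
<clusternode name="mynode01" nodeid="1" votes="1">
        <fence>
                <method name="virsh">
                        <device name="virsh_fence" port="mynode01"/>
                </method>
        </fence>
</clusternode>

<fencedevices>
        <fencedevice agent="fence_virsh" name="virsh_fence"
                     ipaddr="192.168.122.1" login="root" passwd="secret"/>
</fencedevices>
```

    Note this still needs a host account that can run virsh, so it doesn't remove the need to involve the host admin.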


    Papers and Projects: https://alteeve.ca/w/
    What if the cure for cancer is trapped in the mind of a person
    without access to education?

    Linux-cluster mailing list
    Linux-cluster redhat com

