
Re: [Linux-cluster] help



Hello Emmanuel
 
Here is my config file; I copied only parts of it:
 
<?xml version="1.0"?>
<cluster alias="testsapcluster" config_version="169" name="testsapcluster">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="sapclsn1.sedas.com" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="fence_node1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="sapclsn2.sedas.com" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="fence_node2"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="3" quorum_dev_poll="70000" two_node="0"/>
        <totem token="70000"/>
        <quorumd interval="4" label="quorum" min_score="1" tko="15" votes="1" master_wins="1"/>
</cluster>
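As a sanity check on these values (a sketch, using the common rule of thumb that the qdisk timeout, interval * tko, should expire before the totem token timeout so qdisk can evict a node first):

```shell
# Values taken from the cluster.conf above.
interval=4        # qdisk polling interval, in seconds
tko=15            # missed cycles before qdisk declares a node dead
token_ms=70000    # totem token timeout, in milliseconds

qdisk_timeout=$((interval * tko))   # seconds until qdisk evicts a node
token_s=$((token_ms / 1000))

echo "qdisk timeout: ${qdisk_timeout}s, token timeout: ${token_s}s"
if [ "$qdisk_timeout" -lt "$token_s" ]; then
    echo "OK: qdisk evicts before the token timeout fires"
else
    echo "WARNING: raise token or lower interval*tko"
fi
```

With the values above this works out to 60s versus 70s, so the timeouts themselves look consistent.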

From: emmanuel segura <emi2fast gmail com>
To: AKIN ÖZTOPUZ <akinoztopuz yahoo com>; linux clustering <linux-cluster redhat com>
Sent: Wednesday, May 30, 2012 11:28 AM
Subject: Re: [Linux-cluster] help

Hello AKIN

can you show me your full cluster config and your cluster log?

If you think the problem is with the quorum disk, you can check it with this command:

mkqdisk -d -L

2012/5/30 AKIN ÖZTOPUZ <akinoztopuz yahoo com>
Hello  Digimer
 
I am using qdisk with <quorumd interval="4" label="quorum" min_score="1" tko="15" votes="1" master_wins="1"/>
not heuristics. qdisk is suggested for avoiding split-brain.
 
In this configuration my problem is: node1 is the master, and node2 is continuously killed by node1.
The connection is over iSCSI.
I think it is a timing problem.
Do you have any idea?
 
 
/var/log/messages looks like this:
 
sapclsn2 clurgmgrd[8761]: <notice> Service service:sap is stopped
openais[6755]: [CMAN ] cman killed by node 1 because we were killed by cman_tool or other application 

openais[6755]: [SERV ] AIS Executive exiting (reason: CMAN kill requested, exiting). 
fenced[6788]: cluster is down, exiting

From: Digimer <lists alteeve ca>
To: AKIN ÖZTOPUZ <akinoztopuz yahoo com>; linux clustering <linux-cluster redhat com>
Sent: Tuesday, May 29, 2012 10:46 PM
Subject: Re: [Linux-cluster] help

On 05/29/2012 02:54 AM, AKIN ÖZTOPUZ wrote:
>    Hello

> I need configuration steps for a 2-node RHEL 5 cluster with a quorum disk.

> my config is like this:

> each node has 1 vote
> quorum disk has 1 vote
> cman expected vote is 3
> quorum will be maintained with 2 votes, so when one node is down the
> cluster will stay up

> What other things should I care about in the cluster config?

> thanks in advance

First off, I *strongly* recommend using RHEL6, not 5. There have been
many improvements, plus, EL6 will be supported longer.

What are you trying to do with your cluster? I suspect that a quorum
disk is not necessary, though if you have a SAN anyway, I wouldn't argue
against using it. Unless you need the heuristics though, it's probably
not needed. You can safely build a 2-node cluster, you just need to make
sure your fence devices work (which is needed, regardless).
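For completeness, the cluster.conf would also need a <fencedevices> section matching the device names referenced above (fence_node1/fence_node2). A sketch, assuming IPMI-based fencing; the agent choice, IP addresses, and credentials below are placeholders, not taken from the original config:

```xml
<!-- Hypothetical fragment: ipaddr/login/passwd are placeholders. -->
<fencedevices>
        <fencedevice agent="fence_ipmilan" name="fence_node1"
                     ipaddr="IPMI_IP_OF_NODE1" login="ADMIN" passwd="SECRET"/>
        <fencedevice agent="fence_ipmilan" name="fence_node2"
                     ipaddr="IPMI_IP_OF_NODE2" login="ADMIN" passwd="SECRET"/>
</fencedevices>
```

Once configured, running `fence_node <nodename>` from the surviving node is a quick way to verify the device actually powers the peer off.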

I have a tutorial that shows how to implement a cluster using cman,
clvmd and other components. It also discusses the cluster components
from a high-level, which may help.

https://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial

--
Digimer
Papers and Projects: https://alteeve.com



--
Linux-cluster mailing list
Linux-cluster redhat com
https://www.redhat.com/mailman/listinfo/linux-cluster



--
this is my life and I live it for as long as God wills


