[Linux-cluster] Quorum disk

Tomasz Sucharzewski tsucharz at poczta.onet.pl
Wed Feb 18 19:56:51 UTC 2009


I had the same issue and I solved it:
just increase the quorum check interval. 2 seconds is too short to inform
cman about quorum status.
I had to increase it to 7 seconds, but remember that it also influences
the cman timeout, which must be verified.
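
For example, roughly like this in cluster.conf (a sketch only; interval=7,
tko=3 and token=45000 are illustrative values, and the usual rule of thumb
is that the cman/totem token timeout should be at least twice the qdisk
membership timeout, i.e. interval * tko, expressed in milliseconds):

        <!-- qdisk membership timeout = interval * tko = 7 * 3 = 21 s -->
        <quorumd interval="7" label="quorum_disk_from_ricci1" min_score="1" tko="3" votes="1"/>
        <!-- token must cover that 21 s window with margin: >= 42000 ms -->
        <totem token="45000" token_retransmits_before_loss_const="20"/>

After changing this, check that the cluster stays quorate through a master
takeover before putting it back into production.
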
Best regards,
Tomek

On Feb 17, 2009, at 9:12 PM, Hunt, Gary wrote:

> Having an issue with my 2-node cluster. I think it is related to the
> quorum disk.
>
> It is a 2-node RHEL 5.3 cluster with a quorum disk. Virtual servers run
> on each node.
>
> Whenever node1 takes over the master role in qdisk, it loses quorum
> and restarts all the virtual servers. It does regain quorum a few
> seconds later. If node1 is already the master and I fail node2,
> things work as expected. Node2 doesn’t seem to have a problem
> taking over the master role.
>
> Whenever node1 needs to take over the master role, the cluster loses
> quorum. Here is my cluster.conf. Any suggestions on what may be
> causing this?
>
> <?xml version="1.0"?>
> <cluster alias="xencluster" config_version="13" name="xencluster">
>         <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
>         <clusternodes>
>                 <clusternode name="ricci2b.gallup.com" nodeid="2" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="ricci2b"/>
>                                 </method>
>                         </fence>
>                 </clusternode>
>                 <clusternode name="ricci1b.gallup.com" nodeid="1" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="ricci1b"/>
>                                 </method>
>                         </fence>
>                 </clusternode>
>         </clusternodes>
>         <cman expected_votes="3" two_node="0"/>
>         <fencedevices>
>                 <fencedevice agent="fence_ipmilan" ipaddr="172.30.3.110" login="xxxx" name="ricci1b" passwd="xxxxxx"/>
>                 <fencedevice agent="fence_ipmilan" ipaddr="172.30.3.140" login="xxxx" name="ricci2b" passwd="xxxxxx"/>
>         </fencedevices>
>         <rm>
>                 <failoverdomains/>
>                 <resources/>
>                 <vm autostart="1" exclusive="0" name="rhel_full" path="/xenconfigs" recovery="restart"/>
>                 <vm autostart="1" exclusive="0" name="rhel_para" path="/xenconfigs" recovery="restart"/>
>         </rm>
>         <quorumd interval="2" label="quorum_disk_from_ricci1" min_score="1" tko="3" votes="1"/>
>         <totem consensus="4800" join="60" token="12000" token_retransmits_before_loss_const="20"/>
> </cluster>
>
>
> Thanks
>
> Gary
