This happens when one node of the cluster has two interfaces on the same network segment, each with an IP in the same subnet. That node then sends cluster messages from the wrong source IP instead of the one defined in /etc/cluster/cluster.conf.
To solve the issue, shut down the IP that is not defined in /etc/cluster/cluster.conf.
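A rough sketch of the check and fix described above. The interface name eth1 is only an example here; substitute whichever NIC carries the address that is not in cluster.conf:

```shell
# 1. List IPv4 addresses per interface; look for two NICs in the same subnet:
ip -o -4 addr show

# 2. Compare against the node names/addresses cman expects:
grep clusternode /etc/cluster/cluster.conf

# 3. Bring down the interface whose IP is NOT in cluster.conf
#    (eth1 is an assumed example name):
ifdown eth1            # RHEL initscripts; or: ip link set eth1 down

# 4. To make it persistent across reboots, set ONBOOT=no in
#    /etc/sysconfig/network-scripts/ifcfg-eth1
```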
What is the exact message?
On Mon, 2010-01-25 at 17:54 +0500, Muhammad Ammad Shah wrote:
> Dear Rajat,
> I have configured a two node cluster and it is working fine for SAN (ext3
> file system). After this I configured GFS using the following commands:
> root# pvcreate /dev/sdb
> root# vgcreate -c y vg1_gfs /dev/sdc1
> root# lvcreate -n db_store -l 100%FREE vg1_gfs
> root# /etc/init.d/clvmd start
> Started on both nodes.
> root# mkfs -t gfs2 -p lock_dlm -t db_clust:db_store -j 4 /dev/vg1_gfs/db_store
> root# service gfs start
> root# chkconfig --level 345 clvmd on
> root# chkconfig --level 345 gfs on
> The problem is, as I changed the File System (ext3) resource to GFS,
> the nodes are rebooting.
> There is nothing in /var/log/messages, but when I checked the console of
> the node there was a message related to GFS:
> DLM id:0 ...