
[Linux-cluster] CLVMD hangs on 2nd node startup and hangs all gfs nodes.



I've been trying to set up a 3-server cluster with GFS mounted over iSCSI on Qemu virtual machines. A 4th server acts as the iSCSI target. I found an article that describes my issue, but I can't figure out what the solution is.

Quoted from http://kbase.redhat.com/faq/FAQ_51_10923.shtm :

    After successfully setting up a cluster, cman_tool shows the cluster is
    healthy. Mounting the gfs mount on the first node works successfully.
    However, when mounting gfs on the second node, the mount command hangs.
    Writing to a file on the first node also hangs. On the second node, the
    following error is seen in /var/log/messages:

    Jul 18 14:49:27 blade3 kernel: Lock_Harness 2.6.9-72.2 (built Apr 24 2007 12:45:55) installed
    Jul 18 14:49:27 blade3 kernel: GFS 2.6.9-72.2 (built Apr 24 2007 12:46:12) installed
    Jul 18 14:52:53 blade3 kernel: GFS: Trying to join cluster "lock_dlm", "vcomcluster:testgfs"
    Jul 18 14:52:53 blade3 kernel: Lock_DLM (built Apr 24 2007 12:45:57) installed
    Jul 18 14:52:53 blade3 kernel: dlm: connect from non cluster node
    Jul 18 14:52:53 blade3 kernel: dlm: connect from non cluster node

End quote.

My virtual machines have only one interface, so I still can't figure out why this is happening. I can successfully mount the GFS partition on any one node, but as soon as I try to start clvmd on a second node it hangs the whole cluster. I'm wondering if it's a Qemu VM network issue? Each host can ping the others by name and IP. The cluster works fine, but I can't get GFS to work on the VMs. Is it possible to debug clvmd to see what IP address it is sending?

Thanks,
Tracey Flanders
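Since "dlm: connect from non cluster node" usually means a node is connecting from an address the cluster doesn't recognize, one way to investigate is to compare, on every node, the address that the configured node name resolves to against the addresses actually bound to the interfaces. This is only a diagnostic sketch, not the official debug procedure; it assumes standard Linux tools (getent, ip) and that the node names in cluster.conf match the hostnames:

```shell
#!/bin/sh
# Diagnostic sketch: the dlm connects using the address that each node
# name in cluster.conf resolves to. If any node resolves a peer (or
# itself) to the wrong address, e.g. 127.0.0.1 from an /etc/hosts entry,
# the target sees "connect from non cluster node". Run this on every node
# and compare the output.

NODE=$(uname -n)                         # this node's name as the kernel sees it
echo "node name: $NODE"

# Address the resolver returns for our own name; the other nodes must
# be able to reach us on this address (watch for 127.0.0.1 here).
echo "resolved address:"
getent hosts "$NODE" | awk '{print $1}'

# Addresses actually configured on the interfaces, for comparison.
echo "interface addresses:"
ip -4 -o addr show 2>/dev/null | awk '{print $4}' || true
```

On the cluster side, `cman_tool status` and `cman_tool nodes` show which addresses cman itself believes the members have, so their output can be checked against the script's. (I believe clvmd also accepts a `-d` debug flag that keeps it in the foreground with verbose logging, but check `man clvmd` for your release.)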

