[Linux-cluster] GFS fence and lock servers- test setup

Raj Kumar rajkum2002 at rediffmail.com
Mon Nov 22 21:18:17 UTC 2004


Hello All,  

I am just getting started with GFS on a two-node cluster. I am able to create and mount GFS filesystems on both nodes. However, I do not completely understand how fencing and lock servers operate:
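
For context, the rough sequence I used to bring everything up was something like the following (the device, pool, cluster, and filesystem names here are just placeholders, not my real ones):

    # on both nodes: assemble the pools created earlier with pool_tool
    pool_assemble -a

    # on both nodes: start the cluster configuration daemon against the CCA device
    ccsd -d /dev/pool/alpha_cca

    # on both nodes: start lock_gulmd (node1 is the only node listed as a lock server)
    lock_gulmd

    # from one node only: make the filesystem, two journals for two nodes
    gfs_mkfs -p lock_gulm -t alpha:gfs01 -j 2 /dev/pool/pool_gfs01

    # on both nodes: mount it
    mount -t gfs /dev/pool/pool_gfs01 /gfs01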

In my setup, node1 runs the lock servers, and both node1 and node2 use the GFS filesystem. Suppose node1 is shut down while node2 is accessing the shared storage. Since only node1 runs lock servers, the cluster will soon hang. What would be the state of the cluster and of the files on the storage? If node1 is brought back online, will the cluster operate normally again? Could any of the files on the shared storage be corrupted, and if so, how would I identify such files?
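
To be concrete, the relevant part of my cluster.ccs is roughly this (the names are placeholders), i.e. a single GULM lock server running on node1 only:

    cluster {
        name = "alpha"
        lock_gulm {
            servers = ["node1"]
        }
    }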

From the manual:

8.2.3. Starting LOCK_GULM Servers

If there are hung GFS nodes, reset them before starting lock_gulmd servers. Resetting the hung GFS nodes before starting lock_gulmd servers prevents file system corruption.

I suppose this section applies to the scenario I described above. What exactly does "If there are hung GFS nodes, reset them before starting lock_gulmd servers" mean -- does it mean rebooting node2, or just killing ccsd and lock_gulmd and disassembling the pools?
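
By the latter I mean roughly this sequence on node2, without a full reboot (assuming the mount point and the daemons can still be torn down):

    umount /gfs01            # if the hung mount can still be unmounted
    killall lock_gulmd
    killall ccsd
    pool_assemble -r         # disassemble the pools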

When a node is fenced, what exact sequence of operations is performed? Is the fenced node restarted? My GFS nodes also run very important services, and restarting them can sometimes have adverse effects. What recovery operations are performed when a fenced node rejoins the cluster?
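
In case it matters, by "fenced" I mean whatever the agent configured in fence.ccs does to the node; for example, with a network power switch the entry would look roughly like this (the device name, address, and credentials below are made up):

    fence_devices {
        apc1 {
            agent = "fence_apc"
            ipaddr = "10.0.0.10"
            login = "apc"
            passwd = "apc"
        }
    }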

Can someone tell me what other issues a system administrator should be aware of when operating GFS?

Thanks in advance for your help!
Raj

