
[Linux-cluster] Configuration of a 2 node HA cluster with gfs



I have just installed the cluster package, and I am now looking for some help on how to use it :-)

I have a lot of experience with Veritas FirstWatch and some with SunCluster, so I am not new to HA services. Now I have a server that I have to get up and running as quickly as possible, and I can find little documentation on how to get this software working from a fresh install.

I have one old file server with external scsi disks, and one new server with a Nexsan ATAboy RAID array.
I want to set up the new file server as half of a 2-node cluster and get it into production. Then move over data (and disks) from the old server until I can reinstall that one as the second cluster node.


I first thought along the lines I am used to from Solaris clustering. I wanted to set up two services: an NFS service that would take its disk with it when it moved, and a Samba service that NFS-mounted the disk from the NFS service.

After looking at the Red Hat stuff I am now thinking:
- Mount the disks permanently on both nodes using GFS (less chance of nuking the file systems because of a split brain)
- Perhaps also run the NFS service permanently on both nodes, failing over only the IP address of the official NFS service. That should make failover even faster, but are there pitfalls to running multiple NFS servers off the same GFS file system? In addition to failing over the IP address, I would have to look into how to carry NFS file locks across a takeover.
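For what it's worth, here is the kind of cluster.conf I have been experimenting with for the two-node, IP-only failover idea above. This is only a sketch: the cluster name, node names, IP address, and the use of fence_manual are all placeholders I made up, not a verified configuration.

```xml
<?xml version="1.0"?>
<cluster name="example" config_version="1">
  <!-- two_node/expected_votes lets a 2-node cluster reach quorum with one node -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1.example.com" votes="1">
      <fence>
        <method name="single">
          <device name="manual" nodename="node1.example.com"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.example.com" votes="1">
      <fence>
        <method name="single">
          <device name="manual" nodename="node2.example.com"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- fence_manual only for testing; real fencing hardware is needed in production -->
    <fencedevice name="manual" agent="fence_manual"/>
  </fencedevices>
  <rm>
    <!-- only the service IP fails over; GFS stays mounted on both nodes -->
    <service name="nfs-ip" autostart="1">
      <ip address="192.0.2.10" monitor_link="1"/>
    </service>
  </rm>
</cluster>
```

Corrections welcome if the rm/service syntax is off, since I have not gotten far enough to test it.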


Can anyone 'talk me through' the steps needed to get this up and running?
I have tried to create /etc/cluster/cluster.conf, but ccsd fails with:
Failed to connect to cluster manager.
Hint: Magma plugins are not in the right spot.
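In case it helps to see what I am doing, this is roughly the startup sequence I have been trying on each node. I am guessing at the order (config daemon, then cluster manager, then fencing, then the file system) and at the init script names, device paths, and cluster/fs names, so treat everything below as an unverified example.

```
# guessed startup order on each node; service names may differ per install
service ccsd start        # config daemon, reads /etc/cluster/cluster.conf
service cman start        # join the cluster (cman_tool join under the hood)
service fenced start      # join the fence domain (fence_tool join)
service rgmanager start   # resource group manager, for the IP failover service

# create the GFS once (lock_dlm for multi-node; example names/paths),
# then mount it on both nodes
gfs_mkfs -p lock_dlm -t example:gfs01 -j 2 /dev/sdb1
mount -t gfs /dev/sdb1 /export

cman_tool status          # sanity check that the node is a cluster member
```

Is ccsd supposed to be started before or after the cluster manager? The "Magma plugins" hint makes me suspect it cannot reach cman at all.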


-- birger

