[Cluster-devel] cluster/doc usage.txt
teigland at sourceware.org
Thu Oct 5 14:20:30 UTC 2006
CVSROOT: /cvs/cluster
Module name: cluster
Changes by: teigland at sourceware.org 2006-10-05 14:20:29
Modified files:
doc : usage.txt
Log message:
updates
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/doc/usage.txt.diff?cvsroot=cluster&r1=1.33&r2=1.34
--- cluster/doc/usage.txt 2006/08/11 15:18:06 1.33
+++ cluster/doc/usage.txt 2006/10/05 14:20:29 1.34
@@ -4,48 +4,42 @@
http://sources.redhat.com/cluster/
-Get source
-----------
+Install
+-------
-Get a kernel that has GFS2 and DLM.
- git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-2.6.git
-
-Get the 'cluster' cvs tree, instructions at:
- http://sources.redhat.com/cluster/
-
-Optionally, get the LVM2 cvs from
cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2
-
-Build and install
------------------
-
-Compile kernel with GFS2, DLM, configfs, IPV6 and SCTP.
-
-Build and install the latest openais development from
+Install a Linux kernel with GFS2, DLM, configfs, IPV6 and SCTP,
+ 2.6.19-rc1 or later
+
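Before going further, it can help to confirm the kernel was built with all of the options listed above. A minimal sketch of that check; the .config fragment generated here is only for illustration (point the script at your real kernel tree's .config instead), and the CONFIG_* names follow 2.6.19-era Kconfig:

```shell
# Check a kernel .config for the options the cluster stack needs.
# The sample fragment below is hypothetical; substitute the path to
# your kernel tree's real .config.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
CONFIG_GFS2_FS=m
CONFIG_DLM=m
CONFIG_CONFIGFS_FS=m
CONFIG_IPV6=y
CONFIG_IP_SCTP=m
EOF

missing=0
for opt in GFS2_FS DLM CONFIGFS_FS IPV6 IP_SCTP; do
    if ! grep -q "^CONFIG_${opt}=[ym]" "$CONFIG"; then
        echo "missing: CONFIG_${opt}"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "all required options enabled"
rm -f "$CONFIG"
```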
+Install openais
+ get the latest "whitetank" (stable) release from
http://developer.osdl.org/dev/openais/
-or install subversion and type:
+ or
svn checkout http://svn.osdl.org/openais
-Then:
- cd /path/to/openais/branches/whitetank
+ cd openais/branches/whitetank
make; make install DESTDIR=/
-Build and install the latest libvolume_id from the udev tarball
+Install libvolume_id
+ from udev-094 or later, e.g.
http://www.us.kernel.org/pub/linux/utils/kernel/hotplug/udev-094.tar.bz2
make EXTRAS="extras/volume_id" install
-Build and install from the cluster CVS tree:
-
- cd cluster
- ./configure --kernel_src=/path/to/kernel
- make ; make install
- depmod -a
-(New kernel module dependencies aren't automatically generated)
-
-To build LVM2 & clvm:
-
- cd LVM2
- ./configure --with-clvmd=cman --with-cluster=shared
- make; make install
+Install the cluster CVS tree from source:
+ cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster login cvs
+ cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster checkout cluster
+ the password is "cvs"
+ cd cluster
+ ./configure --kernel_src=/path/to/kernel
+ make install
+ (this also builds and installs some optional components, such as gfs(1))
+
+Install LVM2/CLVM (optional)
+ cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 login cvs
+ cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 checkout LVM2
+ the password is "cvs"
+ cd LVM2
+ ./configure --with-clvmd=cman --with-cluster=shared
+ make; make install
Load kernel modules
@@ -55,7 +49,7 @@
modprobe lock_dlm
modprobe lock_nolock
modprobe dlm
-modprobe gfs
+
Configuration
-------------
@@ -73,12 +67,13 @@
If you already have a cluster.conf file with no nodeids in it, then you can
use the 'ccs_tool addnodeids' command to add them.
+
Example cluster.conf
--------------------
This is a basic cluster.conf file that uses manual fencing. The node
names should resolve to the address on the network interface you want to
-use for cman/dlm communication.
+use for openais/cman/dlm communication.
<?xml version="1.0"?>
<cluster name="alpha" config_version="1">
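The diff context elides the rest of the example file. A sketch of a complete minimal cluster.conf with manual fencing, generated and sanity-checked from the shell; the hostnames host1..host3 are hypothetical, and a real file belongs at /etc/cluster/cluster.conf:

```shell
# Write a minimal manual-fencing cluster.conf (hypothetical hostnames)
# to a temp file and count its clusternode entries.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<?xml version="1.0"?>
<cluster name="alpha" config_version="1">
  <clusternodes>
    <clusternode name="host1" nodeid="1">
      <fence><method name="single">
        <device name="human" nodename="host1"/>
      </method></fence>
    </clusternode>
    <clusternode name="host2" nodeid="2">
      <fence><method name="single">
        <device name="human" nodename="host2"/>
      </method></fence>
    </clusternode>
    <clusternode name="host3" nodeid="3">
      <fence><method name="single">
        <device name="human" nodename="host3"/>
      </method></fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="human" agent="fence_manual"/>
  </fencedevices>
</cluster>
EOF

# Every <clusternode> carries a unique nodeid, as required above.
nodes=$(grep -c '<clusternode name=' "$conf")
echo "defined $nodes nodes"
rm -f "$conf"
```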
@@ -120,7 +115,7 @@
-----------------
Run these commands on each cluster node:
-debug/verbose options in [] can be useful at this stage :)
+debug/verbose options are in []
> mount -t configfs none /sys/kernel/config
> ccsd -X
@@ -130,12 +125,10 @@
> fence_tool join
> dlm_controld [-D]
> gfs_controld [-D]
+> clvmd (optional)
> mkfs -t gfs2 -p lock_dlm -t <clustername>:<fsname> -j <#journals> <blockdev>
> mount -t gfs2 [-v] <blockdev> <mountpoint>
-> group_tool ls
- Shows registered groups, similar to what cman_tool services did.
-
Notes:
- <clustername> in mkfs should match the one in cluster.conf.
- <fsname> in mkfs is any name you pick, each fs must have a different name.
@@ -144,6 +137,12 @@
- To avoid unnecessary fencing when starting the cluster, it's best for
all nodes to join the cluster (complete cman_tool join) before any
of them do fence_tool join.
+- The cman_tool "status" and "nodes" options show the status and members
+ of the cluster.
+- The group_tool command shows all local groups, including the
+ fencing group, dlm lockspaces and gfs mounts.
+- The "cman" init script can be used for starting everything up through
+ gfs_controld in the list above.
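The startup list above can be collected into one script per node. A sketch, written to a file and only syntax-checked here, since actually executing it requires a live cluster node with the software installed; the device, mountpoint, fs name, and journal count are hypothetical:

```shell
# Startup sequence from the list above, written out rather than run.
cat > /tmp/cluster-start.sh <<'EOF'
#!/bin/sh
set -e
mount -t configfs none /sys/kernel/config
ccsd -X
cman_tool join
fence_tool join
dlm_controld
gfs_controld
# clvmd                                 # optional
# hypothetical names: cluster "alpha", fs "gfs1", 3 journals, /dev/sdb1
mkfs -t gfs2 -p lock_dlm -t alpha:gfs1 -j 3 /dev/sdb1
mount -t gfs2 /dev/sdb1 /mnt/gfs1
EOF
sh -n /tmp/cluster-start.sh && echo "startup script parses"
```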
Shutdown procedure
@@ -151,13 +150,13 @@
Run these commands on each cluster node:
-> umount -t gfs2 [-v] <mountpoint>
+> umount [-v] <mountpoint>
> fence_tool leave
> cman_tool leave
Notes:
-- You need util-linux 2.13-pre6 version of umount(8), older versions do not
- call the umount.gfs2 helper.
+- You need the util-linux 2.13-pre6 version of umount(8) or later,
+ older versions do not call the umount.gfs2 helper.
Converting from GFS1 to GFS2
@@ -167,18 +166,16 @@
this procedure:
1. Back up your entire filesystem first.
-
e.g. cp /dev/your_vg/lvol0 /your_gfs_backup
-2. Run gfs_fsck to ensure filesystem integrity.
-
+2. Run fsck to ensure filesystem integrity.
e.g. gfs2_fsck /dev/your_vg/lvol0
3. Make sure the filesystem is not mounted from any node.
-
e.g. for i in `grep "<clusternode name" /etc/cluster/cluster.conf | cut -d '"' -f2` ; do ssh $i "mount | grep gfs" ; done
4. Make sure you have the latest software versions.
-5. Run gfs2_convert <blockdev> from one of the nodes.
+5. Run gfs2_convert <blockdev> from one of the nodes.
e.g. gfs2_convert /dev/your_vg/lvol0
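Step 3's one-liner pulls node names out of cluster.conf with grep and cut before ssh-ing to each one. A self-contained sketch of just that extraction, run against a throwaway sample file (hostnames hypothetical), with the ssh check echoed instead of executed:

```shell
# Extract clusternode names the same way step 3's loop does,
# using a sample cluster.conf fragment (hypothetical hostnames).
conf=$(mktemp)
cat > "$conf" <<'EOF'
<clusternode name="host1" nodeid="1">
<clusternode name="host2" nodeid="2">
EOF
names=$(grep '<clusternode name' "$conf" | cut -d '"' -f2)
for i in $names; do
    echo "would run: ssh $i 'mount | grep gfs'"
done
rm -f "$conf"
```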
+