
[Cluster-devel] cluster ccs/man/ccs.7 ccs/man/ccs_test.8 ccs/m ...



CVSROOT:	/cvs/cluster
Module name:	cluster
Branch: 	RHEL5
Changes by:	teigland sourceware org	2007-08-22 14:15:23

Modified files:
	ccs/man        : ccs.7 ccs_test.8 ccs_tool.8 ccsd.8 
	                 cluster.conf.5 
	dlm/man        : Makefile dlm_create_lockspace.3 dlm_lock.3 
	                 dlm_unlock.3 libdlm.3 
	fence/man      : fence.8 fence_tool.8 fenced.8 
	group          : Makefile 
Added files:
	dlm/man        : dlm_tool.8 
	group/man      : Makefile dlm_controld.8 gfs_controld.8 
	                 group_tool.8 groupd.8 

Log message:
	add and update man pages for cluster infrastructure

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/ccs/man/ccs.7.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.3&r2=1.3.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/ccs/man/ccs_test.8.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.5&r2=1.5.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/ccs/man/ccs_tool.8.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.3&r2=1.3.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/ccs/man/ccsd.8.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.7&r2=1.7.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/ccs/man/cluster.conf.5.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.5.2.2&r2=1.5.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/dlm/man/dlm_tool.8.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/dlm/man/Makefile.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.3.2.1&r2=1.3.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/dlm/man/dlm_create_lockspace.3.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.3.2.1&r2=1.3.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/dlm/man/dlm_lock.3.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.3.2.1&r2=1.3.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/dlm/man/dlm_unlock.3.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.3.2.1&r2=1.3.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/dlm/man/libdlm.3.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.2.2.1&r2=1.2.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/fence/man/fence.8.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.4.2.1&r2=1.4.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/fence/man/fence_tool.8.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.6.2.1&r2=1.6.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/fence/man/fenced.8.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.3&r2=1.3.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/Makefile.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.7&r2=1.7.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/man/Makefile.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/man/dlm_controld.8.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/man/gfs_controld.8.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/man/group_tool.8.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/man/groupd.8.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=NONE&r2=1.1.2.1

--- cluster/ccs/man/ccs.7	2005/02/16 18:44:09	1.3
+++ cluster/ccs/man/ccs.7	2007/08/22 14:15:22	1.3.2.1
@@ -1,4 +1,4 @@
-.\"  Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
+.\"  Copyright (C) 2004-2007 Red Hat, Inc.  All rights reserved.
 .\"  
 .\"  This copyrighted material is made available to anyone wishing to use,
 .\"  modify, copy, or redistribute it subject to the terms and conditions
@@ -10,41 +10,19 @@
 ccs - Cluster Configuration System
 
 .SH DESCRIPTION
-A cluster environment that shares resources has information that is
-essential to correct operation which must be available to
-every node in the cluster.  This information may include:
-names of the nodes composing the cluster, I/O fencing methods, and
-more.  \fICCS\fP is the system that makes it possible for the nodes in a
-cluster to retrieve the information they need.
-
-.SH OVERVIEW
-The following is a generic description of the steps one should take to produce
-a working CCS environment.
-
-.SS Step 1
-Choose a cluster name.  It is important to determine
-a name for the cluster before starting.  The cluster name is what
-binds a machine to specific resources that can only be shared by
-machines that are members of the same cluster name.
-
-.SS Step 2
-Create the directory \fI/etc/cluster\fP.
-
-.SS Step 3
-Create the \fI/etc/cluster/cluster.conf\fP file, according to the
-\fBcluster.conf(5)\fP man page, on one node in your cluster.
-
-.SS Step 4
-Start \fBccsd\fP and test the cluster.conf file by using \fBccs_test\fP.
-If you haven't started a cluster manager yet, you should use the 'force'
-option to \fBccs_test\fP - see the \fBccs_test(8)\fP man page for more info.
-
-If a failure occurs while parsing the config file, \fBccs_test\fP should
-report "ccs_connect failed: No data available" and /var/log/messages
-should report "Unable to parse /etc/cluster/cluster.conf".
 
-.SH FORMAT OF THE CCS FILE
-See \fBcluster.conf(5)\fP
+CCS is the system that manages the /etc/cluster/cluster.conf file on
+cluster nodes.  The primary users of cluster.conf are the cman cluster
+manager, the fenced fencing daemon, and rgmanager, which manages H/A
+services.
+
+libccs is the API used by the programs and daemons above to read
+cluster.conf information.  libccs requests go through the ccsd daemon.
+ccsd needs to be started before the cman cluster manager is started.
+
+The ccs_test program can be used to test the ccs system and read
+cluster.conf values.
 
 .SH SEE ALSO
-ccsd(8), ccs_tool(8), ccs_test(8), cluster.conf(5)
+ccsd(8), ccs_tool(8), ccs_test(8), cluster.conf(5), cman(5)
+
--- cluster/ccs/man/ccs_test.8	2005/02/16 18:44:09	1.5
+++ cluster/ccs/man/ccs_test.8	2007/08/22 14:15:22	1.5.2.1
@@ -1,4 +1,4 @@
-.\"  Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
+.\"  Copyright (C) 2004-2007 Red Hat, Inc.  All rights reserved.
 .\"  
 .\"  This copyrighted material is made available to anyone wishing to use,
 .\"  modify, copy, or redistribute it subject to the terms and conditions
@@ -7,7 +7,7 @@
 .TH ccs_test 8
 
 .SH NAME
-ccs_test - The diagnostic tool for a running Cluster Configuration System.
+ccs_test - CCS daemon (ccsd) diagnostic tool
 
 .SH SYNOPSIS
 .B ccs_test
@@ -15,10 +15,8 @@
 <\fBcommand\fP>
 
 .SH DESCRIPTION
-\fBccs_test\fP is part of the Cluster Configuration System (CCS).  It
-is a diagnostic tool designed to validate the correct operation of a
-running CCS system.  It will communicate with the CCS daemon - \fBccsd\fP -
-to obtain any information stored in the system.
+\fBccs_test\fP is part of the Cluster Configuration System (CCS).  It is a
+diagnostic tool that reads cluster.conf information to test ccsd.
 
 .SH OPTIONS
 .TP
--- cluster/ccs/man/ccs_tool.8	2005/05/05 10:30:35	1.3
+++ cluster/ccs/man/ccs_tool.8	2007/08/22 14:15:22	1.3.2.1
@@ -14,10 +14,11 @@
 [\fIOPTION\fR].. <\fBcommand\fP>
 
 .SH "DESCRIPTION"
-\fBccs_tool\fP is part of the Cluster Configuration System (CCS).  It
-is used to make online updates of CCS config files.  Additionally, it
-can be used to upgrade old style (GFS <= 6.0) CCS archives to the new
-xml format.
+
+\fBccs_tool\fP is part of the Cluster Configuration System (CCS).  It is
+used to make online updates to cluster.conf.  It can also be used to
+upgrade old style (GFS <= 6.0) CCS archives to the new xml cluster.conf
+format.
 
 .SH "OPTIONS"
 .TP 
@@ -32,14 +33,9 @@
 .TP 
 \fBupdate\fP \fI<xml file>\fP
 This command is used to update the config file that ccsd is working with
-while the cluster is operational (i.e. online).  Run this on a single
-machine to update all instances of ccsd across the cluster.
-
-If you are using 'cman' as your cluster manager, you will also need to
-run \fBcman_tool version \-r <new version number>\fP once the update is
-complete.  Failure to do so will result in new nodes (or nodes rejoining
-after a failure) not being allowed
-to join the working set due to version number mismatches.
+while the cman cluster is operational (i.e. online).  Run this on a single
+machine to update cluster.conf on all current cluster members.  This also
+notifies cman of the new config version.
 
 .TP 
 \fBupgrade\fP \fI<location>\fP
@@ -183,6 +179,14 @@
 .br
 \-c <file>         Config file to create. Defaults to /etc/cluster/cluster.conf
 
+.TP 
+\fBaddnodeids\fP
+Adds node ID numbers to all the nodes in cluster.conf. In RHEL4, node IDs were optional
+and assigned by cman when a node joined the cluster. In RHEL5 they must be pre-assigned
+in cluster.conf. This command will not change any node IDs that are already set in
+cluster.conf; it simply adds unique node ID numbers to nodes that do not already
+have them.
+
 
 .SH "SEE ALSO"
 ccs(7), ccsd(8), cluster.conf(5)
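For quick reference, typical invocations of the two commands documented in the updated page above might look like the following sketch (the file path is the default; a running cman cluster is assumed):

```
# push an edited cluster.conf to all current cluster members and
# notify cman of the new config version
ccs_tool update /etc/cluster/cluster.conf

# add unique node ID numbers to any clusternode entries lacking them
ccs_tool addnodeids
```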
--- cluster/ccs/man/ccsd.8	2006/01/11 16:00:55	1.7
+++ cluster/ccs/man/ccsd.8	2007/08/22 14:15:22	1.7.2.1
@@ -1,5 +1,5 @@
 .\"  Copyright (C) Sistina Software, Inc.  1997-2003  All rights reserved.
-.\"  Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
+.\"  Copyright (C) 2004-2007 Red Hat, Inc.  All rights reserved.
 .\"  
 .\"  This copyrighted material is made available to anyone wishing to use,
 .\"  modify, copy, or redistribute it subject to the terms and conditions
@@ -8,19 +8,27 @@
 .TH ccsd 8
 
 .SH NAME
-ccsd - The daemon used to access CCS cluster configuration files.
+ccsd - manages the /etc/cluster/cluster.conf file
 
 .SH SYNOPSIS
 .B ccsd
 [\fIOPTION\fR]..
 
 .SH DESCRIPTION
-\fBccsd\fP is part of the Cluster Configuration System (CCS).  It is the
-daemon which accesses cluster the configuration file for other cluster
-applications.  It must be run on each node that wishes to join a cluster.
+
+\fBccsd\fP is part of the Cluster Configuration System (CCS) and manages
+the cluster.conf file in a cman cluster.  It handles requests for
+cluster.conf information made through libccs.  It also keeps the
+cluster.conf file in sync among cluster nodes based on the value of
+cluster.conf:cluster/config_version.  ccsd may replace the local
+cluster.conf file if it discovers a newer version on another node.
 
 .SH OPTIONS
 .TP
+\fB-X\fP
+Disable all cluster manager (cman) and inter-node interactions. Simply
+respond to local libccs requests based on the current cluster.conf file.
+.TP
 \fB-4\fP
 Use IPv4 for inter-node communication.  By default, IPv6 is tried, then IPv4.
 .TP
@@ -66,4 +74,5 @@
 Print the version information.
 
 .SH SEE ALSO
-ccs(7), ccs_tool(8), ccs_test(8), cluster.conf(5)
+ccs(7), cman(5), ccs_tool(8), ccs_test(8), cluster.conf(5)
+
--- cluster/ccs/man/cluster.conf.5	2006/11/29 17:05:19	1.5.2.2
+++ cluster/ccs/man/cluster.conf.5	2007/08/22 14:15:22	1.5.2.3
@@ -1,222 +1,65 @@
 .\"
 .\"  Copyright 2001-2003 Sistina Software, Inc.
-.\"  Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
+.\"  Copyright (C) 2004-2007 Red Hat, Inc.  All rights reserved.
 
 .TH cluster.conf 5
 
 .SH NAME
-cluster.conf - The configuration file for cluster products
+cluster.conf - configuration file for cman, fence, dlm, gfs, rgmanager
 
 .SH DESCRIPTION
+
 The \fBcluster.conf\fP file is located in the /etc/cluster directory.  It
-is the source of information used by the cluster products - accessed
-indirectly through CCS (see \fBccs(7)\fP).  This file contains all the
-information needed for the cluster to operate, such as: what nodes are in
-the cluster and how to I/O fence those nodes.  There is generic information
-that is applicable to all cluster infrastructures, as well as specific
-information relevant for specific cluster products.
-
-This man page describes the generic contents of the \fBcluster.conf\fP file.
-The product specific sections of \fBcluster.conf\fP are left to their
-respective man pages.  For example, after constructing the generic content,
-a user should look at the \fBcman(5)\fP man page for further information
-about the \fBcman\fP section of cluster.conf.
-
-The \fBcluster.conf\fP file is an XML file.  It has one encompassing section
-in which everything is contained.  That entity's name is \fIcluster\fP and it
-has two mandatory attributes: \fIname\fP and \fIconfig_version\fP.  The
-\fIname\fP attribute specifies the name of the cluster.  It is important
-that this name is unique from other clusters the user might set up.  The
-\fIconfig_version\fP attribute is a number used to identify the revision
-level of the \fBcluster.conf\fP file.  Given this information, your
+is the source of information used by cman, fence, dlm, gfs and rgmanager.
+It's accessed indirectly through libccs (see \fBccs(7)\fP).  This file
+contains all the information needed for the cluster to operate, such as
+what nodes are in the cluster and how to I/O fence those nodes.
+
+This man page describes the generic contents of the \fBcluster.conf\fP
+file.  For other information see man pages for cman(5), fenced(8) and
+dlm_controld(8).
+
+\fBcluster.conf\fP is an XML file.  It has one top-level \fIcluster\fP
+section containing everything else.  The cluster section has two mandatory
+attributes: \fIname\fP and \fIconfig_version\fP.  \fIname\fP can be up to
+16 characters long and specifies the name of the cluster.  It is important
+that this name is unique from other clusters the user might set up.
+\fIconfig_version\fP is a number used to identify the revision level of
+the \fBcluster.conf\fP file.  Given this information, your
 \fBcluster.conf\fP file might look something like:
 
-<cluster name="alpha" config_version="1">
+  <cluster name="alpha" config_version="1">
+  </cluster>
 
-</cluster>
+.SS Nodes
 
-You should specify a <cman/> tag even if no special cman parameters
-are needed for the cluster.
+The set of nodes that make up the cluster are defined under the
+\fIclusternodes\fP section.  A \fIclusternode\fP section defines each
+node.  A clusternode has two mandatory attributes:
+.I name
+and
+.I nodeid
+
+The name should correspond to the hostname (the fully qualified name is
+generally not necessary) of the network interface to be used for cluster
+communication.  Nodeids must be greater than zero and unique.
+
+  <cluster name="alpha" config_version="1">
+          <clusternodes>
+                  <clusternode name="node-01" nodeid="1">
+                  </clusternode>
+
+                  <clusternode name="node-02" nodeid="2">
+                  </clusternode>
+
+                  <clusternode name="node-03" nodeid="3">
+                  </clusternode>
+          </clusternodes>
+  </cluster>
 
-A mandatory subsection of \fIcluster\fP is \fIfencedevices\fP.  It contains
-all of the I/O fencing devices at the disposal of the cluster.  The I/O
-fencing devices are listed as entities designated as \fIfencedevice\fP and have
-attributes that describe the particular fencing device.  For example:
-
-  <fencedevices>
-    <fencedevice name="apc" agent="fence_apc"
-            ipaddr="apc_1" login="apc" passwd="apc"/>
-  </fencedevices>
-
-Concerning the \fIfencedevice\fP entity, the \fIname\fP and \fIagent\fP attributes
-must be specified for all I/O fence devices.  The remaining attributes are
-device specific and are used to specify the necessary information to
-access the device.  The \fIname\fP attribute must be unique and is used to
-reference the I/O fence device in other sections of the \fBcluster.conf\fP file.  The \fIagent\fP attribute is used to specify the binary fence agent program used to communicate with the particular device.  Your \fBcluster.conf\fP file might now look something like:
-
-<cluster name="alpha" config_version="1">
-  <cman/>
-  <fencedevices>
-    <fencedevice name="apc" agent="fence_apc"
-            ipaddr="apc_1" login="apc" passwd="apc"/>
-
-    <fencedevice name="brocade" agent="fence_brocade"
-            ipaddr="brocade_1" login="bro" passwd="bro"/>
-
-    <!-- The WTI fence device requires no login name -->
-    <fencedevice name="wti" agent="fence_wti"
-            ipaddr="wti_1" passwd="wti"/>
-
-    <fencedevice name="last_resort" agent="fence_manual"/>
-  </fencedevices>
-</cluster>
-
-The final mandatory subsection of \fIcluster\fP is \fIclusternodes\fP.  It contains
-the individual specification of all the machines (members) in the cluster.
-Each machine has its own section, \fIclusternode\fP, which has the \fIname\fP
-attribute - this should be the name of the machine.  Each machine should be
-given a unique node id number with the option \fInodeid\fP attribute.
-For example, nodeid="3".  The \fIclusternode\fP section
-also contains the \fIfence\fP section.  Not to be confused with \fIfencedevices\fP the \fIfence\fP section is used to specify all the possible "methods" for
-fencing a particular machine, as well as the device used to perform that method
-and the machine specific parameters necessary.  By example, the \fIclusternodes\fP
-section may look as follows:
-
-  <!-- This example only contains one machine -->
-  <clusternodes>
-    <clusternode name="nd01" nodeid="1">
-      <fence>
-        <!-- "power" method is tried before all others -->
-        <method name="power">
-          <device name="apc" switch="1" port="1"/>
-        </method>
-        <!-- If the "power" method fails,
-             try fencing through the "fabric" -->
-        <method name="fabric">
-          <device name="brocade" port="1"/>
-        </method>
-	<!-- If all else fails,
-             make someone do it manually -->
-        <method name="human">
-          <device name="last_resort" ipaddr="nd01"/>
-        </method>
-      </fence>
-    </clusternode>
-  </clusternodes>  
-
-Putting it all together, a three node cluster's \fBcluster.conf\fP file
-might look like:
-
-
-<cluster name="example" config_version="1">
-  <cman/>
-  <clusternodes>
-    <clusternode name="nd01" nodeid="1">
-      <fence>
-        <!-- "power" method is tried before all others -->
-        <method name="power">
-          <device name="apc" switch="1" port="1"/>
-        </method>
-        <!-- If the "power" method fails,
-             try fencing through the "fabric" -->
-        <method name="fabric">
-          <device name="brocade" port="1"/>
-        </method>
-	<!-- If all else fails,
-             make someone do it manually -->
-        <method name="human">
-          <device name="last_resort" ipaddr="nd01"/>
-        </method>
-      </fence>
-    </clusternode>
-    <clusternode name="nd02" nodeid="2">
-      <fence>
-        <!-- "power" method is tried before all others -->
-        <method name="power">
-          <device name="apc" switch="1" port="2"/>
-        </method>
-        <!-- If the "power" method fails,
-             try fencing through the "fabric" -->
-        <method name="fabric">
-          <device name="brocade" port="2"/>
-        </method>
-	<!-- If all else fails,
-             make someone do it manually -->
-        <method name="human">
-          <device name="last_resort" ipaddr="nd02"/>
-        </method>
-      </fence>
-    </clusternode>
-    <clusternode name="nd11" nodeid="3">
-      <fence>
-        <!-- "power" method is tried before all others -->
-        <method name="power">
-          <!-- This machine has 2 power supplies -->
-          <device name="apc" switch="2" port="1"/>
-          <device name="wti" port="1"/>
-        </method>
-        <!-- If the "power" method fails,
-             try fencing through the "fabric" -->
-        <method name="fabric">
-          <device name="brocade" port="11"/>
-        </method>
-	<!-- If all else fails,
-             make someone do it manually -->
-        <method name="human">
-          <device name="last_resort" ipaddr="nd11"/>
-        </method>
-      </fence>
-    </clusternode>
-  </clusternodes>  
-
-  <fencedevices>
-    <fencedevice name="apc" agent="fence_apc"
-            ipaddr="apc_1" login="apc" passwd="apc"/>
-
-    <fencedevice name="brocade" agent="fence_brocade"
-            ipaddr="brocade_1" login="bro" passwd="bro"/>
-
-    <!-- The WTI fence device requires no login name -->
-    <fencedevice name="wti" agent="fence_wti"
-            ipaddr="wti_1" passwd="wti"/>
-
-    <fencedevice name="last_resort" agent="fence_manual"/>
-  </fencedevices>
-</cluster>
-
-\fBSpecial two-node cluster options:\fP
-
-Two-node clusters have special options in cluster.conf because they need to
-decide quorum between them without a majority of votes.  These options are
-placed with the <cman/> tag.  For example:
-
-
-  <cman two_node="1" expected_votes="1"/>
-
-
-\fBValidating your cluster.conf file:\fP
-
-While cluster.conf files produced by the system-config-cluster GUI are pretty
-certain to be well-formed, it is convenient to have a way to validate legacy
-configuration files, or files that were produced by hand in an editor. If you
-have the system-config-cluster GUI, you can validate a cluster.conf file with
-this command:
-
-xmllint --relaxng /usr/share/system-config-cluster/misc/cluster.ng /etc/cluster/cluster.conf
-
-If validation errors are detected in your conf file, the first place to start
-is with the first error.  Sometimes addressing the first error will remove 
-all error messages. Another good troubleshooting approach is to comment out 
-sections of the conf file.  For example, it is okay to have nothing beneath 
-the <rm> tag.  If you have services, failoverdomains and resources defined 
-there, temporarily comment them all out and rerun xmllint to see if the
-problems go away.  This may help you locate the problem.  Errors that 
-contain the string IDREF mean that an attribute value is supposed to be
-shared two places in the file, and that no other instance of the name string
-could be located. Finally, the most common problem with hand-edited 
-cluster.conf files is spelling errors. Check your attribute and tag names
-carefully.
+The next step in completing cluster.conf is adding fencing information;
+see fenced(8).
 
 .SH SEE ALSO
-ccs(7), ccs_tool(8), cman(5)
+ccs(7), ccs_tool(8), ccsd(8), cman(5), fenced(8), dlm_controld(8)
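The rewritten page defers fencing configuration to fenced(8); for quick reference, a minimal per-node fencing layout (device name and parameters are illustrative, following the example removed above) looks like:

```xml
<cluster name="alpha" config_version="1">
  <clusternodes>
    <clusternode name="node-01" nodeid="1">
      <fence>
        <method name="power">
          <device name="apc" port="1"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="apc" agent="fence_apc"
            ipaddr="apc_1" login="apc" passwd="apc"/>
  </fencedevices>
</cluster>
```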
 
/cvs/cluster/cluster/dlm/man/dlm_tool.8,v  -->  standard output
revision 1.1.2.1
--- cluster/dlm/man/dlm_tool.8
+++ -	2007-08-22 14:15:23.867345000 +0000
@@ -0,0 +1,45 @@
+.\"  Copyright (C) 2007 Red Hat, Inc.  All rights reserved.
+.\"  
+.\"  This copyrighted material is made available to anyone wishing to use,
+.\"  modify, copy, or redistribute it subject to the terms and conditions
+.\"  of the GNU General Public License v.2.
+
+.TH dlm_tool 8
+
+.SH NAME
+dlm_tool - A program to join and leave lockspaces and display dlm information
+
+.SH SYNOPSIS
+.B
+dlm_tool
+[\fIOPTIONS\fR]
+<\fBjoin | leave | lockdump | deadlock_check\fP>
+<\fBname\fP>
+
+.SH DESCRIPTION
+
+\fBdlm_tool\fP is a program used to join or leave dlm lockspaces, dump
+dlm lock state, and initiate deadlock detection cycles.  The name of a
+lockspace follows the subcommand.
+
+.SH OPTIONS
+.TP
+\fB-m\fP
+The permission mode (in octal) of the lockspace device created by join;
+default 0600.
+.TP
+\fB-M\fP
+Dump MSTCPY locks in addition to locks held by local processes.
+.TP
+\fB-d\fP <num>
+Resource directory enabled (1) or disabled (0) during join; default 0.
+.TP
+\fB-h\fP
+Help.  Print out the usage syntax.
+.TP
+\fB-V\fP
+Print version information.
+
+.SH SEE ALSO
+libdlm(3)
+
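A sketch of typical dlm_tool usage per the synopsis above (the lockspace name "myls" is illustrative; a running dlm is assumed):

```
dlm_tool join myls           # join/create the lockspace, device mode 0600
dlm_tool -m 0660 join myls   # join with a group-accessible device
dlm_tool lockdump myls       # dump locks held by local processes
dlm_tool -M lockdump myls    # also include MSTCPY (master copy) locks
dlm_tool leave myls
```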
--- cluster/dlm/man/Makefile	2007/08/03 10:27:53	1.3.2.1
+++ cluster/dlm/man/Makefile	2007/08/22 14:15:22	1.3.2.2
@@ -12,19 +12,25 @@
 
 include ../make/defines.mk
 
-TARGETS = dlm_cleanup.3           dlm_lock_wait.3        dlm_open_lockspace.3    \
+TARGET3 = dlm_cleanup.3           dlm_lock_wait.3        dlm_open_lockspace.3    \
 	  dlm_close_lockspace.3   dlm_ls_lock.3          dlm_pthread_init.3      \
 	  dlm_create_lockspace.3  dlm_ls_lock_wait.3     dlm_release_lockspace.3 \
 	  dlm_dispatch.3          dlm_ls_pthread_init.3  dlm_unlock.3		 \
 	  dlm_get_fd.3            dlm_ls_unlock.3        dlm_unlock_wait.3	 \
 	  dlm_lock.3              dlm_ls_unlock_wait.3   libdlm.3
 
+TARGET8 = dlm_tool.8
 
 all:
 
+clean:
+
 install:
 	install -d ${mandir}/man3
-	install ${TARGETS} ${mandir}/man3
+	install ${TARGET3} ${mandir}/man3
+	install -d ${mandir}/man8
+	install ${TARGET8} ${mandir}/man8
 
 uninstall:
-	${UNINSTALL} ${TARGETS} ${mandir}/man3
+	${UNINSTALL} ${TARGET3} ${mandir}/man3
+	${UNINSTALL} ${TARGET8} ${mandir}/man8
--- cluster/dlm/man/dlm_create_lockspace.3	2007/08/03 10:27:53	1.3.2.1
+++ cluster/dlm/man/dlm_create_lockspace.3	2007/08/22 14:15:22	1.3.2.2
@@ -1,26 +1,28 @@
 .TH DLM_CREATE_LOCKSPACE 3 "July 5, 2007" "libdlm functions"
 .SH NAME
-dlm_create_lockspace, dlm_open_lockspace, dlm_close_lockspace, dlm_releas_lockspace \- manipulate DLM lockspaces
+dlm_create_lockspace, dlm_open_lockspace, dlm_close_lockspace, dlm_release_lockspace \- manipulate DLM lockspaces
 .SH SYNOPSIS
 .nf
  #include <libdlm.h>
 
 dlm_lshandle_t dlm_create_lockspace(const char *name, mode_t mode);
-dlm_lshandle_t dlm_new_lockspace(const char *name, mode_t mode, uint32_t flags);
+dlm_lshandle_t dlm_new_lockspace(const char *name, mode_t mode,
+                                 uint32_t flags);
 dlm_lshandle_t dlm_open_lockspace(const char *name);
-int dlm_close_lockspace(dlm_lshandle_t lockspace);
-int dlm_release_lockspace(const char *name, dlm_lshandle_t lockspace, int force)
+int dlm_close_lockspace(dlm_lshandle_t ls);
+int dlm_release_lockspace(const char *name, dlm_lshandle_t ls,
+                          int force);
 
 .fi
 .SH DESCRIPTION
 The DLM allows locks to be partitioned into "lockspaces", and these can be manipulated by userspace calls. It is possible (though not recommended) for an application to have multiple lockspaces open at one time. 
 
-Many of the DLM calls work on the "default" lockspace, which should be fine for most users. The calls with _ls_ in them allow you to isolate your application from all others running in the cluster. Remember, lockspaces are a cluster-wide resource, so if you create a lockspace called "myls" it will share locks with a lockspace called "myls" on all nodes. These calls allow users to create & remove lockspaces, and users to connecto to existing lockspace to store their locks there.
+Many of the DLM calls work on the "default" lockspace, which should be fine for most users. The calls with _ls_ in them allow you to isolate your application from all others running in the cluster. Remember, lockspaces are a cluster-wide resource, so if you create a lockspace called "myls" it will share locks with a lockspace called "myls" on all nodes. These calls allow users to create and remove lockspaces, and to connect to an existing lockspace to store their locks there.
 .PP
 .SS
 dlm_lshandle_t dlm_create_lockspace(const char *name, mode_t mode);
 .br
-This creates a lockspace called <name> and the mode of the file user to access it will be <mode> (subject to umask as usual). The lockspace must not already exist on this node, if it does -1 will be returned and errno will be set to EEXIST. If you really want to use this lockspace you can then user dlm_open_lockspace() below. The name is the name of a misc device that will be created in /dev/misc.
+This creates a lockspace called <name>; the mode of the file used to access it will be <mode> (subject to umask as usual). The lockspace must not already exist on this node; if it does, -1 will be returned and errno will be set to EEXIST. If you really want to use this lockspace you can then use dlm_open_lockspace() below. The name is the name of a misc device that will be created in /dev/misc.
 .br
 On success a handle to the lockspace is returned, which can be used to pass into subsequent dlm_ls_lock/unlock calls. Make no assumptions about the content of this handle, as it may change in future.
 .br
@@ -37,17 +39,19 @@
 Any error returned by the open() system call
 .fi
 .SS
-int dlm_new_lockspace(const char *name, mode_t mode, uint32_t flags);
+int dlm_new_lockspace(const char *name, mode_t mode, uint32_t flags)
 .PP
 Performs the same function as 
 .B dlm_create_lockspace()
 above, but passes some creation flags to the call that affect the lockspace being created. Currently supported flags are:
 .nf
-DLM_LSFL_NODIR         
-DLM_LSFL_TIMEWARN
+DLM_LSFL_NODIR    the lockspace should not use a resource directory
+DLM_LSFL_TIMEWARN the dlm should emit warnings over netlink when locks
+                  have been waiting too long; required for deadlock
+                  detection
 .fi
 .SS
-int dlm_release_lockspace(const char *name, dlm_lshandle_t lockspace, int force)
+int dlm_release_lockspace(const char *name, dlm_lshandle_t ls, int force)
 .PP
 Deletes a lockspace. If the lockspace still has active locks then -1 will be returned and errno set to EBUSY. Both the lockspace handle /and/ the name must be specified. This call also closes the lockspace and stops the thread associated with the lockspace, if any.
 .br
@@ -60,25 +64,23 @@
 .nf
 EINVAL          An invalid parameter was passed to the call
 EPERM           Process does not have capability to release lockspaces
-EBUSY           The lockspace could not be freed because it still contains locks
-                and force was not set.
+EBUSY           The lockspace could not be freed because it still
+                contains locks and force was not set.
 .fi
 
 .SS
 dlm_lshandle_t dlm_open_lockspace(const char *name)
 .PP
 Opens an already existing lockspace and returns a handle to it.
-.br
+.PP
 Return codes:
-.br
 0 is returned if the call completed successfully. If not, -1 is returned and errno is set to an error returned by the open() system call
 .SS
-int dlm_close_lockspace(dlm_lshandle_t lockspace)
+int dlm_close_lockspace(dlm_lshandle_t ls)
 .br
 Close the lockspace. Any locks held by this process will be freed. If a thread is associated with this lockspace then it will be stopped.
 .PP
 Return codes:
-.br
 0 is returned if the call completed successfully. If not, -1 is returned and errno is set to one of the following:
 .nf
 EINVAL		lockspace was not a valid lockspace handle
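Taken together, the lockspace calls above are typically used in a pattern like this compile-only sketch (assumes libdlm headers and a running dlm; the lockspace name "myls" is illustrative and error handling is abbreviated):

```c
#include <stdio.h>
#include <libdlm.h>

int main(void)
{
	/* create the lockspace, or open it if it already exists (EEXIST) */
	dlm_lshandle_t ls = dlm_create_lockspace("myls", 0600);
	if (!ls)
		ls = dlm_open_lockspace("myls");
	if (!ls) {
		perror("lockspace");
		return 1;
	}

	/* ... dlm_ls_lock()/dlm_ls_unlock() calls go here ... */

	/* release: handle /and/ name are both required; force=0 fails
	   with EBUSY if locks are still held */
	if (dlm_release_lockspace("myls", ls, 0) < 0)
		perror("dlm_release_lockspace");
	return 0;
}
```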
--- cluster/dlm/man/dlm_lock.3	2007/08/03 10:27:53	1.3.2.1
+++ cluster/dlm/man/dlm_lock.3	2007/08/22 14:15:22	1.3.2.2
@@ -6,61 +6,61 @@
  #include <libdlm.h>
 
 int dlm_lock(uint32_t mode,
-	     struct dlm_lksb *lksb,	
-	     uint32_t flags,	
-	     const void *name,	
-	     unsigned int namelen,
-	     uint32_t parent,
-	     void (*astaddr) (void *astarg),
-	     void *astarg,
-	     void (*bastaddr) (void *astarg),
-	     struct dlm_range *range);
+		struct dlm_lksb *lksb,	
+		uint32_t flags,	
+		const void *name,	
+		unsigned int namelen,
+		uint32_t parent,		/* unused */
+		void (*astaddr) (void *astarg),
+		void *astarg,
+		void (*bastaddr) (void *astarg),
+		void *range);			/* unused */
 
 int dlm_lock_wait(uint32_t mode,
-                  struct dlm_lksb *lksb,
-                  uint32_t flags,
-                  const void *name,
-                  unsigned int namelen,
-                  uint32_t parent,
-                  void *bastarg,
-                  void (*bastaddr) (void *bastarg),
-                  void *range);
+		struct dlm_lksb *lksb,
+		uint32_t flags,
+		const void *name,
+		unsigned int namelen,
+		uint32_t parent,		/* unused */
+		void *bastarg,
+		void (*bastaddr) (void *bastarg),
+		void *range);			/* unused */
 
 int dlm_ls_lock(dlm_lshandle_t lockspace,
-                uint32_t mode,
-                struct dlm_lksb *lksb,
-                uint32_t flags,
-                const void *name,
-                unsigned int namelen,
-                uint32_t parent,
-                void (*astaddr) (void *astarg),
-                void *astarg,
-                void (*bastaddr) (void *astarg),
-                void *range);
+		uint32_t mode,
+		struct dlm_lksb *lksb,
+		uint32_t flags,
+		const void *name,
+		unsigned int namelen,
+		uint32_t parent,		/* unused */
+		void (*astaddr) (void *astarg),
+		void *astarg,
+		void (*bastaddr) (void *astarg),
+		void *range);			/* unused */
 
 int dlm_ls_lock_wait(dlm_lshandle_t lockspace,
-                     uint32_t mode,
-                     struct dlm_lksb *lksb,
-                     uint32_t flags,
-                     const void *name,
-                     unsigned int namelen,
-                     uint32_t parent,
-                     void *bastarg,
-                     void (*bastaddr) (void *bastarg),
-                     void *range);
+		uint32_t mode,
+		struct dlm_lksb *lksb,
+		uint32_t flags,
+		const void *name,
+		unsigned int namelen,
+		uint32_t parent,		/* unused */
+		void *bastarg,
+		void (*bastaddr) (void *bastarg),
+		void *range);			/* unused */
 
 int dlm_ls_lockx(dlm_lshandle_t lockspace,
-                 uint32_t mode,
-                 struct dlm_lksb *lksb,
-                 uint32_t flags,
-                 const void *name,
-                 unsigned int namelen,
-                 uint32_t parent,
-                 void (*astaddr) (void *astarg),
-                 void *astarg,
-                 void (*bastaddr) (void *astarg),
-                 uint64_t *xid,
-                 uint64_t *timeout);
+		uint32_t mode,
+		struct dlm_lksb *lksb,
+		uint32_t flags,
+		const void *name,
+		unsigned int namelen,
+		uint32_t parent,		/* unused */
+		void (*astaddr) (void *astarg),
+		void *astarg,
+		void (*bastaddr) (void *astarg),
+		uint64_t *xid,
+		uint64_t *timeout);
 
 
 
@@ -98,18 +98,29 @@
 .B flags
 Affect the operation of the lock call:
 .nf
-  LKF_NOQUEUE     Don't queue the lock. If it cannot be granted return -EAGAIN
+  LKF_NOQUEUE     Don't queue the lock. If it cannot be granted return
+                  -EAGAIN
   LKF_CONVERT     Convert an existing lock
   LKF_VALBLK      Lock has a value block
   LKF_QUECVT      Put conversion to the back of the queue
-  LKF_EXPEDITE    Grant a NL lock immediately regardless of other locks on the conversion queue
-  LKF_PERSISTENT  Specifies a lock that will not be unlocked when the process exits.
-  LKF_CONVDEADLK  Enable conversion deadlock
-  LKF_NODLCKWT    Do not consider this lock when trying to detect deadlock conditions
-  LKF_NODLCKBLK   Do not consider this lock as blocking other locks when trying to detect deadlock conditions.
+  LKF_EXPEDITE    Grant a NL lock immediately regardless of other locks
+                  on the conversion queue
+  LKF_PERSISTENT  Specifies a lock that will not be unlocked when the
+                  process exits; it will become an orphan lock.
+  LKF_CONVDEADLK  Enable internal conversion deadlock resolution where
+                  the lock's granted mode may be set to NL and
+                  DLM_SBF_DEMOTED is returned in lksb.sb_flags.
+  LKF_NODLCKWT    Do not consider this lock when trying to detect
+                  deadlock conditions.
+  LKF_NODLCKBLK   Not implemented
   LKF_NOQUEUEBAST Send blocking ASTs even for NOQUEUE operations
   LKF_HEADQUE     Add locks to the head of the convert or waiting queue
-  LKF_NOORDER     Avoid the VMS rules on grant order when using range locks
+  LKF_NOORDER     Avoid the VMS rules on grant order
+  LKF_ALTPR       If the requested mode can't be granted (generally CW),
+                  try to grant in PR and return DLM_SBF_ALTMODE.
+  LKF_ALTCW       If the requested mode can't be granted (generally PR),
+                  try to grant in CW and return DLM_SBF_ALTMODE.
+  LKF_TIMEOUT     The lock will time out per the timeout arg.
 
 .fi
 .PP
@@ -123,7 +134,7 @@
 .B name
 .br
 Name of the lock. Can be binary, max 64 bytes. Ignored for lock
-conversions.
+conversions.  (Should be a string to work with debugging tools.)
 .PP
 .B namelen	
 .br
@@ -152,18 +163,11 @@
 .PP
 .B range
 .br
-an optional structure of two uint64_t that indicate the range
-of the lock. Locks with overlapping ranges will be granted only
-if the lock modes are compatible. locks with non-overlapping
-ranges (on the same resource) do not conflict. A lock with no
-range is assumed to have a range encompassing the largest
-possible range. ie. 0-0xFFFFFFFFFFFFFFFF.  Note that is is more
-efficient to specify no range than to specify the full range
-above.
+This is unused.
 .PP
 .B xid
 .br
-Don't know what this does...Dave!???
+Optional transaction ID for deadlock detection.
 .PP
 .B timeout
 .br
@@ -171,18 +175,26 @@
 (usually because it is already blocked by another lock), then the AST 
 will trigger with ETIMEDOUT as the status. If the lock operation is a conversion
 then the lock will remain at its current status. If this is a new lock then
-the lock will not exist and any LKB in the lksb will be invalid.
+the lock will not exist and any LKB in the lksb will be invalid.  This is
+ignored without the LKF_TIMEOUT flag.
 .PP
 .SS Return values
 0 is returned if the call completed successfully. If not, -1 is returned and errno is set to one of the following:
 .PP
 .nf
-EINVAL		An invalid parameter was passed to the call (eg bad lock mode or flag)
-ENOMEM		A (kernel) memory allocation failed
-EAGAIN		LKF_NOQUEUE was requested and the lock could not be granted
-EBUSY		The lock is currently being locked or converted
-EFAULT		The userland buffer could not be read/written by the kernel (this indicates a library problem)
-EDEADLOCK	The lock operation is causing a deadlock and has been cancelled. If this was a conversion then the lock is reverted to its previously granted state. If it was a new lock then it has not been granted. (NB Only conversion deadlocks are currently detected)
+EINVAL          An invalid parameter was passed to the call (eg bad lock
+                mode or flag)
+ENOMEM          A (kernel) memory allocation failed
+EAGAIN          LKF_NOQUEUE was requested and the lock could not be
+                granted
+EBUSY           The lock is currently being locked or converted
+EFAULT          The userland buffer could not be read/written by the
+                kernel (this indicates a library problem)
+EDEADLOCK       The lock operation is causing a deadlock and has been
+                cancelled. If this was a conversion then the lock is
+                reverted to its previously granted state. If it was a
+                new lock then it has not been granted. (NB Only
+                conversion deadlocks are currently detected)
 .PP
 If an error is returned in the AST, then lksb.sb_status is set to one of the above values instead of zero.
 .SS Structures
@@ -196,10 +208,6 @@
   char     *sb_lvbptr; /* Optional pointer to lock value block */
 };
 
-struct dlm_range {
-  uint64_t ra_start;
-  uint64_t ra_end;
-};
 .fi
 .SH EXAMPLE
 .nf
--- cluster/dlm/man/dlm_unlock.3	2007/08/03 10:27:53	1.3.2.1
+++ cluster/dlm/man/dlm_unlock.3	2007/08/22 14:15:22	1.3.2.2
@@ -26,15 +26,16 @@
 .B flags
 flags affecting the unlock operation:
 .nf
-  LKF_CANCEL    Cancel a pending lock or conversion. 
-                This returns the lock to it's
-                previously granted mode (in case of a
-                conversion) or unlocks it (in case of a waiting lock).
-  LKF_IVVALBLK  Invalidate value block
+  LKF_CANCEL       Cancel a pending lock or conversion. 
+                   This returns the lock to its previously
+                   granted mode (in case of a conversion) or
+                   unlocks it (in case of a waiting lock).
+  LKF_IVVALBLK     Invalidate value block
+  LKF_FORCEUNLOCK  Unlock the lock even if it's waiting.
 .fi
 .PP
 .B lksb
-LKSB to return status and value block information. 
+LKSB to return status and value block information.
 .PP
 .B astarg
 New parameter to be passed to the completion AST.
@@ -50,13 +51,17 @@
 0 is returned if the call completed successfully. If not, -1 is returned and errno is set to one of the following:
 .PP
 .nf
-EINVAL		An invalid parameter was passed to the call (eg bad lock mode or flag)
-EINPROGRESS	The lock is already being unlocked
-EBUSY		The lock is currently being locked or converted
-ENOTEMPTY	An attempt to made to unlock a parent lock that still has child locks.
-ECANCEL		A lock conversion was successfully cancelled
-EUNLOCK		An unlock operation completed successfully (sb_status only)
-EFAULT		The userland buffer could not be read/written by the kernel
+EINVAL          An invalid parameter was passed to the call (eg bad
+                lock mode or flag)
+EINPROGRESS     The lock is already being unlocked
+EBUSY           The lock is currently being locked or converted
+ENOTEMPTY       An attempt was made to unlock a parent lock that still has
+                child locks.
+ECANCEL         A lock conversion was successfully cancelled
+EUNLOCK         An unlock operation completed successfully
+                (sb_status only)
+EFAULT          The userland buffer could not be read/written by the
+                kernel
 .fi
 If an error is returned in the AST, then lksb.sb_status is set to one of the above numbers instead of zero.
 .SH EXAMPLE
--- cluster/dlm/man/libdlm.3	2007/08/03 10:27:53	1.2.2.1
+++ cluster/dlm/man/libdlm.3	2007/08/22 14:15:22	1.2.2.2
@@ -21,7 +21,8 @@
 pthreads is the normal way of using the DLM. This way you simply initialise the DLM's thread and all the AST routines will be delivered in that thread. You just call the dlm_lock() etc routines in the main line of your program.
 .br
 If you don't want to use pthreads or you want to handle the dlm callback ASTs yourself then you can get an FD handle to the DLM device and call 
-.B dlm_dispatch() on it whenever it becomes active. That was ASTs will be delivered in the context of the thread/process that called 
+.B dlm_dispatch()
+on it whenever it becomes active. That way ASTs will be delivered in the context of the thread/process that called 
 .B dlm_dispatch().
 
 
--- cluster/fence/man/fence.8	2007/01/16 19:11:30	1.4.2.1
+++ cluster/fence/man/fence.8	2007/08/22 14:15:22	1.4.2.2
@@ -1,5 +1,5 @@
 .\"  Copyright (C) Sistina Software, Inc.  1997-2003  All rights reserved.
-.\"  Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
+.\"  Copyright (C) 2004-2007 Red Hat, Inc.  All rights reserved.
 .\"  
 .\"  This copyrighted material is made available to anyone wishing to use,
 .\"  modify, copy, or redistribute it subject to the terms and conditions
@@ -28,46 +28,10 @@
 Manages fenced
 .TP
 fence_node
-Calls the fence agent specified in the configuration file
-
-.SS I/O Fencing agents
-
-.TP 20
-fence_apc
-for APC MasterSwitch and APC 79xx models
-.TP
-fence_bladecenter
-for IBM Bladecenters w/ telnet interface
-.TP
-fence_brocade
-for Brocade fibre channel switches (PortDisable)
-.TP
-fence_egenera
-for Egenera blades
-.TP
-fence_gnbd
-for GNBD-based GFS clusters
-.TP
-fence_ilo
-for HP ILO interfaces (formerly fence_rib)
-.TP
-fence_manual
-for manual intervention
-.TP
-fence_mcdata
-for McData fibre channel switches
-.TP
-fence_ack_manual
-for manual intervention
-.TP
-fence_sanbox2
-for Qlogic SAN Box fibre channel switches
-.TP
-fence_vixel
-for Vixel switches (PortDisable)
+Runs the fence agent configured (per cluster.conf) for the given node.
 .TP
-fence_wti
-for WTI Network Power Switch
+fence_*
+Fence agents run by fenced.
 
 .SH SEE ALSO
 gnbd(8), gfs(8)
--- cluster/fence/man/fence_tool.8	2006/11/28 18:00:49	1.6.2.1
+++ cluster/fence/man/fence_tool.8	2007/08/22 14:15:22	1.6.2.2
@@ -1,5 +1,5 @@
 .\"  Copyright (C) Sistina Software, Inc.  1997-2003  All rights reserved.
-.\"  Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
+.\"  Copyright (C) 2004-2007 Red Hat, Inc.  All rights reserved.
 .\"  
 .\"  This copyrighted material is made available to anyone wishing to use,
 .\"  modify, copy, or redistribute it subject to the terms and conditions
@@ -13,34 +13,22 @@
 .SH SYNOPSIS
 .B
 fence_tool
-<\fBjoin | leave | wait\fP> 
+<\fBjoin | leave | dump\fP> 
 [\fIOPTION\fR]...
 
 .SH DESCRIPTION
 \fBfence_tool\fP is a program used to join or leave the default fence
-domain.  Specifically, it starts the fence daemon (fenced) to join the
-domain and kills fenced to leave the domain.  Fenced can be started
-and stopped directly without using this program, but fence_tool takes
-some added steps that are often helpful.
-
-Before joining or leaving the fence domain, fence_tool waits for the
-cluster be in a quorate state.  The user can cancel fence_tool while it's
-waiting for quorum.  It's generally nicer to block waiting for quorum here
-than to have the fence daemon itself waiting to join or leave the domain
-while the cluster is inquorate.
-
-Since \fBfence_tool join\fP is the usual way of starting fenced, the
-fenced options -j, -f, and -c can also be passed to fence_tool which
-passes them on to fenced.
+domain.  It communicates with the fenced daemon.  Before telling fenced
+to join the domain, fence_tool waits for the cluster to have quorum,
+making it easier to cancel the command if the cluster is inquorate.
 
-A node must not leave the fence domain (fenced must not be terminated)
-while CLVM or GFS are in use.
+The dump option will read fenced's ring buffer of debug messages and print
+it to stdout.
 
 .SH OPTIONS
 .TP
 \fB-w\fP
-Wait until the join is completed.  "fence_tool join -w" is
-equivalent to "fence_tool join; fence_tool wait"
+Wait until the join or leave is completed.
 .TP
 \fB-h\fP
 Help.  Print out the usage syntax.
@@ -48,17 +36,11 @@
 \fB-V\fP
 Print version information.
 .TP
-\fB-j\fP \fIsecs\fP
-Post-join fencing delay (passed to fenced)
-.TP
-\fB-f\fP \fIsecs\fP
-Post-fail fencing delay (passed to fenced)
-.TP
-\fB-c\fP
-All nodes are in a clean state to start (passed to fenced)
-.TP
 \fB-t\fP
 Maximum time in seconds to wait (default: 300 seconds)
+.TP
+\fB-Q\fP
+Fail the command immediately if the cluster is not quorate; don't wait.
 
 .SH SEE ALSO
 fenced(8), fence(8), fence_node(8)
--- cluster/fence/man/fenced.8	2005/02/16 14:24:10	1.3
+++ cluster/fence/man/fenced.8	2007/08/22 14:15:22	1.3.2.1
@@ -1,5 +1,5 @@
 .\"  Copyright (C) Sistina Software, Inc.  1997-2003  All rights reserved.
-.\"  Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
+.\"  Copyright (C) 2004-2007 Red Hat, Inc.  All rights reserved.
 .\"  
 .\"  This copyrighted material is made available to anyone wishing to use,
 .\"  modify, copy, or redistribute it subject to the terms and conditions
@@ -16,65 +16,87 @@
 [\fIOPTION\fR]...
 
 .SH DESCRIPTION
-The fencing daemon, \fBfenced\fP, should be run on every node that will
-use CLVM or GFS.  It should be started after the node has joined the CMAN
-cluster (fenced is only used with CMAN; it is not used with GULM/SLM/RLM.)
-A node that is not running \fBfenced\fP is not permitted to mount GFS file
-systems.
-
-All fencing daemons running in the cluster form a group called the "fence
-domain".  Any member of the fence domain that fails is fenced by a
-remaining domain member.  The actual fencing does not occur unless the
-cluster has quorum so if a node failure causes the loss of quorum, the
-failed node will not be fenced until quorum has been regained.  If a
-failed domain member (due to be fenced) rejoins the cluster prior to the
-actual fencing operation is carried out, the fencing operation is
-bypassed.
-
-The fencing daemon depends on CMAN for cluster membership information and
-it depends on CCS to provide cluster.conf information.  The fencing daemon
-calls fencing agents according to cluster.conf information.
+
+The fencing daemon, fenced, fences cluster nodes that have failed.
+Fencing a node generally means rebooting it or otherwise preventing it
+from writing to storage, e.g. disabling its port on a SAN switch.  Fencing
+involves interacting with a hardware device, e.g. network power switch,
+SAN switch, storage array.  Different "fencing agents" are run by fenced
+to interact with various hardware devices.
+
+Software related to sharing storage among nodes in a cluster, e.g. GFS,
+usually requires fencing to be configured to prevent corruption of the
+storage in the presence of node failure and recovery.  GFS will not allow
+a node to mount a GFS file system unless the node is running fenced.
+Fencing happens in the context of a cman/openais cluster.  A node must be
+a cluster member before it can run fenced.
+
+Once started, fenced waits for the 'fence_tool join' command to be run,
+telling it to join the fence domain: a group of nodes managed by the
+openais/cpg/groupd cluster infrastructure.  In most cases, all nodes will
+join the fence domain after joining the cluster.
+
+Fence domain members are aware of the membership of the group, and are
+notified when nodes join or leave.  If a fence domain member fails, one of
+the remaining members will fence it.  If the cluster has lost quorum,
+fencing won't occur until quorum has been regained.  If a failed node is
+reset and rejoins the cluster before the remaining domain members have
+fenced it, the fencing will be bypassed.
 
 .SS Node failure
 
-When a domain member fails, the actual fencing must be completed before
-GFS recovery can begin.  This means any delay in carrying out the fencing
-operation will also delay the completion of GFS file system operations;
-most file system operations will hang during this period.
+When a domain member fails, fenced runs an agent to fence it.  The
+specific agent to run and the parameters the agent requires are all read
+from the cluster.conf file (using libccs) at the time of fencing.  The
+fencing operation against a failed node is not considered complete until
+the exec'ed agent exits.  The exit value of the agent indicates the
+success or failure of the operation.  If the operation failed, fenced will
+retry (possibly with a different agent, depending on the configuration)
+until fencing succeeds.  Other systems such as DLM and GFS will not begin
+their own recovery for a failed node until fenced has successfully
+completed fencing it.  So, a delay or problem in fencing will result in
+other systems like DLM/GFS being blocked.  Information about fencing
+operations will appear in syslog.
 
 When a domain member fails, the actual fencing operation can be delayed by
-a configurable number of seconds (post_fail_delay or -f).  Within this
-time the failed node can rejoin the cluster to avoid being fenced.  This
-delay is 0 by default to minimize the time that applications using GFS are
-stalled by recovery.  A delay of -1 causes the fence daemon to wait
-indefinitely for the failed node to rejoin the cluster.  In this case the
-node is not fenced and all recovery must wait until the failed node
-rejoins the cluster.
+a configurable number of seconds (cluster.conf:post_fail_delay or -f).
+Within this time, the failed node could be reset and rejoin the cluster to
+avoid being fenced.  This delay is 0 by default to minimize the time that
+other systems are blocked (see above).
 
 .SS Domain startup
 
 When the domain is first created in the cluster (by the first node to join
 it) and subsequently enabled (by the cluster gaining quorum) any nodes
-listed in cluster.conf that are not presently members of the CMAN cluster
-are fenced.  The status of these nodes is unknown and to be on the side of
-safety they are assumed to be in need of fencing.  This startup fencing
-can be disabled; but it's only truely safe to do so if an operator is
+listed in cluster.conf that are not presently members of the cman cluster
+are fenced.  The status of these nodes is unknown, and to be on the side
+of safety they are assumed to be in need of fencing.  This startup fencing
+can be disabled, but it's only truly safe to do so if an operator is
 present to verify that no cluster nodes are in need of fencing.
-(Dangerous nodes that need to be fenced are those that had gfs mounted,
-did not cleanly unmount, and are now either hung or unable to communicate
-with other nodes over the network.)
+
+This example illustrates why startup fencing is important.  Take a three
+node cluster with nodes A, B and C; all three have a GFS fs mounted.  All
+three nodes experience a low-level kernel hang at about the same time.  A
+watchdog triggers a reboot on nodes A and B, but not C.  A and B boot back
+up, form the cluster again, gain quorum, join the fence domain, *don't*
+fence node C which is still hung and unresponsive, and mount the GFS fs
+again.  If C were to come back to life, it could corrupt the fs.  So, A
+and B need to fence C when they reform the fence domain since they don't
+know the state of C.  If C *had* been reset by a watchdog like A and B,
+but was just slow in rebooting, then A and B might be fencing C
+unnecessarily when they do startup fencing.
 
 The first way to avoid fencing nodes unnecessarily on startup is to ensure
 that all nodes have joined the cluster before any of the nodes start the
 fence daemon.  This method is difficult to automate.
 
 A second way to avoid fencing nodes unnecessarily on startup is using the
-post_join_delay parameter (or -j option).  This is the number of seconds
-the fence daemon will delay before actually fencing any victims after
-nodes join the domain.  This delay will give any nodes that have been
-tagged for fencing the chance to join the cluster and avoid being fenced.
-A delay of -1 here will cause the daemon to wait indefinitely for all
-nodes to join the cluster and no nodes will actually be fenced on startup.
+cluster.conf:post_join_delay setting (or -j option).  This is the number
+of seconds fenced will delay before actually fencing any victims after
+nodes join the domain.  This delay gives nodes that have been tagged for
+fencing a chance to join the cluster and avoid being fenced.  A delay of
+-1 here will cause the daemon to wait indefinitely for all nodes to join
+the cluster and no nodes will actually be fenced on startup.
 
 To disable fencing at domain-creation time entirely, the -c option can be
 used to declare that all nodes are in a clean or safe state to start.  The
@@ -86,6 +108,13 @@
 are fenced by power cycling.  If nodes are fenced by disabling their SAN
 access, then unnecessarily fencing a node is usually less disruptive.
 
+.SS Fencing override
+
+If a fencing device fails, the agent may repeatedly return errors as
+fenced tries to fence a failed node.  In this case, the admin can manually
+reset the failed node, and then use fence_ack_manual to tell fenced to
+continue without fencing the node.
+
 .SH CONFIGURATION FILE
 Fencing daemon behavior can be controlled by setting options in the
 cluster.conf file under the section <fence_daemon> </fence_daemon>.  See
@@ -96,21 +125,140 @@
 Post-join delay is the number of seconds the daemon will wait before
 fencing any victims after a node joins the domain.
 
-  <fence_daemon post_join_delay="3">
-  </fence_daemon>
+  <fence_daemon post_join_delay="6"/>
 
 Post-fail delay is the number of seconds the daemon will wait before
 fencing any victims after a domain member fails.
 
-  <fence_daemon post_fail_delay="0">
-  </fence_daemon>
+  <fence_daemon post_fail_delay="0"/>
 
 Clean-start is used to prevent any startup fencing the daemon might do.
 It indicates that the daemon should assume all nodes are in a clean state
 to start.
 
-  <fence_daemon clean_start="0">
-  </fence_daemon>
+  <fence_daemon clean_start="0"/>
+
+Override-path is the location of a FIFO used for communication between
+fenced and fence_ack_manual.
+
+  <fence_daemon override_path="/var/run/cluster/fenced_override"/>
+
+.SS Per-node fencing settings
+
+The per-node fencing configuration can become complex and is largely
+specific to the hardware being used.  The general framework begins like
+this:
+
+  <clusternodes>
+
+  <clusternode name="node1" nodeid="1">
+          <fence>
+          </fence>
+  </clusternode>
+
+  <clusternode name="node2" nodeid="2">
+          <fence>
+          </fence>
+  </clusternode>
+
+  ...
+  </clusternodes>
+
+The simple fragment above is a valid configuration: there is no way to
+fence these nodes.  If one of these nodes is in the fence domain and
+fails, fenced will repeatedly fail in its attempts to fence it.  The admin
+will need to manually reset the failed node and then use fence_ack_manual
+to tell fenced to continue on without fencing it (see override above).
+
+There is typically a single method used to fence each node (the name given
+to the method is not significant).  A method refers to a specific device
+listed in the separate <fencedevices> section, and then lists any
+node-specific parameters related to using the device.
+
+  <clusternodes>
+
+  <clusternode name="node1" nodeid="1">
+          <fence>
+             <method name="single">
+                <device name="myswitch" hw-specific-param="x"/>
+             </method>
+          </fence>
+  </clusternode>
+
+  <clusternode name="node2" nodeid="2">
+          <fence>
+             <method name="single">
+                <device name="myswitch" hw-specific-param="y"/>
+             </method>
+          </fence>
+  </clusternode>
+
+  ...
+  </clusternodes>
+
+.SS Fence device settings
+
+This section defines properties of the devices used to fence nodes.  There
+may be one or more devices listed.  The per-node fencing sections above
+reference one of these fence devices by name.
+
+  <fencedevices>
+          <fencedevice name="myswitch" ipaddr="1.2.3.4" .../>
+  </fencedevices>
+
+.SS Multiple methods for a node
+
+In more advanced configurations, multiple fencing methods can be defined
+for a node.  If fencing fails using the first method, fenced will try the
+next method, and continue to cycle through methods until one succeeds.
+
+  <clusternode name="node1" nodeid="1">
+          <fence>
+             <method name="first">
+                <device name="powerswitch" hw-specific-param="x"/>
+             </method>
+
+             <method name="second">
+                <device name="storageswitch" hw-specific-param="1"/>
+             </method>
+          </fence>
+  </clusternode>
+
+.SS Dual path, redundant power
+
+Sometimes fencing a node requires disabling two power ports or two i/o
+paths.  This is done by specifying two or more devices within a method.
+
+  <clusternode name="node1" nodeid="1">
+          <fence>
+             <method name="single">
+                <device name="sanswitch1" hw-specific-param="x"/>
+                <device name="sanswitch2" hw-specific-param="x"/>
+             </method>
+          </fence>
+  </clusternode>
+
+When using power switches to fence nodes with dual power supplies, the
+agents must be told to turn off both power ports before restoring power to
+either port.  The default off-on behavior of the agent could result in the
+power never being fully disabled to the node.
+
+  <clusternode name="node1" nodeid="1">
+          <fence>
+             <method name="single">
+                <device name="nps1" hw-param="x" action="off"/>
+                <device name="nps2" hw-param="x" action="off"/>
+                <device name="nps1" hw-param="x" action="on"/>
+                <device name="nps2" hw-param="x" action="on"/>
+             </method>
+          </fence>
+  </clusternode>
+
+.SS Hardware-specific settings
+
+Find documentation for configuring specific devices at
+.BR
+http://sources.redhat.com/cluster/
 
 .SH OPTIONS
 Command line options override corresponding values in cluster.conf.
@@ -124,18 +272,22 @@
 \fB-c\fP 
 All nodes are in a clean state to start.
 .TP
+\fB-O\fP
+Path of the override fifo.
+.TP
 \fB-D\fP
 Enable debugging code and don't fork into the background.
 .TP
-\fB-n\fP \fIname\fP
-Name of the fence domain, "default" if none.
-.TP
 \fB-V\fP
 Print the version information and exit.
 .TP
 \fB-h\fP 
 Print out a help message describing available options, then exit.
 
+.SH DEBUGGING
+The fenced daemon keeps a circular buffer of debug messages that can be
+dumped with the 'fence_tool dump' command.
+
 .SH SEE ALSO
-gfs(8), fence(8)
+fence_tool(8), cman(8), groupd(8), group_tool(8)
 
--- cluster/group/Makefile	2006/08/11 15:18:14	1.7
+++ cluster/group/Makefile	2007/08/22 14:15:22	1.7.2.1
@@ -30,6 +30,7 @@
 	${MAKE} -C tool install
 	${MAKE} -C dlm_controld install
 	${MAKE} -C gfs_controld install
+	${MAKE} -C man install
 
 distclean: clean
 	rm -f make/defines.mk
/cvs/cluster/cluster/group/man/Makefile,v  -->  standard output
revision 1.1.2.1
--- cluster/group/man/Makefile
+++ -	2007-08-22 14:15:25.233311000 +0000
@@ -0,0 +1,30 @@
+###############################################################################
+###############################################################################
+##
+##  Copyright (C) 2007 Red Hat, Inc.  All rights reserved.
+##  
+##  This copyrighted material is made available to anyone wishing to use,
+##  modify, copy, or redistribute it subject to the terms and conditions
+##  of the GNU General Public License v.2.
+##
+###############################################################################
+###############################################################################
+
+TARGET8= \
+	groupd.8 \
+	group_tool.8 \
+	dlm_controld.8 \
+	gfs_controld.8
+
+UNINSTALL=${top_srcdir}/scripts/uninstall.pl
+
+top_srcdir=..
+
+include ${top_srcdir}/make/defines.mk
+
+install:
+	install -d ${mandir}/man8
+	install ${TARGET8} ${mandir}/man8
+
+uninstall:
+	${UNINSTALL} ${TARGET8} ${mandir}/man8
/cvs/cluster/cluster/group/man/dlm_controld.8,v  -->  standard output
revision 1.1.2.1
--- cluster/group/man/dlm_controld.8
+++ -	2007-08-22 14:15:25.332352000 +0000
@@ -0,0 +1,129 @@
+.\"  Copyright (C) 2007 Red Hat, Inc.  All rights reserved.
+.\"  
+.\"  This copyrighted material is made available to anyone wishing to use,
+.\"  modify, copy, or redistribute it subject to the terms and conditions
+.\"  of the GNU General Public License v.2.
+
+.TH dlm_controld 8
+
+.SH NAME
+dlm_controld - daemon that configures dlm according to cluster events
+
+.SH SYNOPSIS
+.B
+dlm_controld
+[\fIOPTION\fR]...
+
+.SH DESCRIPTION
+The dlm lives in the kernel, and the cluster infrastructure (cluster
+membership and group management) lives in user space.  The dlm in the
+kernel needs to adjust/recover for certain cluster events.  It's the job
+of dlm_controld to receive these events and reconfigure the kernel dlm as
+needed.  dlm_controld controls and configures the dlm through sysfs and
+configfs files that are considered dlm-internal interfaces, not a general
+API/ABI.
+
+The dlm also exports lock state through debugfs so that dlm_controld can
+implement deadlock detection in user space.
+
+.SH CONFIGURATION FILE
+
+Optional cluster.conf settings are placed in the <dlm> section.
+
+.SS Global settings
+The network
+.I protocol
+can be set to "tcp" or "sctp".  The default is tcp.
+
+  <dlm protocol="tcp"/>
+
+After waiting
+.I timewarn
+centiseconds, the dlm will emit a warning via netlink.  This only applies
+to lockspaces created with the DLM_LSFL_TIMEWARN flag, and is used for
+deadlock detection.  The default is 500 (5 seconds).
+
+  <dlm timewarn="500"/>
+
+DLM kernel debug messages can be enabled by setting
+.I log_debug
+to 1.  The default is 0.
+
+  <dlm log_debug="0"/>
+
+.SS Disabling resource directory
+
+Lockspaces usually use a resource directory to keep track of which node is
+the master of each resource.  The dlm can operate without the resource
+directory, though, by statically assigning the master of a resource using
+a hash of the resource name.
+
+  <dlm>
+    <lockspace name="foo" nodir="1"/>
+  </dlm>
+
+.SS Lock-server configuration
+
+The nodir setting can be combined with node weights to create a
+configuration where select node(s) are the master of all resources/locks.
+These "master" nodes can be viewed as "lock servers" for the other nodes.
+
+  <dlm>
+    <lockspace name="foo" nodir="1">
+      <master name="node01"/>
+    </lockspace>
+  </dlm>
+
+or,
+
+  <dlm>
+    <lockspace name="foo" nodir="1">
+      <master name="node01"/>
+      <master name="node02"/>
+    </lockspace>
+  </dlm>
+
+Lock management will be partitioned among the available masters.  There
+can be any number of masters defined.  The designated master nodes will
+master all resources/locks (according to the resource name hash).  When no
+masters are members of the lockspace, the nodes revert to the common
+fully-distributed configuration.  Recovery is faster, with little
+disruption, when a non-master node joins or leaves.
+
+There is no special mode in the dlm for this lock server configuration;
+it is simply a natural consequence of combining the "nodir" option with
+node weights.  When a lockspace has master nodes defined, each master has
+a default weight of 1 and all non-master nodes have a weight of 0.
+Explicit non-zero weights can also be assigned to master nodes, e.g.
+
+  <dlm>
+    <lockspace name="foo" nodir="1">
+      <master name="node01" weight="2"/>
+      <master name="node02" weight="1"/>
+    </lockspace>
+  </dlm>
+
+In this case node01 will master 2/3 of the total resources and node02 will
+master the other 1/3.
+
+
+.SH OPTIONS
+.TP
+\fB-d\fP <num>
+Enable (1) or disable (0) the deadlock detection code.
+.TP
+\fB-D\fP
+Run the daemon in the foreground and print debug statements to stdout.
+.TP
+\fB-K\fP
+Enable kernel dlm debugging messages.
+.TP
+\fB-V\fP
+Print the version information and exit.
+.TP
+\fB-h\fP 
+Print out a help message describing available options, then exit.
+
+.SH SEE ALSO
+groupd(8)
+
/cvs/cluster/cluster/group/man/gfs_controld.8,v  -->  standard output
revision 1.1.2.1
--- cluster/group/man/gfs_controld.8
+++ -	2007-08-22 14:15:25.432625000 +0000
@@ -0,0 +1,70 @@
+.\"  Copyright (C) 2007 Red Hat, Inc.  All rights reserved.
+.\"  
+.\"  This copyrighted material is made available to anyone wishing to use,
+.\"  modify, copy, or redistribute it subject to the terms and conditions
+.\"  of the GNU General Public License v.2.
+
+.TH gfs_controld 8
+
+.SH NAME
+gfs_controld - daemon that manages mounting, unmounting, recovery and
+posix locks
+
+.SH SYNOPSIS
+.B
+gfs_controld
+[\fIOPTION\fR]...
+
+.SH DESCRIPTION
+GFS lives in the kernel, and the cluster infrastructure (cluster
+membership and group management) lives in user space.  GFS in the kernel
+needs to adjust/recover for certain cluster events.  It's the job of
+gfs_controld to receive these events and reconfigure gfs as needed.
+gfs_controld controls and configures gfs through sysfs files that are
+considered gfs-internal interfaces, not a general API/ABI.
+
+Mounting, unmounting and node failure are the main cluster events that
+gfs_controld controls.  It also manages the assignment of journals to
+different nodes.  The mount.gfs and umount.gfs programs communicate with
+gfs_controld to join/leave the mount group and receive the necessary
+options for the kernel mount.
+
+GFS also sends all posix lock operations to gfs_controld for processing.
+gfs_controld manages cluster-wide posix locks for gfs and passes results
+back to gfs in the kernel.
+
+.SH OPTIONS
+.TP
+\fB-l\fP <num>
+Limit the rate at which posix lock messages are sent to <num> messages per
+second.  0 disables the limit and results in the maximum performance of
+posix locks.  Default is 100.
+.TP
+\fB-w\fP
+Disable the "withdraw" feature.
+.TP
+\fB-p\fP
+Disable posix lock handling.
+.TP
+\fB-D\fP
+Run the daemon in the foreground and print debug statements to stdout.
+.TP
+\fB-P\fP
+Enable posix lock debugging messages.
+.TP
+\fB-V\fP
+Print the version information and exit.
+.TP
+\fB-h\fP 
+Print out a help message describing available options, then exit.
+
+.SH DEBUGGING 
+The gfs_controld daemon keeps a circular buffer of debug messages that can
+be dumped with the 'group_tool dump gfs' command.
+
+The state of all gfs posix locks can also be dumped from gfs_controld with
+the 'group_tool dump plocks <fsname>' command.
+
+.SH SEE ALSO
+groupd(8), group_tool(8)
+
/cvs/cluster/cluster/group/man/group_tool.8,v  -->  standard output
revision 1.1.2.1
--- cluster/group/man/group_tool.8
+++ -	2007-08-22 14:15:25.539085000 +0000
@@ -0,0 +1,67 @@
+.\"  Copyright (C) 2007 Red Hat, Inc.  All rights reserved.
+.\"  
+.\"  This copyrighted material is made available to anyone wishing to use,
+.\"  modify, copy, or redistribute it subject to the terms and conditions
+.\"  of the GNU General Public License v.2.
+
+.TH group_tool 8
+
+.SH NAME
+group_tool - display/dump information about fence, dlm and gfs groups
+
+.SH SYNOPSIS
+.B
+group_tool
+[\fISUBCOMMAND\fR] [\fIOPTION\fR]...
+
+.SH DESCRIPTION
+
+The group_tool program displays the status of fence, dlm and gfs groups.
+The information is read from the groupd daemon which controls the fenced,
+dlm_controld and gfs_controld daemons.  group_tool will also dump debug
+logs from various daemons.
+
+.SH SUBCOMMANDS
+
+.TP
+\fBls\fP
+displays the list of groups and their membership.  It is the default
+subcommand if none is specified.
+
+.TP
+\fBdump\fP
+dumps the debug log from groupd.
+
+.TP
+\fBdump fence\fP
+dumps the debug log from fenced.
+
+.TP
+\fBdump gfs\fP
+dumps the debug log from gfs_controld.
+
+.TP
+\fBdump plocks\fP <fsname>
+prints the posix locks on the named gfs fs from gfs_controld.
+
+.SH OPTIONS
+.TP
+\fB-v\fP
+Verbose output, used with the 'ls' subcommand.
+.TP
+\fB-D\fP
+Run the daemon in the foreground and print debug statements to stdout.
+.TP
+\fB-V\fP
+Print the version information and exit.
+.TP
+\fB-h\fP 
+Print out a help message describing available options, then exit.
+
+.SH DEBUGGING
+The groupd daemon keeps a circular buffer of debug messages that can be
+dumped with the 'group_tool dump' command.
+
+.SH SEE ALSO
+groupd(8)
+
/cvs/cluster/cluster/group/man/groupd.8,v  -->  standard output
revision 1.1.2.1
--- cluster/group/man/groupd.8
+++ -	2007-08-22 14:15:25.646162000 +0000
@@ -0,0 +1,49 @@
+.\"  Copyright (C) 2007 Red Hat, Inc.  All rights reserved.
+.\"  
+.\"  This copyrighted material is made available to anyone wishing to use,
+.\"  modify, copy, or redistribute it subject to the terms and conditions
+.\"  of the GNU General Public License v.2.
+
+.TH groupd 8
+
+.SH NAME
+groupd - the group manager for fenced, dlm_controld and gfs_controld
+
+.SH SYNOPSIS
+.B
+groupd
+[\fIOPTION\fR]...
+
+.SH DESCRIPTION
+
+The group daemon, groupd, provides a compatibility layer between the
+openais closed process group (CPG) service and the fenced, dlm_controld
+and gfs_controld daemons.  groupd and its associated libgroup interface
+will go away in the future as the fencing, dlm and gfs daemons are ported
+to use the libcpg interfaces directly.  groupd translates and buffers cpg
+events between openais's cpg service and the fence/dlm/gfs systems that
+use it.  CPGs are used to represent the membership of the fence domain,
+dlm lockspaces and gfs mount groups.
+
+groupd is also a convenient place to query the status of the fence, dlm
+and gfs groups.  This is done by the group_tool program.
+
+
+.SH OPTIONS
+.TP
+\fB-D\fP
+Run the daemon in the foreground and print debug statements to stdout.
+.TP
+\fB-V\fP
+Print the version information and exit.
+.TP
+\fB-h\fP 
+Print out a help message describing available options, then exit.
+
+.SH DEBUGGING
+The groupd daemon keeps a circular buffer of debug messages that can be
+dumped with the 'group_tool dump' command.
+
+.SH SEE ALSO
+group_tool(8)
+

