[Cluster-devel] conga/luci/docs user_manual.html

jparsons at sourceware.org
Mon Jan 15 16:00:48 UTC 2007


CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	jparsons at sourceware.org	2007-01-15 16:00:48

Modified files:
	luci/docs      : user_manual.html 

Log message:
	user manual edits - thanks, paul k

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/docs/user_manual.html.diff?cvsroot=cluster&r1=1.9&r2=1.10

--- conga/luci/docs/user_manual.html	2006/11/03 21:47:27	1.9
+++ conga/luci/docs/user_manual.html	2007/01/15 16:00:48	1.10
@@ -6,83 +6,179 @@
   <h1>Conga User Manual</h1> All you need to know to get Conga up and running
   <h2>Introduction</h2>
    <h3>Conga Architecture</h3>
-  Conga is an agent/server architecture for remote administration of systems. The agent component is called 'ricci', and the server is called luci. One luci server can communicate with many multiple ricci agents installed on systems.
-  When a system is added to a luci server to be administered, authentication is done once. No authentication is necessary from then on (unless the certificate used is revoked by a CA, but in fact, CA integration is not complete in version #1 of conga). Through the UI provided by luci, users can configure and administer storage and cluster behavior on remote systems. Communication between luci and ricci is done via XML.
+
+  Conga is an agent/server architecture for remote administration of
+  systems. The agent component is called "ricci", and the server is called
+  "luci". One luci server can communicate with many ricci agents
+  installed on systems.  When a system is added to a luci server to be
+  administered, authentication is done once. No authentication is necessary from
+  then on (unless the certificate used is revoked by a CA, but in fact, CA
+  integration is not complete in version #1 of conga). Through the UI provided
+  by luci, users can configure and administer storage and cluster behavior on
+  remote systems. Communication between luci and ricci is done via XML.
+
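The manual states only that luci and ricci exchange XML, without showing the schema. The following is a hypothetical sketch of building such a request with Python's standard library; the element and attribute names (`request`, `function_call`, `API_version`, `get_mappers`) are invented for illustration and are not the real ricci protocol.

```python
# Hypothetical sketch of an XML request such as luci might send to a ricci
# agent. All element and attribute names here are invented for illustration;
# the actual ricci protocol defines its own schema.
import xml.etree.ElementTree as ET

def build_request(module, function):
    """Serialize a simple <request> document for a remote function call."""
    root = ET.Element("request", {"API_version": "1.0"})
    call = ET.SubElement(root, "function_call", {"name": function})
    call.set("module", module)
    return ET.tostring(root, encoding="unicode")

xml_text = build_request("storage", "get_mappers")
```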
    <h3>Luci Description</h3>
-    As stated above, systems to be administered are 'added' to a luci server (in the documentation that follows, the term 'registered' is also used to mean that a system has been added to a luci server to administered remotely). This is done by storing the hostname (FQDN) or IP address of the system in the luci database. When a luci server is first installed, the database is empty. It is possible, however, to import part or all of a systems database from an existing luci server when deploying a new luci server. This capability provides a means for replication of a luci server instance, as well as an easier upgrade and testing path.
-  <p/>
-  Every luci server instance has one user at initial installation time. This user is called 'admin'. Only the admin user may add systems to a luci server. The admin user can also create additional user accounts and determine which users are allowed to access which systems in the luci server database. It is possible to import users as a batch operation in a new luci server, just as it is possible to import systems.
-    <h4>Installation of Luci</h4> 
-    After the necessary luci RPMs are installed, the server can be started with the command "service luci start". The first time the server is started, a couple of events take place. The first is that the server is initialized by generating https SSL certificates for the server. An initial password for the admin user is generated as a random value. The admin password can be set any time by running the /usr/sbin/luci_admin application and specifying 'password' on the command line. luci_admin can be run before luci is started for the first time to set up an initial password for the admin account. Other utilities available from luci_admin are:
+
+    As stated above, systems to be administered are "added" to a luci server (in
+  the documentation that follows, the term "registered" is also used to mean
+  that a system has been added to a luci server to be administered remotely). This
+  is done by storing the hostname (FQDN) or IP address of the system in the luci
+  database. When a luci server is first installed, the database is empty. It is
+  possible, however, to import part or all of a systems database from an
+  existing luci server when deploying a new luci server. This capability
+  provides a means for replication of a luci server instance, as well as an
+  easier upgrade and testing path.  <p/>
+
+  Every luci server instance has one user at initial installation time. This
+  user is called "admin". Only the admin user may add systems to a luci
+  server. The admin user can also create additional user accounts and determine
+  which users are allowed to access which systems in the luci server
+  database. It is possible to import users as a batch operation in a new luci
+  server, just as it is possible to import systems.
+
+    <h4>Installation of Luci</h4> After the luci RPMs are installed, the server
+    can be started with the command "service luci start". The first time the
+    server is started, it is initialized by generating https SSL certificates
+    for that server and an initial password for the admin user is generated as a
+    random value. The admin password can be set any time by running the
+    /usr/sbin/luci_admin application and specifying "password" on the command
+    line. luci_admin can be run before luci is started for the first time to set
+    up an initial password for the admin account. Other utilities available from
+    luci_admin are:
+
   <ul><li>backup:  This option backs the luci server up to a file.</li>
       <li>restore: This restores a luci site from a backup file.</li>
       <li>init: This option regenerates ssl certs.</li>
       <li>help: Shows usage</li>
   </ul>
     <h4>Logging In</h4>
-    With the luci service running and an admin password set up, the next step is to log in to the server. Remember to specify https in the browser. Port 8084 is the default port for luci, but this value can be easily changed in /etc/sysconfig/luci.
+    With the luci service running and an admin password set up, the next step is
+      to log in to the server. Remember to specify https in the browser. Port
+      8084 is the default port for luci, but this value can be easily changed in
+      /etc/sysconfig/luci.
   <br/>
   Typical URL: https://hostname.org:8084/luci
   <p/>
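The URL pattern above can be expressed as a small helper; a sketch, with the 8084 default taken from the text (a different port may be set in /etc/sysconfig/luci):

```python
# Build the https URL for a luci server. Port 8084 is the documented default;
# a different port may be configured in /etc/sysconfig/luci.
def luci_url(hostname, port=8084):
    return "https://%s:%d/luci" % (hostname, port)
```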
-  Here is a screenshot of the luci login page.<br/>
+  Here is a screen shot of the luci login page.<br/>
   <img src="./ss_login1.png"/>
   <br/>
   <b>Figure #1: Login Page</b>
   <p/>
   <p/>
-  Enter admin as the user name, and then enter the admin password that has been set up in the appropriate field, then click 'log in'.
+  Enter admin as the user name, and then enter the admin password that has been
+  set up in the appropriate field, then click "log in". 
     <h4>UI Organization</h4>
-    luci is set up with three tabs right now. They are:
-    <ul><li>Homebase: This is where admin tools for adding and deleting systems or users are located. Only admin is allowed access to this tab.</li>
-        <li>Cluster: If any clusters are set up with the luci server, they will show up in a list in this tab. If a user other than admin navigates to the cluster tab, only those clusters that the user has permission to manage show up in the cluster list. The cluster tab provides a means for creating and configuring clusters.</li>
-       <li>Storage: Remote administration of storage is available through this page in the luci site.</li>
+    luci is set up with three tabs:
+    <ul><li>Homebase: This is where admin tools for adding and deleting systems
+  or users are located. Only admin is allowed access to this tab.</li> 
+        <li>Cluster: If any clusters are set up with the luci server, they will
+  show up in a list in this tab. If a user other than admin navigates to the
+  cluster tab, only those clusters that the user has permission to manage show
+  up in the cluster list. The cluster tab provides a means for creating and
+  configuring clusters.</li> 
+       <li>Storage: Remote administration of storage is available through this
+  page in the luci site.</li> 
     </ul>
   <h2>Homebase Tab</h2>
   The following figure shows the entry look of the Homebase tab.<br/>
-  <img src="./ss_homebase1.png"/>
+  <img src="./ss_homebase1.png"/> 
   <br/>
   <b>Figure #2: Homebase Tab</b>
   <p/>
   <p/>
-  With no systems registered with a luci server, the homebase page provides 3 initial utilities to the admin:
+  With no systems registered with a luci server, the Homebase page provides three
+  initial utilities to the admin: 
   <ul><li>Add a System</li>
       <li>Add an Existing Cluster</li>
       <li>Add a User</li>
   </ul>
-  After systems have been added to a luci server, the following link become available in the navigation table:
-  <ul><li>Manage Systems</li>
-  </ul>
+
+  After systems have been added to a luci server, the Manage Systems link becomes
+  available in the navigation table.<p/>
+  
   After users have been added to a luci server, the following links become available in the navigation table:
   <ul>
       <li>User Permissions</li>
       <li>Delete User</li>
   </ul>
 
-  <h4>Add a System:</h4> Adding a single system to luci in this first release makes the system available for remote storage administration. In addition to storage administration, conga also provides remote package retrieval and installation, chkconfig functionality, full remote cluster administration, and module support to filter and retrieve log entries. The storage and cluster UIs use some of this broad functionality, but at this time UI has not been built for all that conga will do remotely. <p/>
-  To add a system, click on the 'Add a System' link in the left hand nav table. This will load the following page:
+  <h4>Add a System:</h4> Adding a single system to luci makes the system
+  available for remote storage administration. In addition to 
+  storage administration, Conga provides remote package retrieval and
+  installation, the chkconfig function, full remote cluster administration, and
+  module support to filter and retrieve log entries. <p/> 
+  To add a system, click on the Add a System link in the left hand navigation
+  table. This will load the following page: 
   <img src="./ss_homebase2.png"/><br/>
   <b>Figure #3: Add a System</b>
   <p/>
   <p/>
-  The fully qualified domain name  OR IP Address of the system is entered in the System Hostname field. The root passsword for the system is entered in the adjacent field. As a convenience for adding multiple systems at once, and 'Add Another Entry' button is provided. Whhen this button is clicked and at least one additional entry row has been provided, a checkbox is also made available that can be selected if all systems specified for addition to the luci server share the same password.
+  The fully qualified domain name OR IP Address of the system is entered in the
+  System Hostname field. The root password for the system is entered in the
+  adjacent field. As a convenience for adding multiple systems at once, an Add
+  Another Entry button is provided. When this button is clicked and at least
+  one additional entry row has been provided, a checkbox is also made available
+  that can be selected if all systems specified for addition to the luci server
+  share the same password. 
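The row-handling rules just described (blank hostname rows skipped, an optional shared password applied to every row) can be sketched as follows. This is an illustrative model of the form logic, not luci's actual code:

```python
# Illustrative model of the Add a System form: rows with a blank hostname are
# disregarded, and when the shared-password checkbox is set, the first row's
# password is applied to every remaining row. Not luci's implementation.
def prepare_rows(rows, same_password=False):
    """rows: list of (hostname, password) tuples from the form."""
    kept = [(h.strip(), p) for h, p in rows if h.strip()]
    if same_password and kept:
        shared = kept[0][1]
        kept = [(h, shared) for h, _ in kept]
    return kept
```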
   <img src="./ss_homebase3.png"/><br/>
   <b>Figure #4: Multiple System Entries</b>
   <p/>
   <p/>
-  If the System Hostname is left blank for any row, it is disregarded when the list of systems is submitted for addition. If systems in the list of rows do NOT share the same password (and the checkbox is, of course, left unchecked) and one ior more passwords are incorrect, an error message is generated for each system that has an incorrect password. Those systems listed with correct passwords are added to the luci server. Inn addition to incorrect password problems, an error message is also displayed if luci is unable to connect to the ricci agent on a system. Finally, is a system is entered on the form for addition and it is ALREADY being managed by the luci server, it is not added again - but the admin is informed via error message.<p/>
-  <h4>Add an Existing Cluster:</h4> This page looks much like the Add a System page, only one system may be listed. Any node in the cluster may bbe used for this entry.  Luci will contact tthe specified system and attempt to authenticate with the password provided. If successful, the complete list of cluster nodes will be returned, and a table will be populated with the node names and an adjacent field for a password for each node. The initial node that was entered appears in tthe list with its password field marked as 'authennticated'. There is a convenience checkbox if all nodes share the same password. NOTE: At this point, no cluster nodes have been added to luci - not even the initial node used to retrieve the cluster node list that successfully autthenticated. The cluster and subsequent nodes are only added after the entire list has been submitted with tthe submit button, and all nodes authenticate.  <p/>
-If any nodes fail to authenticate, they appear in the list in red font, so that the password can be corrected and the node list submittted aggain. Luci hhas a strict policy about addinng a cluster to be managed: A cluster cannot be added unless ALL nodes can be reached and authenticated.
-  <p/>When a cluster is added to a luci server, all nodes are also added as general systems so that storage may be managed on them. If this is not desired, the individual systems may be removed fromm luci, while remote cluster management capability is maintained.<p/>
-  Note: If an admin desires to create a new cluster, this capability is available on the Cluster tab. This task link is only for adding and managing clusters that already exist.<p/>
-  <h4>Add a User: </h4>Here the admin may add additional user accounts. The user name is entered along with an initial password.
+  If the System Hostname is left blank for any row, it is disregarded when the
+  list of systems is submitted for addition. If systems in the list of rows do
+  NOT share the same password (and the checkbox is, of course, left unchecked)
+  and one or more passwords are incorrect, an error message is generated for
+  each system that has an incorrect password. The systems listed with correct
+  passwords are added to the luci server. In addition to incorrect password
+  problems, an error message is also displayed if luci is unable to connect to
+  the ricci agent on a system. Finally, if a system is entered on the form for
+  addition and it is ALREADY being managed by the luci server, the system is not added
+  again (but the administrator is informed via an error message).<p/>
+
+  <h4>Add an Existing Cluster:</h4> This page looks much like the Add a System
+  page, except that only one system may be listed. Any node in the cluster may
+  be used for this entry.  Luci will contact the specified system and attempt to
+  authenticate with the password provided. If successful, the complete list of
+  cluster nodes will be returned, and a table will be populated with the node
+  names and an adjacent password field for each node. The initial node
+  that was entered appears in the list with its password field marked as
+  "authenticated". There is a convenience checkbox if all nodes share the same
+  password. NOTE: At this point, no cluster nodes have been added to luci - not
+  even the initial node used to retrieve the cluster node list that successfully
+  authenticated. The cluster and subsequent nodes are only added after the
+  entire list has been submitted with the Submit button, and all nodes
+  authenticate.  <p/>
+
+If any nodes fail to authenticate, they appear in the list in red font, so that
+  the password can be corrected and the node list submitted again. Luci has a
+  strict policy about adding a cluster to be managed: A cluster cannot be added
+  unless ALL nodes can be reached and authenticated. 
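Luci's strict all-or-nothing policy can be modeled like this; an illustrative sketch with a caller-supplied `authenticate` callable, not luci's actual code:

```python
# Sketch of the all-or-nothing rule: a cluster is added only if every node
# authenticates. `authenticate` is a caller-supplied callable; failed nodes
# are returned so the UI can flag them (in red) for another attempt.
def try_add_cluster(nodes, authenticate):
    failed = [n for n in nodes if not authenticate(n)]
    if failed:
        return False, failed   # nothing is added; these nodes need new passwords
    return True, []            # all nodes authenticated; the cluster may be added
```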
+  <p/>When a cluster is added to a luci server, all nodes are also added as general systems so that storage may be managed on them. If this is not desired, the individual systems may be removed from luci, while remote cluster management capability is maintained.<p/>
+  NOTE: If an administrator desires to create a new cluster, this capability is
+  available on the Cluster tab. This task link is only for adding and managing
+  clusters that already exist.<p/> 
+  <h4>Add a User: </h4>Here the admin may add additional user accounts. The user
+  name is entered along with an initial password. 
   <img src="./ss_homebase4.png"/><br/>
   <b>Figure #5: Add a User</b>
   <p/>
   <p/>
-  As stated above, after systems have been added to a luci server, an additional Manage Systems link appears in the navigation table. The Manage Systems page provides a way to delete systems if desired.
-  <p/>
-  When an admin adds a new user to a luci server, two additional links appear in the Navigation Table: A Delete User link, and a User Permissions link. The Delete User link is self explanatory, and this page lists all users other than the admin, in a dropdown menu. Selecting a user name and then clicking the 'Delete This User' button removes that user account from the luci server.<br/>
-  The User Permissions page is where an admin grants privileges to user accounts. A dropdown menu lists all current users, followed by a list of all systems registered with the luci server. By selecting a user from the dropdown, the context is set for the page, and then those systems that the admin wishes to allow the user to administer are checked. Finally, the 'Update Permissions' button is clicked to persist the privileges. By default, when a new user is created, they have no privileges on any system.
+  As stated above, after systems have been added to a luci server, an additional
+  Manage Systems link appears in the navigation table. The Manage Systems page
+  provides a way to delete systems if desired. 
+  <p/>
+  When an administrator adds a new user to a luci server, two additional links appear in
+  the Navigation Table: A Delete User link, and a User Permissions link. The
+  Delete User link is self-explanatory, and this page lists all users other than
+  the admin in a dropdown menu. Selecting a user name and then clicking the
+  Delete This User button removes that user account from the luci server.<br/> 
+  The User Permissions page is where an administrator grants privileges to user
+  accounts. A dropdown menu lists all current users, followed by a list of all
+  systems registered with the luci server. By selecting a user from the
+  dropdown, the context is set for the page, and then those systems that the
+  admin wishes to allow the user to administer are checked. Finally, the Update
+  Permissions button is clicked to persist the privileges. By default, a newly
+  created user has no permissions on any system. 
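The permission model described above (admin may manage any system, other users only the systems checked for them, new users nothing) could be sketched as:

```python
# Illustrative model of luci's permission rule: the admin user may manage any
# system; other users only those systems granted to them on the User
# Permissions page; a newly created user starts with an empty grant set.
# Not luci's actual code.
def may_manage(user, system, grants):
    """grants: dict mapping user name -> set of permitted system names."""
    if user == "admin":
        return True
    return system in grants.get(user, set())
```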
   <img src="./ss_homebase5.png"/><br/>
   <b>Figure #6: User Permissions Page</b>
   <p/>
@@ -90,105 +186,235 @@
   
   <h2>Cluster Tab</h2>
   When the cluster tab is selected, luci first checks the identity of the user and 
-compiles a list of clusters that the current user is privileged to administer.
-If the current user is not privileged to access any of the clusters registered on the luci server, they are informed accordingly. If the current user is the admin, then all clusters are accessible.
-  <p/>
-  After selecting the Cluster tab, a page is display that offers a summary 
-list of all registered clusters on the luci server that are accessible by the current user. Each cluster is identified by name, and the name is a link to the properties page for that specific cluster. In addition, the health of the cluster can be quickly assessed - green indicates good health, and red indicates a problem.
+compiles a list of clusters that the current user is permitted to administer.
+If the current user is not permitted to access any of the clusters registered
+  on the luci server, they are informed accordingly. If the current user is the
+  admin, then all clusters are accessible. 
+  <p/>
+  Selecting the Cluster tab causes a page to be displayed that lists all
+  registered clusters on the luci server that are accessible by the 
+  current user. Each cluster is identified by name and presents a link to
+  the properties page for that cluster. In addition, the health of the
+  cluster can be quickly assessed - green indicates good health, and red
+  indicates a problem. 
  <p/>
- The nodes of the cluster are also listed, and their health can be spotted depending on their font color. Green means healthy and part of the cluster; red means not part of the cluster, and gray means that the node is not responding and in an unknown state.
+ The nodes of the cluster are also listed, with health indicated by font
+  color. Green means healthy and part of the cluster; 
+  red means not part of the cluster, and gray means that the node is not
+  responding and in an unknown state. 
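The color coding just described amounts to a small status-to-color mapping; a sketch, with the status names invented for illustration:

```python
# Map a node's cluster-membership status to the font color the text
# describes: green = healthy member, red = not part of the cluster,
# gray = not responding / unknown. Status names are invented here.
NODE_COLORS = {
    "member": "green",
    "not_member": "red",
    "unknown": "gray",
}

def node_color(status):
    return NODE_COLORS.get(status, "gray")  # unrecognized falls back to unknown
```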
   <br/>
-  The cluster list page offers some additional summary information about each cluster. Whether not the cluster is quorate is specified, as is total cluster votes. A dropdown menu allows a cluster to be started, stopped, or restarted. Finally, services for the cluster are listed as links, annd again, their health is identified by their font color.
-  <p/>
-  On the left hand side of every cluster tab page is a navigation table with three links: Cluster List, Create, and Configure. The default page is the Cluster List page. The Create page is for creating a new cluster. Selecting the Configure link displays a short list of clusters in tthe navigation table. Choosing a cluster name takes the user to the properties page for that cluster (the cluster name link on the Cluster List page performs the same action).
+  The Cluster List page offers some additional summary information about each
+  cluster. It displays quorum status and the total cluster votes.  A dropdown
+  menu allows a cluster to be started, stopped, or
+  restarted. Finally, services for the cluster are listed as links, with their
+  health indicated by font color. 
+  <p/>
+  On the left side of every cluster tab page is a navigation table with
+  three links: Cluster List, Create, and Configure. The default page is the
+  Cluster List page. The Create page is for creating a new cluster. Selecting
+  the Configure link displays a short list of clusters in the navigation
+  table. Choosing a cluster name takes the user to the properties page for that
+  cluster (the cluster name link on the Cluster List page performs the same
+  action). 
   <img src="./clus1.png"/><br/>
   <b>Figure #7: Cluster List Page</b>
   <p/>
   <p/>
-After a cluster has been selected via the main cluster tab nav table or by clicking the link tthat is the name of a cluster on the cluster list page, the Cluster tab has a context associated with it, and another navigation table with the name of the selected cluster in the top title spot, is displayed beneath the main navigation table, which offers links to the 5 configuration categories for clusters. 
- NOTE: Until a specific cluster is selected, the cluster pages have no specific cluster context associated with them. Once a cluster has been selected, however, the links and options available on the lower cluster navigation table pertains to the selected cluster. As the upper cluster navigation table is always availabble, the cluster context can be changed at any time by selecting a different cluster from the list available under the cluster configure options in the main navigation table, or by returning to the top level Cluster List page and selecting a the link that is the name of the desired cluster (The cluster list page can be easily returned to in one of three ways: by clicking on the Cluster tab, selecting the 'Cluster List' link in the main navigation table, or selecting the 'Configure' link from the main navigation table).
-The configuration categories available in the lower cluster-specific navigation table are as follows:
+You can select a cluster via the main cluster tab navigation table or by
+  clicking the link that is the name of a cluster on the cluster list
+  page. Selecting a cluster associates that cluster's context with the Cluster
+  tab and causes a cluster-specific navigation table to be displayed below the
+  main navigation table (on the left side of the page). The cluster-specific
+  table identifies the cluster name at the top and presents links to the five
+  configuration categories for clusters.<br/>
+ NOTE: Until a specific cluster is selected, the cluster pages have no specific
+  cluster context associated with them. Once a cluster has been selected,
+  however, the links and options available on the lower cluster navigation table
+  pertain to the selected cluster. As the upper cluster navigation table is
+  always available, the cluster context can be changed at any time by selecting
+  a different cluster from the list available under the cluster configure
+  options in the main navigation table, or by returning to the top level Cluster
+  List page and selecting the link that is the name of the desired cluster.
+  (You can easily return to the cluster list page in one of three ways: by
+  clicking on the Cluster tab, selecting the Cluster List link in the main
+  navigation table, or selecting the Configure link from the main navigation
+  table.) 
+The configuration categories available in the lower cluster-specific navigation
+  table are as follows: 
 <ul><li>Nodes</li>
 <li>Services</li>
 <li>Resources</li>
 <li>Failover Domains</li>
 <li>Shared Fence Devices</li>
 </ul>
-Selecting any of these primary configuration links offers a similar set of options for each configuration category, by doing the following:
-<ul><li>A list is presented of the corresponding configurable cluster elements. For example, if nodes is selected, a list of all nodes in the cluster is displayed with general node tasks and quick links to node-related configuration pages. The following figure shows a typical node list. Note that this is a high level view of each node, and is useful for quickly assessing the health of the node and checking which cluster services are currently deployed on a node.</li>
-<li>A sub-menu is offered for each configuration category. Options in this submenu are:<ul><li>Create or Add</li><li>Configure; which also displays a list of the individual configuration elements which are direct links to the detailed configuration page</li></ul>
-In summary, after a cluster has been selected, the general cluster properties page is displayed, and a new nav table is rendered with links for each of the five cluster confiuration categories. Selecting a category link displays list of those elements with a high level diagnostic view and links to more detailed aspects of the elements, a link to create a new element, and a sub-menu list of direct links to the detailed configuration properties page for each element currently configured.
-  This 'drill-down' pattern, wherein a top level list of elements is displayed with links to properties pages for each element, paired with a way to create a new element, is repeated throughout the luci Cluster UI.
+Selecting any of these primary configuration links presents a similar set of
+  options for that configuration category: 
+<ul><li>A list is presented of the corresponding configurable cluster
+  elements. For example, if Nodes is selected, a list of all nodes in the
+  cluster is displayed with general node tasks and quick links to node-related
+  configuration pages. The following figure shows a typical node list. Note that
+  this is a high-level view of each node, and is useful for quickly assessing
+  the health of the node and checking which cluster services are currently
+  deployed on a node.</li> 
+<li>A sub-menu is offered for each configuration category. Options in this
+  submenu are:<ul><li>Create or Add</li><li>Configure, which also displays a
+  list of the individual configuration elements; each entry is a direct link to
+  the detailed configuration page</li></ul></li></ul> <br/>
+In summary, after a cluster has been selected, the general cluster properties
+  page is displayed, and a new navigation table is rendered with links for each of the
+  five cluster configuration categories. Selecting a category link displays a list
+  of those elements with a high level diagnostic view and links to more detailed
+  aspects of the elements, a link to create a new element, and a sub-menu list
+  of direct links to the detailed configuration properties page for each element
+  currently configured. 
+  This "drill-down" pattern, wherein a top level list of elements is displayed
+  with links to properties pages for each element, paired with a way to create a
+  new element, is repeated throughout the luci Cluster UI. 
   <img src="./clus2.png"/><br/>
   <b>Figure #8:  Cluster Properties Page - Note name of cluster at the top of the page, and in the Title section of the lower navigation table</b>
   <p/>
   <p/>
   <h4>Nodes</h4>
-  Selecting 'Nodes' from the lower Navigation Table displays a list of nodes in the current cluster, along with some helpful links to services running on that node, fencing for the node, and even a link that displays recent log activity for the node in a new browser window. A dropdown menu allows administrators of the cluster a way to have the node join or leave the cluster. The node can also be fenced, rebooted, or deleted through the options in the dropdown menu.
+  Selecting Nodes from the lower Navigation Table displays a list of nodes in
+  the current cluster, along with some helpful links to services running on that
+  node, fencing for the node, and even a link that displays recent log activity
+  for the node in a new browser window. A dropdown menu allows administrators of
+  the cluster to have the node join or leave the cluster. The node can
+  also be fenced, rebooted, or deleted through the options in the dropdown
+  menu. 
   <img src="./clus3.png"/>
   <b>Figure #9:  Node List Page</b><br/>
   <p/> 
   <p/> 
-The name of the node is a link to the detailed configuration page for that node, and the color of the font (green or red) reflects a course status check on the health of the node.
-<p/>
-When the Nodes link is chosen in the lower navigation table, the 'Add a Node' and Configure options become visible. The Configure option link has a list of the nodes beneath it, and selecting one of these links is a direct path to the detailed properties page for the node, in the same way that the node name link is on the node list page.
+The name of the node is a link to the detailed configuration page for that node,
+  and the color of the font (green or red) reflects a coarse status check on the
+  health of the node. 
+<p/> 
+When the Nodes link is chosen in the lower navigation table, the Add a Node
+  and Configure options become visible. The Configure option link has a list of
+  the nodes beneath it, and selecting one of these links is a direct path to the
+  detailed properties page for the node, in the same way that the node name link
+  is on the node list page. 
 <h4>Add a Node</h4>
 Below is a screenshot of the Add a Node page:
   <img src="./clus5.png"/><br/>
   <b>Figure #10:  Add a Node Page</b>
   <p/> 
   <p/> 
-  The Add a Node page is similar in look and functionality to the Add a System page available in the Homebase tab. The system hostname of IP Address is entered in the appropriate field along with the password for the system. Multiple nodes may be added at once. When the submit button is clicked, the following takes place:
+  The Add a Node page is similar in look and function to the Add a System
+  page available in the Homebase tab. The system hostname or IP address is
+  entered in the appropriate field along with the password for the
+  system. Multiple nodes may be added at once. When the submit button is
+  clicked, the following takes place: 
 <ul>
- <li>Contact is made with each future nodes ricci agent. Is this contact fails on any listed hostname, the operation is suspended and the user is offered the chance to re-enter the password.</li>
-<li>After authentication is made on all listed nodes, the proper cluster suite RPMs for that nodes architecture are pulled down and installed.</li>
+ <li>Contact is made with each future node's ricci agent. If this contact fails
+  on any listed hostname, the operation is suspended and the user is offered the
+  chance to re-enter the password.</li> 
+<li>After authentication is made on all listed nodes, the proper cluster suite
+  RPMs for that node's architecture are pulled down and installed.</li> 
 <li>After installation, an initial cluster.conf file is propagated to each node.</li>
-<li>Finally, each future node is rebooted. When the node comes back up, it should join the cluster without error.</li>
-</ul>
-Note: Until the node to be added has completed he installation and cluster join operation, any attempts to navigate to the configuration page for that node will result in a 'busy signal' graphic that informs the user of what modification is occurring and to try back later when the operation is complete.
+<li>Finally, each future node is rebooted. When the node comes back up, it
+  should join the cluster without error.</li> 
+</ul><br/>
+NOTE: Until the node to be added has completed the installation and cluster join
+  operation, any attempts to navigate to the configuration page for that node
+  will result in a "busy signal" graphic that informs the user of what
+  modification is occurring and to try back later when the operation is
+  complete. 
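The four steps above can be sketched as a simple orchestration loop. This is an illustrative sketch only, not luci's actual implementation; `authenticate`, `install_rpms`, `push_conf`, and `reboot` are hypothetical stand-ins for the ricci operations the manual describes.

```python
def add_nodes(nodes, authenticate, install_rpms, push_conf, reboot):
    """Sketch of the Add a Node flow: authenticate to every ricci agent
    first, and only then install, propagate cluster.conf, and reboot."""
    # Step 1: contact each future node's ricci agent; if any contact
    # fails, suspend the operation so the user can re-enter the password.
    for node in nodes:
        if not authenticate(node):
            return "auth-failed: %s" % node
    # Step 2: pull down and install the cluster suite RPMs for each
    # node's architecture.
    for node in nodes:
        install_rpms(node)
    # Step 3: propagate an initial cluster.conf file to each node.
    for node in nodes:
        push_conf(node)
    # Step 4: reboot each future node; on the way back up it should
    # join the cluster without error.
    for node in nodes:
        reboot(node)
    return "ok"
```

Note that authentication is checked for every listed node before any installation begins, matching the behavior described in the first bullet.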
 <p/>
 <h4>Node Configuration Page</h4>
-Selecting the name link in the node list page, or selecting a nodename in the list below the node Configure link in the lower navigation table takes the user directly to the Node Configuration page. Here is an image of a typical node configuration page:
+Selecting the name link in the node list page, or selecting a node name in the
+  list below the node Configure link in the lower navigation table takes the
+  user directly to the Node Configuration page. Here is an image of a typical
+  node configuration page: 
   <img src="./clus4.png"/>
   <b>Figure #11:  Node Configuration Page</b><br/>
   <p/> 
   <p/> 
 This page is divided into 5 sections.
 <ul>
-  <li>General Node Tasks - The first section on the node configuration page shows general node health and offers a link to view recent log activity on the node in a pop-up browser window, and also ofers a dropdown menu of some common tasks to perform on a node. These tasks are:
+  <li>General Node Tasks - The first section on the node configuration page
+  shows general node health and offers a link to view recent log activity on the
+  node in a pop-up browser window, and also offers a dropdown menu of some common
+  tasks to perform on a node. These tasks are: 
   <ul>
-    <li>Have node join/leave cluster - depending on he node status, one of these options is offered.</li>
+    <li>Have node join/leave cluster - depending on the node status, one of these
+  options is offered.</li> 
     <li>Fence Node - The node is fenced by the configured means.</li>
     <li>Reboot Node</li>
-    <li>Delete Node - when a node is deleted, it is made to leave the cluster, all cluster services are stopped on the node, its cluster.conf file is deleted, and a new cluster.conf file is propagated to the remaining nodes in the cluster with the deleted node removed from the configration. Note, please, that deleting a node does not remove the installed cluster packages from the node.</li>
+    <li>Delete Node - when a node is deleted, it is made to leave the cluster,
+  all cluster services are stopped on the node, its cluster.conf file is
+  deleted, and a new cluster.conf file is propagated to the remaining nodes in
+  the cluster with the deleted node removed from the configuration. Note
+  that deleting a node does not remove the installed cluster packages from the
+  node.</li> 
   </ul>
- </li>
- <li>The next section of the node configuration page is a table showing the status of cluster daemons. In the screenshot above, 4 cluster daemons are listed. This is for a RHEL4 cluster. In the RHEL 5 cluster suite, only two daemons are listed.
+ </li><br/>
+ <li>The next section of the node configuration page is a table showing the
+  status of cluster daemons. In the screenshot above, four cluster daemons are
+  listed. This is for a RHEL 4 cluster. In the RHEL 5 cluster suite, only two
+  daemons are listed. 
  <p/>
- Each daemon can be separately started or stopped, and its chkconfig status amended to allow the daemon to be enabled at system startup or not.</li>
-  <li>All services running on the node are listed along with their status in the 'Services on this Node' section. Links are offered to each services configuration page.</li>
-  <li>The next section of the node configuration page is a display of Failover Domain Membership. Links are offered to the configuration page for each failover domain that the node has membership in.</li>
-  <li>Finally, the node configuration pages' final section is for fence configuration. Two levels of fencing may be configured: A Main fencing method, and a Backup method. The cluster suite attempts to fence the node, if necessary, with the main fencing method first. If this fails, the backup method is employed.
-  <p/>
-  Each of the two fence levels or methods may employ multiple fence types within them; for example, when power switch fencing is used to fence a node with dual redundant power supplies.</li>
+ Each daemon can be separately started or stopped, and its chkconfig status
+  changed to control whether the daemon is enabled at system startup.</li> 
+  <li>All services running on the node are listed along with their status in the
+  "Services on this Node" section. Links are offered to each service's
+  configuration page.</li> <br/>
+  <li>The next section of the node configuration page is a display of Failover
+  Domain Membership. Links are offered to the configuration page for each
+  failover domain that the node has membership in.</li> <br/>
+  <li>The last section of the node configuration page covers fence
+  configuration. Two levels of fencing may be configured: A Main fencing method,
+  and a Backup method. The cluster suite attempts to fence the node, if
+  necessary, with the main fencing method first. If this fails, the backup
+  method is employed. 
+  <p/>
+  Each of the two fence levels or methods may employ multiple fence types within
+  them; for example, when power switch fencing is used to fence a node with dual
+  redundant power supplies.</li> 
   </ul>
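The daemon table described above can be modeled with a small helper. This is a minimal sketch, not luci's code; the exact RHEL 4 daemon names (`ccsd`, `cman`, `fenced`, `rgmanager`) and RHEL 5 names (`cman`, `rgmanager`) are assumptions consistent with the counts given in the text.

```python
# Assumed daemon sets per release; the manual only states the counts
# (four for RHEL 4, two for RHEL 5), so these names are illustrative.
CLUSTER_DAEMONS = {
    "rhel4": ["ccsd", "cman", "fenced", "rgmanager"],
    "rhel5": ["cman", "rgmanager"],
}

def daemon_commands(release, enable=True):
    """Return the service/chkconfig commands that start (or stop) each
    cluster daemon and toggle whether it is enabled at system startup."""
    action = "start" if enable else "stop"
    state = "on" if enable else "off"
    cmds = []
    for daemon in CLUSTER_DAEMONS[release]:
        cmds.append("service %s %s" % (daemon, action))
        cmds.append("chkconfig %s %s" % (daemon, state))
    return cmds
```

This mirrors what the daemon table exposes: per-daemon start/stop plus a chkconfig toggle, with RHEL 5 listing only half as many daemons as RHEL 4.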
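The Main/Backup fence semantics can be sketched as follows. This is an illustration of the ordering described above, not the cluster suite's actual fence daemon; the rule that every device within a method must succeed (as when both power ports of a dual-supply node must be switched off) is how the dual-power-supply example is understood here.

```python
def fence_node(levels):
    """Try each fence level (method) in order: Main first, then Backup.
    A level succeeds only if every fence device in it succeeds, e.g.
    both power ports of a node with dual redundant power supplies.
    Returns True as soon as one level succeeds."""
    for level in levels:
        if all(device() for device in level):
            return True
    return False

# Example: main method uses two power-switch ports (one fails), so the
# single-device backup method is employed instead.
main = [lambda: True, lambda: False]
backup = [lambda: True]
# fence_node([main, backup]) -> True, via the backup method
```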
   <p/>
   <p/>
 
-  <h2>Storage Tab</h2>
-This tab allows the user to monitor and configure storage on remote systems. Means for configuring disk partitions, logical volumes (clustered as well as single system use), and file system parameters and mount points. The storage tab is useful for setting up shared storage for clusters and offers GFS and GFS2 (depending on OS version) as a file system option. <p/>
-When a user selects the storage tab, the main storage page shows a list of systems available to the logged in user in a navigation table to the left. A small form allows the user to choose a storage unit size that the user wold generall prefer to work in. This choice is persisted for the user and can be changed at any time by returning to this page. In addition, the unit type can be changed on specific configuration forms throughout the storage UI - this general choice allows an admin to avoid difficult decimal representations of storage size if they know that most of their storage is measured, for example, in gigabytes, or terabytes, or what have you. <p/>
-A dropdown menu also allows the user to choose if they would rather have devices displayed by path or scsi ID.<p/>
- Finally, this main storage page lists systems that the user is authorized to acccess, but currently unable to administer due to a problem such as a system is unreachable via the network, or the system has been re-imaged and the luci server admin must re-authenticate with the ricci agent on the system. A reason for the trouble is displayed if it can be determined.<p/> 
-Only those systems that the user is privileged to administer is shown in the tabs main navigation table. If he user has no privileges on any systems, an appropriate message is displayed.
+  <h2>Storage Tab</h2> 
+This tab allows the user to monitor and configure storage
+on remote systems. It provides a means for configuring disk partitions, logical
+volumes (for clustered as well as single-system use), and file system parameters
+and mount points. The storage tab is useful for setting up shared storage for
+clusters and offers GFS and GFS2 (depending on OS version) as a file system
+option. <p/>
+
+When a user selects the storage tab, the main storage page shows a list of
+systems available to the logged-in user in a navigation table to the left. A
+small form allows the user to choose a storage unit size that the user would
+generally prefer to work in. This choice is persisted for the user and can be
+changed at any time by returning to this page. In addition, the unit type can be
+changed on specific configuration forms throughout the storage UI. This general
+choice allows an administrator to avoid difficult decimal representations of storage
+size (for example, if they know that most of their storage is measured in
+gigabytes, terabytes, or other more familiar representations). <p/> 
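The preferred-unit idea can be illustrated with a tiny formatter. This is a sketch only, not luci's code; binary (power-of-two) units are assumed here.

```python
# Assumed binary units; luci may use a different convention.
UNITS = {"MB": 2 ** 20, "GB": 2 ** 30, "TB": 2 ** 40}

def format_size(size_bytes, preferred_unit):
    """Render a size in the user's persisted preferred unit, so an admin
    whose storage is mostly gigabytes or terabytes never has to read
    awkward decimal representations in some other unit."""
    return "%.2f %s" % (float(size_bytes) / UNITS[preferred_unit],
                        preferred_unit)
```

For example, a 512 GB volume reads as "0.50 TB" for a terabyte-minded admin, or "512.00 GB" for a gigabyte-minded one; the general preference just picks which view is the default.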
+
+A dropdown menu also allows the user to choose if they would rather have devices
+displayed by path or SCSI ID.<p/> 
+ Finally, this main storage page lists systems that the user is authorized to
+ access but is currently unable to administer because of a problem, such as the
+ system being unreachable via the network, or the system having been re-imaged
+ so that the luci server admin must re-authenticate with the ricci agent on the
+ system. A reason for the trouble is displayed if it can be determined.<p/>  
+Only those systems that the user is privileged to administer are shown in the
+tab's main navigation table. If the user has no permissions on any systems, an
+appropriate message is displayed. 
  <h4>General System Page</h4>
- After a system is selected to administer, a general properties page is displayed for the system. This page view is divided into three sections:
+ After a system is selected to administer, a general properties page is
+ displayed for the system. This page view is divided into three sections: 
   <ul>
    <li>Hard Drives</li>
    <li>Partitions</li>
    <li>Volume Groups</li>
   </ul>
-  Each of these sections is set up as an expandable tree, with direct links provided to property sheets for specific devices, partitions, etc.
+  Each of these sections is set up as an expandable tree, with direct links
+  provided to property sheets for specific devices, partitions, etc. 
  </body>
 </html>
 