[Cluster-devel] conga/luci/docs user_manual.html

jparsons at sourceware.org jparsons at sourceware.org
Mon Oct 9 05:47:02 UTC 2006


CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	jparsons at sourceware.org	2006-10-09 05:47:02

Modified files:
	luci/docs      : user_manual.html 

Log message:
	This is about done now - maybe a couple of storage screen shots would help.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/docs/user_manual.html.diff?cvsroot=cluster&r1=1.4&r2=1.5

--- conga/luci/docs/user_manual.html	2006/09/26 13:35:57	1.4
+++ conga/luci/docs/user_manual.html	2006/10/09 05:47:02	1.5
@@ -8,7 +8,7 @@
  Conga is an agent/server architecture for remote administration of systems. The agent component is called 'ricci', and the server is called luci. One luci server can communicate with many ricci agents installed on systems.
   When a system is added to a luci server to be administered, authentication is done once. No authentication is necessary from then on (unless the certificate used is revoked by a CA, but in fact, CA integration is not complete in version #1 of conga). Through the UI provided by luci, users can configure and administer storage and cluster behavior on remote systems. Communication between luci and ricci is done via XML.
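  <p/>
  The exact messages exchanged are not documented in this manual. Purely as an illustration of the request/response pattern - the element and attribute names below are hypothetical and are not the actual ricci wire format - an exchange has the general shape of an XML request from luci naming a module and a function, answered by an XML response from the ricci agent carrying a status and any returned data:
  <pre>
  &lt;!-- hypothetical request sent from luci to a ricci agent --&gt;
  &lt;request module="storage" function="list_volume_groups"/&gt;

  &lt;!-- hypothetical response returned by the ricci agent --&gt;
  &lt;response module="storage" function="list_volume_groups" success="true"&gt;
    &lt;volume_group name="VolGroup00" size="..."/&gt;
  &lt;/response&gt;
  </pre>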
    <h3>Luci Description</h3>
-    As stated above, systems to be administered are 'added' to a luci server (in the documentation that follows, the term 'registered' is also used to mean that a system has been added to a luci server to administered remotely). This is done by storing the hostname (FQDN) or IP address of the system in the luci database. When a luci server is first installed, the database is empty. It is possible, however, to import part or all of a systems database from an existing luci server when deploying a new luci server. This capability provides a means for replication of a luci server instance, as well as an easier testing path.
+    As stated above, systems to be administered are 'added' to a luci server (in the documentation that follows, the term 'registered' is also used to mean that a system has been added to a luci server to be administered remotely). This is done by storing the hostname (FQDN) or IP address of the system in the luci database. When a luci server is first installed, the database is empty. It is possible, however, to import part or all of the systems database from an existing luci server when deploying a new luci server. This capability provides a means for replication of a luci server instance, as well as an easier upgrade and testing path.
   <p/>
   Every luci server instance has one user at initial installation time. This user is called 'admin'. Only the admin user may add systems to a luci server. The admin user can also create additional user accounts and determine which users are allowed to access which systems in the luci server database. It is possible to import users as a batch operation in a new luci server, just as it is possible to import systems.
     <h4>Installation of Luci</h4> 
@@ -27,7 +27,7 @@
   <img src="./ss_login1.png"/>
   <p/>
  Enter admin as the user name, enter the admin password that has been set up in the appropriate field, and then click 'log in'.
-    <h4>Organization</h4>
+    <h4>UI Organization</h4>
     luci is set up with three tabs right now. They are:
     <ul><li>Homebase: This is where admin tools for adding and deleting systems or users are located. Only admin is allowed access to this tab.</li>
         <li>Cluster: If any clusters are set up with the luci server, they will show up in a list in this tab. If a user other than admin navigates to the cluster tab, only those clusters that the user has permission to manage show up in the cluster list. The cluster tab provides a means for creating and configuring clusters.</li>
@@ -38,46 +38,91 @@
   <img src="./ss_homebase1.png"/>
   <p/>
   With no systems registered with a luci server, the homebase page provides 3 initial utilities to the admin:
-  <ul><li>Add a system: Adding a single system to luci in this first release makes the system available for remote storage administration. In addition to storage administration, conga also provides remote package retrieval and installation, chkconfig functionality, full remote cluster administration, and module support to filter and retrieve log entries. The storage and cluster UIs use some of this broad functionality, but at this time UI has not been built for all that conga will do remotely. <p/>
+  <ul><li>Add a System</li>
+      <li>Add an Existing Cluster</li>
+      <li>Add a User</li>
+  </ul>
+  After systems have been added to a luci server, the following link becomes available in the navigation table:
+  <ul><li>Manage Systems</li>
+  </ul>
+  After users have been added to a luci server, the following links become available in the navigation table:
+  <ul>
+      <li>User Permissions</li>
+      <li>Delete User</li>
+  </ul>
+
+  <h4>Add a System:</h4> Adding a single system to luci in this first release makes the system available for remote storage administration. In addition to storage administration, conga also provides remote package retrieval and installation, chkconfig functionality, full remote cluster administration, and module support to filter and retrieve log entries. The storage and cluster UIs use some of this broad functionality, but at this time a UI has not been built for everything that conga can do remotely. <p/>
   To add a system, click on the 'Add a System' link in the left hand nav table. This will load the following page:
   <img src="./ss_homebase2.png"/>
  The fully qualified domain name or IP address of the system is entered in the System Hostname field. The root password for the system is entered in the adjacent field. As a convenience for adding multiple systems at once, an 'Add Another Entry' button is provided. When this button is clicked and at least one additional entry row has been provided, a checkbox is also made available that can be selected if all systems specified for addition to the luci server share the same password.
   <img src="./ss_homebase3.png"/>
   <p/>
-  If the System Hostname is left blank for any row, it is disregarded when the list of systems is submitted for addition. If systems in the list of rows do NOT share the same password (and the checkbox is, of course, left unchecked) and one ior more passwords are incorrect, an error message is generated for each system that has an incorrect password. Those systems listed with correct passwords are added to the luci server. Inn addition to incorrect password problems, an error message is also displayed if luci is unable to connect to the ricci agent on a system. Finally, is a system is entered on the form for addition and it is ALREADY being managed by the luci server, it is not added again - but the admin is informed via error message.</li>
-  <li>Add a Cluster: This page looks much like the Add a System page, only one system may be listed. Any node in the cluster may bbe used for this entry.  Luci will contact tthe specified system and attempt to authenticate with the password provided. If successful, the complete list of cluster nodes will be returned, and a table will be populated with the node names and an adjacent field for a password for each node. The initial node that was entered appears in tthe list with its password field marked as 'authennticated'. There is a convenience checkbox if all nodes share the same password. NOTE: At this point, no cluster nodes have been added to luci - not even the initial node used to retrieve the cluster node list that successfully autthenticated. The cluster and subsequent nodes are only added after the entire list has been submitted with tthe submit button, and all nodes authenticate.  <p/>
+  If the System Hostname is left blank for any row, it is disregarded when the list of systems is submitted for addition. If systems in the list of rows do NOT share the same password (and the checkbox is, of course, left unchecked) and one or more passwords are incorrect, an error message is generated for each system that has an incorrect password. Those systems listed with correct passwords are added to the luci server. In addition to incorrect password problems, an error message is also displayed if luci is unable to connect to the ricci agent on a system. Finally, if a system is entered on the form for addition and it is ALREADY being managed by the luci server, it is not added again - but the admin is informed via error message.<p/>
+  <h4>Add an Existing Cluster:</h4> This page looks much like the Add a System page, but only one system may be listed. Any node in the cluster may be used for this entry.  Luci will contact the specified system and attempt to authenticate with the password provided. If successful, the complete list of cluster nodes will be returned, and a table will be populated with the node names and an adjacent field for a password for each node. The initial node that was entered appears in the list with its password field marked as 'authenticated'. There is a convenience checkbox if all nodes share the same password. NOTE: At this point, no cluster nodes have been added to luci - not even the initial node used to retrieve the cluster node list that successfully authenticated. The cluster and subsequent nodes are only added after the entire list has been submitted with the submit button, and all nodes authenticate.  <p/>
 If any nodes fail to authenticate, they appear in the list in red font, so that the password can be corrected and the node list submitted again. Luci has a strict policy about adding a cluster to be managed: A cluster cannot be added unless ALL nodes can be reached and authenticated.
-  <p/>When a cluster is added to a luci server, all nodes are also added as general systems so that storage may be managed on them. If this is not desired, the individual systems may be removed fromm luci, while remote cluster management capability is maintained.</li>
-  <li>Add a User: Here the admin may add additional user accounts. The user name is entered along with an initial password.
+  <p/>When a cluster is added to a luci server, all nodes are also added as general systems so that storage may be managed on them. If this is not desired, the individual systems may be removed from luci, while remote cluster management capability is maintained.<p/>
+  Note: If an admin desires to create a new cluster, this capability is available on the Cluster tab. This task link is only for adding and managing clusters that already exist.<p/>
+  <h4>Add a User: </h4>Here the admin may add additional user accounts. The user name is entered along with an initial password.
   <img src="./ss_homebase4.png"/>
-  </li>
-  </ul>
-  After systems have been added to a luci server, an additional Manage Systems link appears in the navigation table. The Manage Systems page provides a way to delete systems if desired.
-  <br/>
+  <p/>
+  As stated above, after systems have been added to a luci server, an additional Manage Systems link appears in the navigation table. The Manage Systems page provides a way to delete systems if desired.
+  <p/>
  When an admin adds a new user to a luci server, two additional links appear in the Navigation Table: a Delete User link and a User Permissions link. The Delete User link is self-explanatory; its page lists all users other than the admin in a dropdown menu. Selecting a user name and then clicking the 'Delete This User' button removes that user account from the luci server.<br/>
   The User Permissions page is where an admin grants privileges to user accounts. A dropdown menu lists all current users, followed by a list of all systems registered with the luci server. By selecting a user from the dropdown, the context is set for the page, and then those systems that the admin wishes to allow the user to administer are checked. Finally, the 'Update Permissions' button is clicked to persist the privileges. By default, when a new user is created, they have no privileges on any system.
   <img src="./ss_homebase5.png"/>
   <p/>
   
   <h2>Cluster Tab</h2>
-  When the cluster tab is selected, luci first checks the identity of the user and compiles a list of cluster that the current user is privileged to administer.
+  When the cluster tab is selected, luci first checks the identity of the user and 
+compiles a list of clusters that the current user is privileged to administer.
 If the current user is not privileged to access any of the clusters registered on the luci server, they are informed accordingly. If the current user is the admin, then all clusters are accessible.
-  <br/>
-  Selecting the cluster tab displays a list of the clusters accessible by the current user. Each cluster is identified by name, and the name is a link to the properties page for that specific cluster. In addition, the health of the cluster can be quickly assessed - green indicates health, and red indicates a problem.
+  <p/>
+  After selecting the Cluster tab, a page is displayed that offers a summary 
+list of all registered clusters on the luci server that are accessible by the current user. Each cluster is identified by name, and the name is a link to the properties page for that specific cluster. In addition, the health of the cluster can be quickly assessed - green indicates good health, and red indicates a problem.
+ <p/>
 The nodes of the cluster are also listed, and their health is indicated by font color: green means the node is healthy and part of the cluster; red means it is not part of the cluster; and gray means the node is not responding and its state is unknown.
   <br/>
  The cluster list page offers some additional summary information about each cluster. Whether or not the cluster is quorate is specified, as are the total cluster votes. A dropdown menu allows a cluster to be started, stopped, or restarted. Finally, services for the cluster are listed as links, and again, their health is indicated by font color.
-  <br/>
+  <p/>
  On the left-hand side of every cluster tab page is a navigation table with three links: Cluster List, Create, and Configure. The default page is the Cluster List page. The Create page is for creating a new cluster. Selecting the Configure link displays a short list of clusters in the navigation table. Choosing a cluster name takes the user to the properties page for that cluster (the cluster name link on the Cluster List page performs the same action).
   <img src="./clus1.png"/>
- NOTE: Until a specific cluster is selected, the cluster pages have no context associated with them. Once a cluster has been selected, however, an additional navigation table is displayed with links to nodes, services, fence devices, and failover domains.
+After a cluster has been selected, either via the main cluster tab navigation table or by clicking the link that is the name of a cluster on the cluster list page, the Cluster tab has a context associated with it. A second navigation table, with the name of the selected cluster in its title, is displayed beneath the main navigation table; it offers links to the five configuration categories for clusters. 
 NOTE: Until a specific cluster is selected, the cluster pages have no specific cluster context associated with them. Once a cluster has been selected, however, the links and options available on the lower cluster navigation table pertain to the selected cluster. As the upper cluster navigation table is always available, the cluster context can be changed at any time by selecting a different cluster from the list available under the cluster configure options in the main navigation table, or by returning to the top-level Cluster List page and selecting the link that is the name of the desired cluster. (The cluster list page can be easily returned to in one of three ways: by clicking on the Cluster tab, by selecting the 'Cluster List' link in the main navigation table, or by selecting the 'Configure' link from the main navigation table.)
+The configuration categories available in the lower cluster-specific navigation table are as follows:
+<ul><li>Nodes</li>
+<li>Services</li>
+<li>Resources</li>
+<li>Failover Domains</li>
+<li>Shared Fence Devices</li>
+</ul>
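These five categories correspond to the major sections of the cluster configuration file (cluster.conf) used by the cluster software on the nodes. As a minimal sketch only - the names and values below are placeholders, and the exact markup depends on the cluster software version - a two-node cluster with one shared fence device, one failover domain, and one service has roughly the following shape:
<pre>
&lt;?xml version="1.0"?&gt;
&lt;cluster name="example_cluster" config_version="1"&gt;
  &lt;clusternodes&gt;
    &lt;clusternode name="node1.example.com" nodeid="1"/&gt;
    &lt;clusternode name="node2.example.com" nodeid="2"/&gt;
  &lt;/clusternodes&gt;
  &lt;fencedevices&gt;
    &lt;fencedevice name="apc_switch" agent="fence_apc" ipaddr="..." login="..." passwd="..."/&gt;
  &lt;/fencedevices&gt;
  &lt;rm&gt;
    &lt;failoverdomains&gt;
      &lt;failoverdomain name="example_domain" ordered="0" restricted="0"/&gt;
    &lt;/failoverdomains&gt;
    &lt;resources/&gt;
    &lt;service name="example_service" autostart="1" domain="example_domain"/&gt;
  &lt;/rm&gt;
&lt;/cluster&gt;
</pre>
Nodes map to the clusternodes section, Shared Fence Devices to fencedevices, and Services, Resources, and Failover Domains to the rm (resource manager) section.
<p/>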
Selecting any of these primary configuration links presents a similar set of options for each configuration category:
+<ul><li>A list is presented of the corresponding configurable cluster elements. For example, if nodes is selected, a list of all nodes in the cluster is displayed with general node tasks and quick links to node-related configuration pages. The following figure shows a typical node list. Note that this is a high level view of each node, and is useful for quickly assessing the health of the node and checking which cluster services are currently deployed on a node.</li>
<li>A sub-menu is offered for each configuration category. Options in this submenu are:<ul><li>Create or Add</li><li>Configure, which also displays a list of the individual configuration elements that are direct links to the detailed configuration page</li></ul></li></ul>
In summary, after a cluster has been selected, the general cluster properties page is displayed, and a new nav table is rendered with links for each of the five cluster configuration categories. Selecting a category link displays a list of those elements with a high-level diagnostic view and links to more detailed aspects of the elements, a link to create a new element, and a sub-menu list of direct links to the detailed configuration properties page for each element currently configured.
+  This 'drill-down' pattern, wherein a top level list of elements is displayed with links to properties pages for each element, paired with a way to create a new element, is repeated throughout the luci Cluster UI.
   <img src="./clus2.png"/>
  <b>Figure   Cluster Properties Page - Note the cluster name at the top of the page and in the title section of the lower navigation table</b>
   <p/>
-  <h4>Node List</h4>
+  <p/>
+  <h4>Nodes</h4>
  Selecting 'Nodes' from the lower Navigation Table displays a list of nodes in the current cluster, along with some helpful links to services running on each node, fencing for the node, and even a link that displays recent log activity for the node in a new browser window. A dropdown menu gives administrators of the cluster a way to have a node join or leave the cluster. The node can also be fenced, rebooted, or deleted through the options in the dropdown menu.
   <img src="./clus3.png"/>
-  
+  <b>Figure   Node List Page</b>
+  <p/> 
   <h2>Storage Tab</h2>
This tab allows the user to monitor and configure storage on remote systems. It provides means for configuring disk partitions, logical volumes (for clustered as well as single-system use), and file system parameters and mount points. The storage tab is useful for setting up shared storage for clusters and offers GFS and GFS2 (depending on OS version) as a file system option. <p/>
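For example - as an illustrative sketch only, with placeholder names and values - a GFS file system created through the storage UI for shared cluster use can later be referenced from the cluster configuration as a clusterfs resource:
<pre>
&lt;clusterfs name="shared_data" fstype="gfs" device="/dev/example_vg/shared_lv"
           mountpoint="/mnt/shared" force_unmount="0"/&gt;
</pre>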
When a user selects the storage tab, the main storage page shows a list of systems available to the logged-in user in a navigation table to the left. A small form allows the user to choose the storage unit size that the user would generally prefer to work in. This choice is persisted for the user and can be changed at any time by returning to this page. In addition, the unit type can be changed on specific configuration forms throughout the storage UI - this general choice allows an admin to avoid difficult decimal representations of storage size if they know that most of their storage is measured in, for example, gigabytes or terabytes. <p/>
A dropdown menu also allows the user to choose whether they would rather have devices displayed by path or by SCSI ID.<p/>
 Finally, this main storage page lists systems that the user is authorized to access but is currently unable to administer, due to a problem such as the system being unreachable via the network, or the system having been re-imaged so that the luci server admin must re-authenticate with the ricci agent on the system. A reason for the trouble is displayed if it can be determined.<p/> 
Only those systems that the user is privileged to administer are shown in the tab's main navigation table. If the user has no privileges on any systems, an appropriate message is displayed.
+ <h4>General System Page</h4>
+ After a system is selected to administer, a general properties page is displayed for the system. This page view is divided into three sections:
+  <ul>
+   <li>Hard Drives</li>
+   <li>Partitions</li>
+   <li>Volume Groups</li>
+  </ul>
+  Each of these sections is set up as an expandable tree, with direct links provided to property sheets for specific devices, partitions, etc.
  </body>
 </html>
 



