
[Cluster-devel] conga/luci cluster/clu_portlet_fetcher cluster ...



CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	EXPERIMENTAL
Changes by:	rmccabe sourceware org	2007-06-07 06:41:06

Modified files:
	luci/cluster   : clu_portlet_fetcher form-macros index_html 
	                 resource-form-macros resource_form_handlers.js 
	luci/homebase  : form-macros portlet_homebase 
	luci/site/luci/Extensions: LuciClusterInfo.py LuciDB.py 
	                           LuciZope.py RicciQueries.py 
	                           cluster_adapters.py 
	                           conga_constants.py 
	luci/site/luci/Extensions/ClusterModel: ModelBuilder.py 

Log message:
	Various fixes and cleanups

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/clu_portlet_fetcher.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=1.2.8.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.198.2.3&r2=1.198.2.4
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/index_html.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.32&r2=1.32.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/resource-form-macros.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.37.2.2&r2=1.37.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/resource_form_handlers.js.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.34&r2=1.34.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/homebase/form-macros.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.56.2.1&r2=1.56.2.2
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/homebase/portlet_homebase.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.8&r2=1.8.4.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LuciClusterInfo.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1.2.7&r2=1.1.2.8
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LuciDB.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1.2.14&r2=1.1.2.15
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LuciZope.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1.2.8&r2=1.1.2.9
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/RicciQueries.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1.2.6&r2=1.1.2.7
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.255.2.12&r2=1.255.2.13
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.39.2.7&r2=1.39.2.8
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/ModelBuilder.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1.2.6&r2=1.1.2.7

--- conga/luci/cluster/clu_portlet_fetcher	2006/09/27 22:24:11	1.2
+++ conga/luci/cluster/clu_portlet_fetcher	2007/06/07 06:41:04	1.2.8.1
@@ -4,39 +4,26 @@
 <body>
 
 <metal:leftcolumn define-macro="left_column">
-<!-- unchecked_clusystems are all clusters...the check_clusters call filters list through user permissions -->
-<span tal:define="global unchecked_clusystems root/luci/systems/cluster/objectItems"/>
-<span tal:define="global clusystems python:here.check_clusters(request,unchecked_clusystems)"/>
-<div tal:omit-tag="" metal:use-macro="here/portlet_cluconfig/macros/cluchooseportlet" />
-<span tal:omit-tag="" tal:define="global hasclustername request/clustername |nothing"/>
-<span tal:omit-tag="" tal:condition="hasclustername">
-<div tal:omit-tag="" metal:use-macro="here/portlet_cluconfig/macros/cluconfigportlet" />
-</span>
+	<tal:comment tal:replace="nothing">
+		unchecked_clusystems are all clusters...
+		the check_clusters call filters list through user permissions
+	</tal:comment>
+
+	<tal:block
+		tal:define="unchecked_clusystems root/luci/systems/cluster/objectItems">
+		<tal:block
+			tal:define="global clusystems python:here.check_clusters(unchecked_clusystems)" />
+	</tal:block>
+
+	<tal:block
+		metal:use-macro="here/portlet_cluconfig/macros/cluchooseportlet" />
+	<tal:block
+		tal:define="global hasclustername request/clustername | nothing" />
+	<tal:block tal:condition="hasclustername">
+		<tal:block
+			metal:use-macro="here/portlet_cluconfig/macros/cluconfigportlet" />
+	</tal:block>
 </metal:leftcolumn>
 
-<!--
-
-<metal:rightcolumn define-macro="right_column"
-   tal:define="Iterator python:modules['Products.CMFPlone'].IndexIterator;
-               tabindex python:Iterator(pos=20000);"
-   tal:condition="sr">
-
-    <metal:block tal:repeat="slot sr">
-        <tal:dontcrash tal:on-error="python:context.plone_log('Error %s on %s while rendering portlet %s'%(error.type, error.value, slot[0]))"
-                       tal:define="pathexpr python:slot[0];
-                                   usemacro python:slot[1];">
-
-        <tal:block tal:condition="usemacro">
-            <metal:block metal:use-macro="python:path(pathexpr)" />
-        </tal:block>
-
-        <span tal:condition="not: usemacro"
-              tal:replace="structure python:path(pathexpr)" />
-
-        </tal:dontcrash>
-    </metal:block>
-</metal:rightcolumn>
--->
-
 </body>
 </html>
--- conga/luci/cluster/form-macros	2007/05/30 05:54:01	1.198.2.3
+++ conga/luci/cluster/form-macros	2007/06/07 06:41:04	1.198.2.4
@@ -4413,6 +4413,12 @@
 	<tal:block tal:condition="python: type == 'tomcat-5'">
 		<div metal:use-macro="here/resource-form-macros/macros/tomcat-5_macro" />
 	</tal:block>
+	<tal:block tal:condition="python: type == 'SAPInstance'">
+		<div metal:use-macro="here/resource-form-macros/macros/SAPInstance_macro" />
+	</tal:block>
+	<tal:block tal:condition="python: type == 'SAPDatabase'">
+		<div metal:use-macro="here/resource-form-macros/macros/SAPDatabase_macro" />
+	</tal:block>
 </div>
 
 <div metal:define-macro="service-config-head-macro" tal:omit-tag="">
@@ -4430,10 +4436,10 @@
 		global global_resources python: here.getResourcesInfo(modelb, request);
 		global sstat python: here.getClusterStatus(request, ricci_agent);
 		global sinfo python: here.getServiceInfo(sstat, modelb, request);
-		global running sinfo/running | nothing;" />
+		global running sinfo/running | nothing" />
 
 	<tal:block tal:replace="structure python: '<script type='+chr(0x22)+'text/javascript'+chr(0x22)+'>'" />
-		var uuid_list = <tal:block tal:replace="sinfo/uuid_list" />;
+		var uuid_list = <tal:block tal:replace="sinfo/uuid_list | nothing" />;
 		var global_resources = <tal:block tal:replace="python: map(lambda x: str(x['name']), global_resources) or 'null'" />;
 		var active_resources = <tal:block tal:replace="python: map(lambda x: str(x['name']), sinfo['resource_list']) or 'null'" />;
 		var resource_names = <tal:block tal:replace="python: (map(lambda x: str(x['name']), global_resources) + map(lambda x: str(x['name']), sinfo['resource_list'])) or 'null'" />;
--- conga/luci/cluster/index_html	2007/03/01 22:19:10	1.32
+++ conga/luci/cluster/index_html	2007/06/07 06:41:05	1.32.2.1
@@ -26,7 +26,7 @@
       <metal:headslot define-slot="head_slot" />
 	    <tal:block tal:define="
 			global sinfo nothing;
-			global hascluster request/clustername |nothing;
+			global hascluster request/clustername | nothing;
 			global isBusy python: False;
 			global firsttime nothing;
 			global ri_agent nothing;
@@ -39,7 +39,7 @@
 				global isVirtualized resmap/isVirtualized | nothing;
 				global os_version resmap/os | nothing;
 				global isBusy python:here.isClusterBusy(request);
-				global firsttime request/busyfirst |nothing" />
+				global firsttime request/busyfirst | nothing" />
 
 			<tal:block tal:condition="firsttime">
 				<tal:block tal:define="global busywaiting python:True" />
@@ -47,7 +47,7 @@
 					tal:attributes="content isBusy/refreshurl | string:." />
 			</tal:block>
 
-			<tal:block tal:define="global busy isBusy/busy |nothing"/>
+			<tal:block tal:define="global busy isBusy/busy | nothing" />
 
 			<tal:block tal:condition="busy">
 				<tal:block tal:define="global busywaiting python:True" />
--- conga/luci/cluster/resource-form-macros	2007/05/30 22:39:28	1.37.2.2
+++ conga/luci/cluster/resource-form-macros	2007/06/07 06:41:05	1.37.2.3
@@ -145,6 +145,8 @@
 		<div metal:use-macro="here/resource-form-macros/macros/openldap_macro" />
 		<div metal:use-macro="here/resource-form-macros/macros/postgres-8_macro" />
 		<div metal:use-macro="here/resource-form-macros/macros/tomcat-5_macro" />
+		<div metal:use-macro="here/resource-form-macros/macros/SAPInstance_macro" />
+		<div metal:use-macro="here/resource-form-macros/macros/SAPDatabase_macro" />
 	</div>
 </div>
 
@@ -213,6 +215,8 @@
 		<div metal:use-macro="here/resource-form-macros/macros/openldap_macro" />
 		<div metal:use-macro="here/resource-form-macros/macros/postgres-8_macro" />
 		<div metal:use-macro="here/resource-form-macros/macros/tomcat-5_macro" />
+		<div metal:use-macro="here/resource-form-macros/macros/SAPInstance_macro" />
+		<div metal:use-macro="here/resource-form-macros/macros/SAPDatabase_macro" />
 	</div>
 </div>
 
@@ -1170,6 +1174,14 @@
 	</form>
 </div>
 
+<div class="rescfg" name="SAPInstance"
+	tal:attributes="id res/name | nothing" metal:define-macro="SAPInstance_macro">
+</div>
+
+<div class="rescfg" name="SAPDatabase"
+	tal:attributes="id res/name | nothing" metal:define-macro="SAPDatabase_macro">
+</div>
+
 <div class="rescfg" name="MYSQL"
 	tal:attributes="id res/name | nothing" metal:define-macro="mysql_macro">
 	<p class="reshdr">MySQL Configuration</p>
--- conga/luci/cluster/resource_form_handlers.js	2007/03/15 22:08:42	1.34
+++ conga/luci/cluster/resource_form_handlers.js	2007/06/07 06:41:05	1.34.2.1
@@ -233,6 +233,16 @@
 	return (errors);
 }
 
+function validate_sapinstance(form) {
+	var errors = new Array();
+	return (errors);
+}
+
+function validate_sapdatabase(form) {
+	var errors = new Array();
+	return (errors);
+}
+
 var required_children = new Array();
 required_children['nfsx'] = [ 'nfsc' ];
 
@@ -256,6 +266,8 @@
 form_validators['openldap'] = validate_openldap;
 form_validators['mysql'] = validate_mysql;
 form_validators['lvm'] = validate_lvm;
+form_validators['SAPInstance'] = validate_sapinstance;
+form_validators['SAPDatabase'] = validate_sapdatabase;
 
 function check_form(form) {
 	var valfn = form_validators[form.type.value];
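The JavaScript above wires the new SAP validators into a dispatch table keyed by resource type, and `check_form` looks up the handler via the form's `type` field. The same idea in a minimal Python sketch (hypothetical names for illustration; the real validators live in resource_form_handlers.js):

```python
# Dispatch-table validation: map a resource-type string to its validator
# and fall back gracefully when no validator is registered.

def validate_sapinstance(form):
    # Stub validator mirroring the empty SAP validators in the patch:
    # return a list of error messages (empty means the form is valid).
    return []

def validate_sapdatabase(form):
    return []

form_validators = {
    'SAPInstance': validate_sapinstance,
    'SAPDatabase': validate_sapdatabase,
}

def check_form(form):
    valfn = form_validators.get(form['type'])
    if valfn is None:
        return []  # unknown type: nothing to validate
    return valfn(form)
```

Adding a new resource type then means registering one more entry in the table rather than growing an if/else chain.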
--- conga/luci/homebase/form-macros	2007/05/18 02:36:59	1.56.2.1
+++ conga/luci/homebase/form-macros	2007/06/07 06:41:05	1.56.2.2
@@ -67,9 +67,6 @@
 		<input name="pagetype" type="hidden"
 			tal:attributes="value request/form/pagetype | request/pagetype | nothing" />
 
-		<input name="absoluteURL" type="hidden"
-			tal:attributes="value python:data['children'][data['curIndex']]['absolute_url']" />
-
 		<div class="hbSubmit" tal:condition="python:userList" id="hbSubmit">
 			<input name="Submit" type="button" value="Delete This User"
 				onClick="validateForm(this.form)" />
@@ -137,9 +134,6 @@
 		<input name="pagetype" type="hidden"
 			tal:attributes="value request/form/pagetype | request/pagetype | nothing" />
 
-		<input name="absoluteURL" type="hidden"
-			tal:attributes="value python:data['children'][data['curIndex']]['absolute_url']" />
-
 		<div class="hbSubmit" id="hbSubmit">
 			<input name="Submit" type="button" value="Submit"
 				onClick="validateForm(this.form)" />
@@ -168,12 +162,12 @@
 
 	<script type="text/javascript" src="/luci/homebase/validate_perm.js">
 	</script>
+
 	<script type="text/javascript">
 		set_page_title('Luci — homebase — Set Luci user permissions');
 	</script>
 
-	<span
-		tal:omit-tag=""
+	<tal:block
 		tal:define="global perms python:here.getUserPerms();
 					global systems python:here.getSystems();
 					global num_clusters python:-1;
@@ -188,25 +182,22 @@
 
 		<h2 class="homebase">User Permissions</h2>
 
-		<input name="absoluteURL" type="hidden"
-			tal:attributes="value python:data['children'][data['curIndex']]['absolute_url']" />
-		<input name="baseURL" type="hidden"
-			tal:attributes="value python:data['children'][data['curIndex']]['base_url']" />
-
 		<input name="pagetype" type="hidden"
 			tal:attributes="value request/form/pagetype | request/pagetype | nothing" />
 
-		<span tal:condition="python:perms" tal:content="string:Select a User" /><br/>
-
-		<select tal:omit-tag="python: not perms" class="homebase" name="userList" onChange="document.location = this.form.baseURL.value + '&user=' + this.form.userList.options[this.form.userList.selectedIndex].text">
-			<tal:block tal:define="userlist python: perms.keys().sort()">
-			<tal:block tal:repeat="user userlist">
-				<option class="homebase"
-					tal:content="python:user"
-					tal:attributes="value python:user;
-									selected python:user == curUser"
-				/>
-			</tal:block>
+		<tal:block tal:condition="python:perms">
+		<span tal:content="string:Select a User" /><br/>
+		
+		<select class="homebase" name="userList"
+			onChange="document.location = '/luci/homebase/?pagetype=3&user=' + this.form.userList.options[this.form.userList.selectedIndex].text">
+			<tal:block tal:define="userlist python: perms">
+				<tal:block tal:repeat="user userlist">
+					<option class="homebase"
+						tal:content="python:user"
+						tal:attributes="value python:user;
+										selected python:user == curUser"
+					/>
+				</tal:block>
 			</tal:block>
 		</select>
 
@@ -231,7 +222,7 @@
 			</div>
 		</tal:block>
 
-		<div tal:omit-tag="" tal:condition="python: systems[1] and len(systems[1]) > 0">
+		<tal:block tal:condition="python: systems[1] and len(systems[1]) > 0">
 			<h3 class="homebase">Storage Systems</h3>
 
 			<div class="hbcheckdiv" tal:repeat="s python: systems[1]">
@@ -245,7 +236,7 @@
 				/>
 				<span class="hbText" tal:omit-tag="" tal:content="python:s"/>
 			</div>
-		</div>
+		</tal:block>
 
 		<input type="hidden" id="numStorage"
 			tal:attributes="value python: num_systems + 1" />
@@ -254,9 +245,12 @@
 			tal:attributes="value python: num_clusters + 1" />
 
 		<div class="hbSubmit" id="hbSubmit">
-			<input type="button" name="Update Permissions" value="Update Permissions"
+			<input type="button" name="Update Permissions"
+				value="Update Permissions"
 				onClick="validateForm(this.form)" />
 		</div>
+
+		</tal:block>
 	</form>
 
 	<div tal:condition="python: blankForm">
@@ -593,9 +587,6 @@
 		<input name="pagetype" type="hidden"
 			tal:attributes="value request/form/pagetype | request/pagetype | nothing" />
 
-		<input name="absoluteURL" type="hidden"
-			tal:attributes="value python:data['children'][data['curIndex']]['absolute_url']" />
-
 		<table id="systemsTable" class="systemsTable" border="0" cellspacing="0"
 			tal:define="
 				new_systems request/SESSION/add_systems | nothing;
@@ -774,9 +765,6 @@
 		<input name="pagetype" type="hidden"
 			tal:attributes="value request/form/pagetype | request/pagetype | nothing" />
 
-		<input name="absoluteURL" type="hidden"
-			tal:attributes="value python:data['children'][data['curIndex']]['absolute_url']" />
-
 		<input name="pass" type="hidden"
 			tal:attributes="value add_cluster/pass | string:0" />
 
@@ -937,8 +925,6 @@
 		<input name="pagetype" type="hidden"
 			tal:attributes="value request/form/pagetype | request/pagetype | nothing" />
 
-		<input name="absoluteURL" type="hidden"
-			tal:attributes="value python:data['children'][data['curIndex']]['absolute_url']" />
 		<h2 class="homebase">Add an Existing Cluster</h2>
 
 		<p class="hbText">Enter one node from the cluster you wish to add to the Luci management interface.</p>
--- conga/luci/homebase/portlet_homebase	2006/11/01 23:04:17	1.8
+++ conga/luci/homebase/portlet_homebase	2007/06/07 06:41:05	1.8.4.1
@@ -13,21 +13,21 @@
 	<dd class="portletItemSingle">
 	<ul class="portletCluConfigTree cluConfigTreeLevel0">
 		<tal:portal repeat="c python:data.get('children',[])">
-			<li tal:condition="not: c/currentItem" class="cluConfigTreeItem visualNoMarker">
-			<div tal:condition="not: c/currentItem">
-				<a class="visualIconPadding"
-					tal:attributes="href c/absolute_url;
-						title c/Description |nothing"
-					tal:content="c/Title|nothing">Title</a>
-			</div>
+			<li tal:condition="not:exists:c/currentItem" class="cluConfigTreeItem visualNoMarker">
+				<div>
+					<a class="visualIconPadding"
+						tal:attributes="href c/absolute_url | nothing;
+							title c/Description | nothing"
+						tal:content="c/Title | nothing">Title</a>
+				</div>
 			</li>
 
-			<li tal:condition="c/currentItem" class="cluConfigTreeCurrentItem visualNoMarker">
-				<div tal:condition="c/currentItem">
+			<li tal:condition="exists:c/currentItem" class="cluConfigTreeCurrentItem visualNoMarker">
+				<div>
 					<a class="visualIconPadding"
-						tal:attributes="href c/absolute_url;
-							title c/Description |nothing"
-						tal:content="c/Title|nothing">Title</a>
+						tal:attributes="href c/absolute_url | nothing;
+							title c/Description | nothing"
+						tal:content="c/Title | nothing">Title</a>
 				</div>
 			</li>
 		</tal:portal>
--- conga/luci/site/luci/Extensions/Attic/LuciClusterInfo.py	2007/05/30 22:06:24	1.1.2.7
+++ conga/luci/site/luci/Extensions/Attic/LuciClusterInfo.py	2007/06/07 06:41:05	1.1.2.8
@@ -13,8 +13,9 @@
 from FenceHandler import FENCE_OPTS
 from LuciSyslog import get_logger
 from LuciDB import resolve_nodename
+from LuciZope import GetReqVars
 
-from conga_constants import CLUNAME, CLUSTER_CONFIG, CLUSTER_DELETE, \
+from conga_constants import CLUSTER_CONFIG, CLUSTER_DELETE, \
 	CLUSTER_PROCESS, CLUSTER_RESTART, CLUSTER_START, CLUSTER_STOP, \
 	FDOM, FDOM_CONFIG, FENCEDEV, NODE, NODE_ACTIVE, \
 	NODE_ACTIVE_STR, NODE_DELETE, NODE_FENCE, NODE_INACTIVE, \
@@ -23,7 +24,8 @@
 	PROP_FENCE_TAB, PROP_GENERAL_TAB, PROP_GULM_TAB, PROP_MCAST_TAB, \
 	PROP_QDISK_TAB, RESOURCE, RESOURCE_CONFIG, RESOURCE_REMOVE, \
 	SERVICE, SERVICE_DELETE, SERVICE_MIGRATE, SERVICE_RESTART, \
-	SERVICE_START, SERVICE_STOP, VM_CONFIG, LUCI_DEBUG_MODE
+	SERVICE_START, SERVICE_STOP, VM_CONFIG, LUCI_DEBUG_MODE, \
+	LUCI_CLUSTER_BASE_URL
 
 luci_log = get_logger()
 
@@ -207,27 +209,15 @@
 def getServicesInfo(self, status, model, req):
 	svc_map = {}
 	maplist = list()
+	fvars = GetReqVars(req, [ 'clustername', 'URL' ])
 
-	try:
-		baseurl = req['URL']
-		if not baseurl:
-			raise KeyError, 'is blank'
-	except:
-		baseurl = '/luci/cluster/index_html'
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
 
-	try:
-		nodes = model.getNodes()
-		cluname = req['clustername']
-		if not cluname:
-			raise KeyError, 'is blank'
-	except:
-		try:
-			cluname = req.form['clustername']
-			if not cluname:
-				raise KeyError, 'is blank'
-		except:
-			cluname = '[error retrieving cluster name]'
+	cluname = fvars['clustername']
+	if cluname is None:
+		cluname = model.getClusterName()
 
+	nodes = model.getNodes()
 	for item in status:
 		if item['type'] == 'service':
 			itemmap = {}
@@ -329,44 +319,38 @@
 
 
 def getServiceInfo(self, status, model, req):
-	#set up struct for service config page
-	hmap = {}
 	root_uuid = 'toplevel'
 
-	try:
-		baseurl = req['URL']
-		if not baseurl:
-			raise KeyError, 'is blank'
-	except:
-		baseurl = '/luci/cluster/index_html'
+	fvars = GetReqVars(req, [ 'clustername', 'servicename', 'URL' ])
+
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
+	if not model:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('getServiceInfo0: no model: %r' % model)
+		return {}
+
+	#set up struct for service config page
+	hmap = {}
 
 	try:
+		cluname = fvars['clustername'] or model.getClusterName()
 		hmap['fdoms'] = get_fdom_names(model)
-	except:
+	except Exception, e:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('getServiceInfo1: %r %s' % (e, str(e)))
 		hmap['fdoms'] = list()
 
-	try:
-		cluname = req['clustername']
-		if not cluname:
-			raise KeyError, 'is blank'
-	except KeyError, e:
-		try:
-			cluname = req.form['clustername']
-			if not cluname:
-				raise
-		except:
-			cluname = '[error retrieving cluster name]'
-
 	hmap['root_uuid'] = root_uuid
 	# uuids for the service page needed when new resources are created
 	hmap['uuid_list'] = map(lambda x: make_uuid('resource'), xrange(30))
 
-	try:
-		servicename = req['servicename']
-	except KeyError, e:
-		hmap['resource_list'] = {}
+	servicename = fvars['servicename']
+	if servicename is None:
 		return hmap
 
+	if len(status) > 0:
+		nodenames = model.getNodeNames()
+
 	for item in status:
 		innermap = {}
 		if item['type'] == 'service':
@@ -384,19 +368,17 @@
 					innermap['restarturl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_RESTART)
 					innermap['delurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_DELETE)
 
-					#In this case, determine where it can run...
-					nodes = model.getNodes()
-					for node in nodes:
-						if node.getName() != nodename:
+					# In this case, determine where it can run...
+					for node in nodenames:
+						if node != nodename:
 							starturl = {}
-							cur_nodename = node.getName()
-							starturl['nodename'] = cur_nodename
-							starturl['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_START, cur_nodename)
+							starturl['nodename'] = node
+							starturl['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_START, node)
 							starturls.append(starturl)
 
 							if item.has_key('is_vm') and item['is_vm'] is True:
-								migrate_url = { 'nodename': cur_nodename }
-								migrate_url['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_MIGRATE, cur_nodename)
+								migrate_url = { 'nodename': node }
+								migrate_url['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_MIGRATE, node)
 								migrate_url['migrate'] = True
 								starturls.append(migrate_url)
 					innermap['links'] = starturls
@@ -406,26 +388,25 @@
 					innermap['enableurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_START)
 					innermap['delurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_DELETE)
 
-					nodes = model.getNodes()
 					starturls = list()
-					for node in nodes:
+					for node in nodenames:
 						starturl = {}
-						cur_nodename = node.getName()
 
-						starturl['nodename'] = cur_nodename
-						starturl['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_START, cur_nodename)
+						starturl['nodename'] = node
+						starturl['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_START, node)
 						starturls.append(starturl)
 
 						if item.has_key('is_vm') and item['is_vm'] is True:
-							migrate_url = { 'nodename': cur_nodename }
-							migrate_url['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_MIGRATE, cur_nodename)
+							migrate_url = { 'nodename': node }
+							migrate_url['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_MIGRATE, node)
 							migrate_url['migrate'] = True
 							starturls.append(migrate_url)
 					innermap['links'] = starturls
 				hmap['innermap'] = innermap
 
-	#Now build hashes for resources under service.
-	#first get service by name from model
+	# Now build hashes for resources under service.
+	# first get service by name from model
+
 	svc = model.getService(servicename)
 	try:
 		hmap['domain'] = svc.getAttribute('domain')
@@ -461,13 +442,12 @@
 
 	try:
 		fdom = model.getFailoverDomainByName(request['fdomname'])
+		fhash['name'] = fdom.getName()
 	except Exception, e:
 		if LUCI_DEBUG_MODE is True:
 			luci_log.debug_verbose('getFdomInfo0: %r %s' % (e, str(e)))
 		return fhash
 
-	fhash['name'] = fdom.getName()
-
 	ordered_attr = fdom.getAttribute('ordered')
 	if ordered_attr is not None and (ordered_attr == 'true' or ordered_attr == '1'):
 		fhash['prioritized'] = '1'
@@ -490,22 +470,23 @@
 	return fhash
 
 def getFdomsInfo(self, model, request, clustatus):
+	fvars = GetReqVars(request, [ 'clustername', 'URL' ])
+
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
+	clustername = fvars['clustername']
+	if clustername is None:
+		return {}
+
 	slist = list()
 	nlist = list()
-	fdomlist = list()
-
 	for item in clustatus:
 		if item['type'] == 'node':
 			nlist.append(item)
 		elif item['type'] == 'service':
 			slist.append(item)
 
-	clustername = request['clustername']
-	baseurl = request['URL']
-	fdoms = model.getFailoverDomains()
-	svcs = model.getServices()
-
-	for fdom in fdoms:
+	fdomlist = list()
+	for fdom in model.getFailoverDomains():
 		fdom_map = {}
 		fdom_name = fdom.getName()
 		fdom_map['name'] = fdom_name
@@ -524,9 +505,8 @@
 		else:
 			fdom_map['restricted'] = False
 
-		nodes = fdom.getChildren()
 		nodelist = list()
-		for node in nodes:
+		for node in fdom.getChildren():
 			nodesmap = {}
 			ndname = node.getName()
 
@@ -548,7 +528,7 @@
 		fdom_map['nodeslist'] = nodelist
 
 		svclist = list()
-		for svc in svcs:
+		for svc in model.getServices():
 			svcname = svc.getName()
 			for sitem in slist:
 				if sitem['name'] == svcname:
@@ -557,8 +537,7 @@
 						svcmap = {}
 						svcmap['name'] = svcname
 						svcmap['status'] = sitem['running']
-						svcmap['svcurl'] = '%s?pagetype=%s&clustername=%s&servicename=%s' \
-							% (baseurl, SERVICE, clustername, svcname)
+						svcmap['svcurl'] = '%s?pagetype=%s&clustername=%s&servicename=%s' % (baseurl, SERVICE, clustername, svcname)
 						svcmap['location'] = sitem['nodename']
 						svclist.append(svcmap)
 		fdom_map['svclist'] = svclist
@@ -567,21 +546,17 @@
 	return fdomlist
 
 def getClusterInfo(self, model, req):
-	try:
-		cluname = req[CLUNAME]
-	except:
-		try:
-			cluname = req.form['clustername']
-		except:
-			try:
-				cluname = req.form['clustername']
-			except:
-				if LUCI_DEBUG_MODE is True:
-					luci_log.debug_verbose('GCI0: unable to determine cluster name')
-				return {}
+	fvars = GetReqVars(req, [ 'clustername', 'URL' ])
+
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
+	cluname = fvars['clustername']
+	if cluname is None:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('GCI0: unable to determine cluster name')
+		return {}
 
 	clumap = {}
-	if model is None:
+	if not model:
 		try:
 			model = getModelForCluster(self, cluname)
 			if not model:
@@ -597,20 +572,22 @@
 			clumap['totem'] = totem.getAttributes()
 
 	prop_baseurl = '%s?pagetype=%s&clustername=%s&' \
-	% (req['URL'], CLUSTER_CONFIG, cluname)
+		% (baseurl, CLUSTER_CONFIG, cluname)
 	basecluster_url = '%stab=%s' % (prop_baseurl, PROP_GENERAL_TAB)
-	#needed:
+	# needed:
 	clumap['basecluster_url'] = basecluster_url
-	#name field
+	# name field
 	clumap['clustername'] = model.getClusterAlias()
-	#config version
+	# config version
 	cp = model.getClusterPtr()
 	clumap['config_version'] = cp.getConfigVersion()
+
+	# xvmd info
+	clumap['fence_xvmd'] = model.hasFenceXVM()
+
 	#-------------
 	#new cluster params - if rhel5
 	#-------------
-
-	clumap['fence_xvmd'] = model.hasFenceXVM()
 	gulm_ptr = model.getGULMPtr()
 	if not gulm_ptr:
 		#Fence Daemon Props
@@ -643,7 +620,8 @@
 		clumap['gulm'] = False
 	else:
 		#-------------
-		#GULM params (rhel4 only)
+		# GULM params (RHEL4 only)
+		#-------------
 		lockserv_list = list()
 		clunodes = model.getNodes()
 		gulm_lockservs = map(lambda x: x.getName(), gulm_ptr.getChildren())
@@ -657,7 +635,8 @@
 		clumap['gulm_lockservers'] = lockserv_list
 
 	#-------------
-	#quorum disk params
+	# quorum disk params
+	#-------------
 	quorumd_url = '%stab=%s' % (prop_baseurl, PROP_QDISK_TAB)
 	clumap['quorumd_url'] = quorumd_url
 	is_quorumd = model.isQuorumd()
@@ -669,9 +648,8 @@
 	clumap['device'] = ''
 	clumap['label'] = ''
 
-	#list struct for heuristics...
+	# list struct for heuristics...
 	hlist = list()
-
 	if is_quorumd:
 		qdp = model.getQuorumdPtr()
 		interval = qdp.getAttribute('interval')
@@ -699,7 +677,6 @@
 			clumap['label'] = label
 
 		heuristic_kids = qdp.getChildren()
-
 		for kid in heuristic_kids:
 			hmap = {}
 			hprog = kid.getAttribute('program')
@@ -760,25 +737,25 @@
 	clu_map['minquorum'] = clu['minQuorum']
 
 	clu_map['clucfg'] = '%s?pagetype=%s&clustername=%s' \
-	% (baseurl, CLUSTER_CONFIG, clustername)
+		% (baseurl, CLUSTER_CONFIG, clustername)
 
 	clu_map['restart_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
-	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_RESTART)
+		% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_RESTART)
 	clu_map['stop_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
-	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_STOP)
+		% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_STOP)
 	clu_map['start_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
-	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_START)
+		% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_START)
 	clu_map['delete_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
-	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_DELETE)
+		% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_DELETE)
 
 	svc_dict_list = list()
 	for svc in svclist:
 		svc_dict = {}
-		svc_dict['nodename'] = svc['nodename']
 		svcname = svc['name']
 		svc_dict['name'] = svcname
-		svc_dict['srunning'] = svc['running']
 		svc_dict['servicename'] = svcname
+		svc_dict['nodename'] = svc['nodename']
+		svc_dict['srunning'] = svc['running']
 
 		if svc.has_key('is_vm') and svc['is_vm'] is True:
 			target_page = VM_CONFIG
@@ -798,7 +775,7 @@
 		name = item['name']
 		nmap['nodename'] = name
 		cfgurl = '%s?pagetype=%s&clustername=%s&nodename=%s' \
-		% (baseurl, NODE, clustername, name)
+			% (baseurl, NODE, clustername, name)
 		nmap['configurl'] = cfgurl
 		if item['clustered'] == 'true':
 			nmap['status'] = NODE_ACTIVE
@@ -815,6 +792,7 @@
 	infohash = {}
 	item = None
 	baseurl = request['URL']
+
 	nodestate = NODE_ACTIVE
 	svclist = list()
 	for thing in status:
@@ -899,13 +877,14 @@
 			infohash['gulm_lockserver'] = model.isNodeLockserver(nodename)
 		except:
 			infohash['gulm_lockserver'] = False
+
 		# next is faildoms
 		fdoms = model.getFailoverDomainsForNode(nodename)
 		for fdom in fdoms:
 			fdom_dict = {}
 			fdom_dict['name'] = fdom.getName()
 			fdomurl = '%s?pagetype=%s&clustername=%s&fdomname=%s' \
-		% (baseurl, FDOM_CONFIG, clustername, fdom.getName())
+				% (baseurl, FDOM_CONFIG, clustername, fdom.getName())
 			fdom_dict['fdomurl'] = fdomurl
 			fdom_dict_list.append(fdom_dict)
 	else:
@@ -917,7 +896,6 @@
 	infohash['d_states'] = None
 
 	nodename_resolved = resolve_nodename(self, clustername, nodename)
-
 	if nodestate == NODE_ACTIVE or nodestate == NODE_INACTIVE:
 	# call service module on node and find out which daemons are running
 		try:
@@ -985,6 +963,7 @@
 		nl_map = {}
 		name = item['name']
 		nl_map['nodename'] = name
+
 		try:
 			nl_map['gulm_lockserver'] = model.isNodeLockserver(name)
 		except:
@@ -1012,9 +991,9 @@
 		nodename_resolved = resolve_nodename(self, clustername, name)
 
 		nl_map['logurl'] = '/luci/logs?nodename=%s&clustername=%s' \
-		% (nodename_resolved, clustername)
+			% (nodename_resolved, clustername)
 
-		#set up URLs for dropdown menu...
+		# set up URLs for dropdown menu...
 		if nl_map['status'] == NODE_ACTIVE:
 			nl_map['jl_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
 				% (baseurl, NODE_PROCESS, NODE_LEAVE_CLUSTER, name, clustername)
@@ -1037,7 +1016,7 @@
 			nl_map['fence_it_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
 				% (baseurl, NODE_PROCESS, NODE_FENCE, name, clustername)
 
-		#figure out current services running on this node
+		# figure out current services running on this node
 		svc_dict_list = list()
 		for svc in svclist:
 			if svc['nodename'] == name:
@@ -1050,19 +1029,20 @@
 				svc_dict_list.append(svc_dict)
 
 		nl_map['currentservices'] = svc_dict_list
-		#next is faildoms
 
+		# next is faildoms
 		if model:
 			fdoms = model.getFailoverDomainsForNode(name)
 		else:
 			nl_map['ricci_error'] = True
 			fdoms = list()
+
 		fdom_dict_list = list()
 		for fdom in fdoms:
 			fdom_dict = {}
 			fdom_dict['name'] = fdom.getName()
 			fdomurl = '%s?pagetype=%s&clustername=%s&fdomname=%s' \
-		% (baseurl, FDOM_CONFIG, clustername, fdom.getName())
+				% (baseurl, FDOM_CONFIG, clustername, fdom.getName())
 			fdom_dict['fdomurl'] = fdomurl
 			fdom_dict_list.append(fdom_dict)
 
@@ -1079,6 +1059,7 @@
 
 	fence_map = {}
 	fencename = request['fencename']
+
 	fencedevs = model.getFenceDevices()
 	for fencedev in fencedevs:
 		if fencedev.getName().strip() == fencename:
@@ -1090,14 +1071,15 @@
 				fence_map['pretty_name'] = fencedev.getAgentType()
 
 			nodes_used = list()
-			nodes = model.getNodes()
-			for node in nodes:
+			for node in model.getNodes():
 				flevels = node.getFenceLevels()
-				for flevel in flevels: #These are the method blocks...
+				for flevel in flevels:
+					# These are the method blocks...
 					kids = flevel.getChildren()
-					for kid in kids: #These are actual devices in each level
+					for kid in kids:
+						# These are actual devices in each level
 						if kid.getName().strip() == fencedev.getName().strip():
-							#See if this fd already has an entry for this node
+							# See if this fd already has an entry for this node
 							found_duplicate = False
 							for item in nodes_used:
 								if item['nodename'] == node.getName().strip():
@@ -1122,7 +1104,6 @@
 	for fd in fds:
 		if fd.getName().strip() == name:
 			return fd
-
 	raise
 
 def getFenceInfo(self, model, request):
@@ -1194,19 +1175,25 @@
 	if len_levels >= 1:
 		first_level = levels[0]
 		kids = first_level.getChildren()
-		last_kid_fd = None	#This is a marker for allowing multi instances
-												#beneath a fencedev
+
+		# This is a marker for allowing multi instances
+		# beneath a fencedev
+		last_kid_fd = None
+
 		for kid in kids:
 			instance_name = kid.getName().strip()
 			try:
 				fd = getFDForInstance(fds, instance_name)
 			except:
-				fd = None #Set to None in case last time thru loop
+				# Set to None in case last time thru loop
+				fd = None
 				continue
 
 			if fd is not None:
-				if fd.isShared() is False:	#Not a shared dev...build struct and add
+				if fd.isShared() is False:
+					# Not a shared dev... build struct and add
 					fencedev = {}
+
 					try:
 						fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
 					except:
@@ -1223,30 +1210,37 @@
 					kees = kidattrs.keys()
 					for kee in kees:
 						if kee == 'name':
-							continue #Don't duplicate name attr
+							# Don't duplicate name attr
+							continue
 						fencedev[kee] = kidattrs[kee]
-					#This fencedev struct is complete, and needs to be placed on the
-					#level1 Q. Because it is non-shared, we should set last_kid_fd
-					#to none.
+
+					# This fencedev struct is complete, and needs
+					# to be placed on the level1 Q. Because it is
+					# non-shared, we should set last_kid_fd to none.
 					last_kid_fd = None
 					level1.append(fencedev)
-				else:	#This dev is shared
-					if (last_kid_fd is not None) and (fd.getName().strip() == last_kid_fd['name'].strip()):	#just append a new instance struct to last_kid_fd
+				else:
+					# This dev is shared
+					if (last_kid_fd is not None) and (fd.getName().strip() == last_kid_fd['name'].strip()):
+						# just append a new instance struct to last_kid_fd
 						instance_struct = {}
 						instance_struct['id'] = str(minor_num)
 						minor_num = minor_num + 1
 						kidattrs = kid.getAttributes()
 						kees = kidattrs.keys()
+
 						for kee in kees:
-							if kee == 'name':
-								continue
-							instance_struct[kee] = kidattrs[kee]
-						#Now just add this struct to last_kid_fd and reset last_kid_fd
+							if kee != 'name':
+								instance_struct[kee] = kidattrs[kee]
+
+						# Now just add this struct to last_kid_fd
+						# and reset last_kid_fd
 						ilist = last_kid_fd['instance_list']
 						ilist.append(instance_struct)
-						#last_kid_fd = fd
 						continue
-					else: #Shared, but not used above...so we need a new fencedev struct
+					else:
+						# Shared, but not used above...so we need
+						# a new fencedev struct
 						fencedev = {}
 						try:
 							fencedev['prettyname'] = FENCE_OPTS[fd.getAgentType()]
@@ -1268,16 +1262,17 @@
 						kidattrs = kid.getAttributes()
 						kees = kidattrs.keys()
 						for kee in kees:
-							if kee == 'name':
-								continue
-							instance_struct[kee] = kidattrs[kee]
+							if kee != 'name':
+								instance_struct[kee] = kidattrs[kee]
+
 						inlist.append(instance_struct)
 						level1.append(fencedev)
 						last_kid_fd = fencedev
 						continue
 		fence_map['level1'] = level1
 
-		#level1 list is complete now, but it is still necessary to build shared1
+		# level1 list is complete now, but it is still
+		# necessary to build shared1
 		for fd in fds:
 			isUnique = True
 			if fd.isShared() is False:
@@ -1299,7 +1294,7 @@
 				shared1.append(shared_struct)
 		fence_map['shared1'] = shared1
 
-	#YUK: This next section violates the DRY rule, :-(
+	# YUK: This next section violates the DRY rule, :-(
 	if len_levels >= 2:
 		second_level = levels[1]
 		kids = second_level.getChildren()
@@ -1475,33 +1470,28 @@
 
 def getVMInfo(self, model, request):
 	vm_map = {}
+	fvars = GetReqVars(request, [ 'clustername', 'servicename', 'URL' ])
 
-	try:
-		clustername = request['clustername']
-	except Exception, e:
-		try:
-			clustername = model.getName()
-		except:
-			return vm_map
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
 
-	svcname = None
-	try:
-		svcname = request['servicename']
-	except Exception, e:
-		try:
-			vmname = request.form['servicename']
-		except Exception, e:
-			return vm_map
+	clustername = fvars['clustername']
+	if clustername is None:
+		clustername = model.getName()
+
+	svcname = fvars['servicename']
+	if svcname is None:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('getVMInfo0: no service name')
+		return vm_map
 
 	vm_map['formurl'] = '%s?clustername=%s&pagetype=29&servicename=%s' \
-		% (request['URL'], clustername, svcname)
+		% (baseurl, clustername, svcname)
 
 	try:
-		vm = model.retrieveVMsByName(vmname)
-	except:
+		vm = model.retrieveVMsByName(svcname)
+	except Exception, e:
 		if LUCI_DEBUG_MODE is True:
-			luci_log.debug('An error occurred while attempting to get VM %s' \
-				% vmname)
+			luci_log.debug('getVMInfo1: %s: %r %s' % (svcname, e, str(e)))
 		return vm_map
 
 	attrs = vm.getAttributes()
@@ -1511,19 +1501,16 @@
 
 	return vm_map
 
-def getResourcesInfo(model, request):
+def getResourcesInfo(self, model, request):
 	resList = list()
-	baseurl = request['URL']
+	fvars = GetReqVars(request, [ 'clustername', 'URL' ])
+
+	baseurl = fvars['URL'] or LUCI_CLUSTER_BASE_URL
+	if fvars['clustername'] is None:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('getResourcesInfo missing cluster name')
+		return resList
 
-	try:
-		cluname = request['clustername']
-	except:
-		try:
-			cluname = request.form['clustername']
-		except:
-			if LUCI_DEBUG_MODE is True:
-				luci_log.debug_verbose('getResourcesInfo missing cluster name')
-			return resList
 	#CALL LUCICLUSTERINFO
 	return resList
 
--- conga/luci/site/luci/Extensions/Attic/LuciDB.py	2007/05/30 22:06:24	1.1.2.14
+++ conga/luci/site/luci/Extensions/Attic/LuciDB.py	2007/06/07 06:41:05	1.1.2.15
@@ -669,7 +669,7 @@
 				% (clustername, e, str(e)))
 		return None
 
-	if isAdmin(self) or cluster_permission_check(self, cluster_obj):
+	if cluster_permission_check(self, cluster_obj):
 		return cluster_obj
 	return None
 
@@ -713,10 +713,19 @@
 	return allowed_systems(storage)
 
 def check_clusters(self, clusters):
-	user = getSecurityManager().getUser()
-	return filter(lambda x: user.has_permission('View', x[1]), clusters)
+	ret = []
+	try:
+		user = getSecurityManager().getUser()
+		ret = filter(lambda x: user.has_permission('View', x[1]), clusters)
+	except Exception, e:
+		if LUCI_DEBUG_MODE is True:
+			luci_log.debug_verbose('CC0: %r %s' % (e, str(e)))
+	return ret
+
+def cluster_permission_check(self, cluster):
+	if isAdmin(self):
+		return True
 
-def cluster_permission_check(cluster):
 	try:
 		user = getSecurityManager().getUser()
 		if user.has_permission('View', cluster[1]):
@@ -736,8 +745,11 @@
 
 def getRicciAgent(self, clustername, exclude_names=None, exclude_busy=False):
 	try:
-		perm = cluster_permission_check(clustername)
+		perm = cluster_permission_check(self, clustername)
 		if not perm:
+			if LUCI_DEBUG_MODE is True:
+				luci_log.debug_verbose('GRA0: no permission for %s' \
+					% clustername)
 			return None
 	except Exception, e:
 		if LUCI_DEBUG_MODE is True:
--- conga/luci/site/luci/Extensions/Attic/LuciZope.py	2007/05/30 22:06:24	1.1.2.8
+++ conga/luci/site/luci/Extensions/Attic/LuciZope.py	2007/06/07 06:41:05	1.1.2.9
@@ -123,7 +123,6 @@
 		if LUCI_DEBUG_MODE is True:
 			luci_log.debug_verbose('Appending model to request failed: %r %s' \
 				% (e, str(e)))
-		return 'An error occurred while storing the cluster model'
 
 def GetReqVars(req, varlist):
 	ret = {}
--- conga/luci/site/luci/Extensions/Attic/RicciQueries.py	2007/05/30 05:54:02	1.1.2.6
+++ conga/luci/site/luci/Extensions/Attic/RicciQueries.py	2007/06/07 06:41:05	1.1.2.7
@@ -743,12 +743,8 @@
 	if not ret:
 		return None
 
-	cur = ret
-	while len(cur.childNodes) > 0:
-		for i in cur.childNodes:
-			if i.nodeType == xml.dom.Node.ELEMENT_NODE:
-				if i.nodeName == 'var' and i.getAttribute('name') == 'cluster.conf':
-					return i.childNodes[1].cloneNode(True)
-				else:
-					cur = i
+	var_nodes = ret.getElementsByTagName('var')
+	for i in var_nodes:
+		if i.getAttribute('name') == 'cluster.conf':
+			return i.childNodes[0]
 	return None
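The RicciQueries.py change above drops a manual (and buggy: `cur` never descended past one branch) child-node walk in favor of `getElementsByTagName('var')`, which searches all descendants at once. A runnable sketch of the new lookup against a toy document (the XML here is invented for illustration, not an actual ricci response):

```python
from xml.dom import minidom

# Toy stand-in for a ricci batch response containing a cluster.conf var.
doc = minidom.parseString(
    '<response><seq><var name="other"/>'
    '<var name="cluster.conf"><cluster name="c1"/></var></seq></response>'
)

conf_node = None
for var in doc.getElementsByTagName('var'):
    # getElementsByTagName() already recurses, so no manual walk
    # over childNodes is needed.
    if var.getAttribute('name') == 'cluster.conf':
        conf_node = var.childNodes[0]
        break
```

Note that `childNodes` includes whitespace text nodes when the source XML is pretty-printed; the index-based access works here only because the toy document has none.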
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/05/30 22:06:24	1.255.2.12
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/06/07 06:41:05	1.255.2.13
@@ -93,16 +93,18 @@
 
 	same_node_passwds = False
 	try:
-		same_node_passwds = 'allSameCheckBox' in request.form
+		same_node_passwds = request.form.has_key('allSameCheckBox')
 	except:
 		same_node_passwds = False
 
-	add_cluster = { 'name': clustername,
-					'shared_storage': shared_storage,
-					'download_pkgs': download_pkgs,
-					'cluster_os': cluster_os,
-					'identical_passwds': same_node_passwds,
-					'check_certs': check_certs }
+	add_cluster = {
+		'name': clustername,
+		'shared_storage': shared_storage,
+		'download_pkgs': download_pkgs,
+		'cluster_os': cluster_os,
+		'identical_passwds': same_node_passwds,
+		'check_certs': check_certs
+	}
 
 	system_list, incomplete, errors, messages = parseHostForm(request, check_certs)
 	add_cluster['nodes'] = system_list
@@ -288,7 +290,8 @@
 		errors.append('Unable to generate cluster creation ricci command')
 		return (False, { 'errors': errors, 'messages': messages })
 
-	error = manageCluster(self, clustername, add_cluster['nodes'], add_cluster['cluster_os'])
+	error = manageCluster(self, clustername,
+				add_cluster['nodes'], add_cluster['cluster_os'])
 	if error:
 		errors.append('Unable to create the cluster Luci database objects')
 		request.SESSION.set('create_cluster', add_cluster)
@@ -305,6 +308,7 @@
 			errors.append(msg)
 			if LUCI_DEBUG_MODE is True:
 				luci_log.debug_verbose(msg)
+
 			if len(batch_id_map) == 0:
 				request.SESSION.set('create_cluster', add_cluster)
 				return (False, { 'errors': errors, 'messages': messages })
@@ -416,20 +420,21 @@
 	except:
 		same_node_passwds = False
 
-	add_cluster = { 'name': clustername,
-					'shared_storage': shared_storage,
-					'download_pkgs': download_pkgs,
-					'cluster_os': cluster_os,
-					'identical_passwds': same_node_passwds,
-					'check_certs': check_certs }
+	add_cluster = {
+		'name': clustername,
+		'shared_storage': shared_storage,
+		'download_pkgs': download_pkgs,
+		'cluster_os': cluster_os,
+		'identical_passwds': same_node_passwds,
+		'check_certs': check_certs
+	}
 
 	system_list, incomplete, errors, messages = parseHostForm(request, check_certs)
 	add_cluster['nodes'] = system_list
-
 	for i in system_list:
 		cur_system = system_list[i]
 
-		cur_host_trusted = 'trusted' in cur_system
+		cur_host_trusted = cur_system.has_key('trusted')
 		cur_host = cur_system['host']
 
 		try:
@@ -595,7 +600,8 @@
 						luci_log.debug_verbose('VACN12: %s: %r %s' \
 							% (cur_host, e, str(e)))
 
-				errors.append('Unable to initiate cluster join for node "%s"' % cur_host)
+				errors.append('Unable to initiate cluster join for node "%s"' \
+					% cur_host)
 				if LUCI_DEBUG_MODE is True:
 					luci_log.debug_verbose('VACN13: %s: %r %s' \
 						% (cur_host, e, str(e)))
@@ -879,7 +885,8 @@
 				return (False, {'errors': [ 'A service with the name %s already exists' % service_name ]})
 		else:
 			if LUCI_DEBUG_MODE is True:
-				luci_log.debug_verbose('vSA4a: unknown action %s' % request.form['action'])
+				luci_log.debug_verbose('vSA4a: unknown action %s' \
+					% request.form['action'])
 			return (False, {'errors': [ 'An unknown action was specified' ]})
 	except Exception, e:
 		if LUCI_DEBUG_MODE is True:
@@ -3174,12 +3181,8 @@
 		if LUCI_DEBUG_MODE is True:
 			luci_log.debug_verbose('GRI1: missing res name')
 		return {}
-
-	#cluname = fvars['clustername']
-	#baseurl = fvars['URL']
-	#CALL
-	return {}
-
+	from LuciClusterInfo import getResourceInfo as gri
+	return gri(model, name)
 
 def serviceRestart(self, rc, req):
 	from LuciClusterActions import RestartCluSvc
--- conga/luci/site/luci/Extensions/conga_constants.py	2007/06/05 05:37:01	1.39.2.7
+++ conga/luci/site/luci/Extensions/conga_constants.py	2007/06/07 06:41:05	1.39.2.8
@@ -100,6 +100,7 @@
 PLONE_ROOT = 'luci'
 CLUSTER_FOLDER_PATH = '/luci/systems/cluster/'
 STORAGE_FOLDER_PATH = '/luci/systems/storage/'
+LUCI_CLUSTER_BASE_URL = '/luci/cluster/index_html'
 
 # Node states
 NODE_ACTIVE		= '0'
--- conga/luci/site/luci/Extensions/ClusterModel/Attic/ModelBuilder.py	2007/05/30 05:54:02	1.1.2.6
+++ conga/luci/site/luci/Extensions/ClusterModel/Attic/ModelBuilder.py	2007/06/07 06:41:05	1.1.2.7
@@ -505,6 +505,9 @@
     #Find the clusternodes obj and return get_children
     return self.clusternodes_ptr.getChildren()
 
+  def getNodeNames(self):
+    return map(lambda x: x.getName(), self.clusternodes_ptr.getChildren())
+
   def addNode(self, clusternode):
     self.clusternodes_ptr.addChild(clusternode)
     if self.usesMulticast is True:
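The new `getNodeNames` above projects the clusternode objects down to their name strings with `map`/`lambda`. The same idea as a standalone sketch (the `Node` class is a hypothetical stand-in for the ClusterModel node type, which exposes `getName()`):

```python
class Node:
    """Hypothetical stand-in for a ClusterModel clusternode object."""
    def __init__(self, name):
        self._name = name

    def getName(self):
        return self._name

def get_node_names(children):
    # Same projection as ModelBuilder.getNodeNames(): one name
    # string per node object, in order.
    return [n.getName() for n in children]

names = get_node_names([Node('node1'), Node('node2')])
```

A list comprehension is the modern spelling; the patch's `map(lambda x: x.getName(), ...)` returns the same list under Python 2.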

