[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

[Cluster-devel] [RFC] Some thoughts on future enhancements for rgmanager.

Hi all,

I'd like to share with you some thoughts on future developments for
rgmanager that I have in mind, before I start coding something that
turns out to be wrong or gets rejected for various reasons.
These are only thoughts, shaped in part by my work on other clusters,
so they may be quite silly. Feel free to say what you really think about
them ;)

What I like:

*) The two concepts of service (resource group) and resource, and their
clear separation (even if in rgmanager a first-level resource is itself
a service), which makes a service a container of N resources.

What I'd like to implement:

*) The ability to freely manage (stop/start/disable) a single resource
inside a service (with clusvcadm or another program). This would also
open the door to enhanced management (e.g. a resource that is online
only between 9am and 6pm, etc.), and to restarting only the resource
that needs to be restarted when one of them fails.

Right now it is difficult (at least for me) to implement these features
with the current representation and resource management. I don't know
how to code the stop of only one resource while keeping the implied and
forced dependencies, or the time-based example above.

How this can be done:

*) A different (and, to me, more logical) way to express the
dependencies and other constraints between the resources in a service.
Building on the work that is already going on for service dependencies,
implement something similar for the resources inside a service. This
should also be simpler, as it would only need a require=yes|[no] attribute.

[*) Propagate the resource status around the cluster, as is already
done for services.]

In this way I can also restart only the single failed resource (and its
dependents) instead of every resource in the service.
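The set of resources to restart can be computed from the explicit dependency graph: the failed resource plus everything that transitively depends on it. A sketch (not rgmanager code) using the edges from the example configuration further down (script01 requires fs01 and fs02; script02 requires fs01 and fs03):

```python
# Map each resource to the resources it requires (explicit dependencies,
# mirroring the example <dependency>/<target> configuration below).
requires = {
    "script:script01": ["fs:fs01", "fs:fs02"],
    "script:script02": ["fs:fs01", "fs:fs03"],
}

def restart_set(failed):
    """Return the failed resource plus all its transitive dependents."""
    # Invert the 'requires' map into a 'dependents' map.
    dependents = {}
    for res, deps in requires.items():
        for d in deps:
            dependents.setdefault(d, []).append(res)
    todo, result = [failed], set()
    while todo:
        r = todo.pop()
        if r not in result:
            result.add(r)
            todo.extend(dependents.get(r, []))
    return result

print(sorted(restart_set("fs:fs03")))  # ['fs:fs03', 'script:script02']
```

With fs:fs03 failing, only fs:fs03 and script:script02 are touched; fs:fs01, fs:fs02 and script:script01 keep running, matching the failure scenarios described at the end of this mail.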

This will force to these changes:

*) Only two levels (first level: services; second level: resources
inside that service).
*) Multiple-instance resources must be forced to have a unique name, so
that dependencies can be clearly defined and managed.
*) Implied dependencies in the OCF scripts are no longer used.
*) Obviously this new behavior must be explicitly activated, to
maintain backward compatibility.

Example of possible configuration and behavior:

	<service name="a">
			<dependency name="script:script01">
				<target name="fs:fs01"/>
				<target name="fs:fs02"/>
			</dependency>
			<dependency name="script:script02">
				<target name="fs:fs01"/>
				<target name="fs:fs03"/>
			</dependency>
		<ip address="X.X.X.X" .../>
		<fs name="fs01" .../>
		<fs name="fs02" .../>
		<fs name="fs03" .../>
		<script name="script01" .../>
		<script name="script02" .../>
	</service>
	<service name="b">
		...
	</service>

On a start of service:a it will start:
1) ip:X.X.X.X (to be honest, it can be started at any point in the
start sequence)
2) fs:fs01
3) fs:fs02
4) fs:fs03
5) script:script01
6) script:script02

If script:script02 fails: only it is restarted.

If fs:fs03 fails: stop script:script02, stop fs:fs03, start fs:fs03,
start script:script02

If fs:fs01 fails: stop script:script02, stop script:script01, stop
fs:fs01, start fs:fs01, start script:script01, start script:script02.

P.S. The start/stop of independent resources can be parallelized; in
the above failure scenario, script01 and script02 can be started and
stopped at the same time.
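One way to see which operations can run in parallel is to group the resources of the example configuration into dependency "levels", where everything in a level only depends on earlier levels. A sketch (hypothetical, assuming the acyclic graph from the example above):

```python
# Each resource mapped to what it requires; the ip and filesystems are
# independent, the scripts depend on their filesystems.
requires = {
    "ip:X.X.X.X": [],
    "fs:fs01": [],
    "fs:fs02": [],
    "fs:fs03": [],
    "script:script01": ["fs:fs01", "fs:fs02"],
    "script:script02": ["fs:fs01", "fs:fs03"],
}

def start_levels(requires):
    """Kahn-style leveling: resources in the same level are mutually
    independent and can be started in parallel (assumes no cycles)."""
    remaining = dict(requires)
    levels = []
    while remaining:
        # Resources whose dependencies have all been started already.
        ready = sorted(r for r, deps in remaining.items()
                       if all(d not in remaining for d in deps))
        levels.append(ready)
        for r in ready:
            del remaining[r]
    return levels

for level in start_levels(requires):
    print(level)
```

This yields two levels: first the ip and all three filesystems together, then both scripts together, which is the parallel form of the 1-6 start sequence above (stopping is the same walk in reverse).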



Simone Gotti

