
Re: [Cluster-devel] logsys in cluster3



On Mon, 30 Jun 2008, David Teigland wrote:

On Mon, Jun 30, 2008 at 06:38:48PM +0200, Fabio M. Di Nitto wrote:
On Mon, 30 Jun 2008, David Teigland wrote:

- configuration setup: big blocks of setup code are repeated and largely
the same, make this less repetitive

I will take care of this bit since I have already done it.

The API will look like:

int gimme_logging_config_data(char *name, int debug)

It returns 0 on success, 1 on failure.
char *name is the subsystem name as declared.
debug is 0 if no debug override comes from the command line or an
environment variable, 1 if debug has been forced by either.
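
A minimal usage sketch of how I expect a daemon to call it (the "groupd"
subsystem name and the DAEMON_DEBUG variable are only examples):

/* minimal usage sketch; "groupd" and DAEMON_DEBUG are example names */
#include <stdio.h>
#include <stdlib.h>

int gimme_logging_config_data(char *name, int debug);	/* proposed API */

int main(void)
{
	/* debug is forced only from the command line or environment */
	int debug_forced = (getenv("DAEMON_DEBUG") != NULL);

	if (gimme_logging_config_data("groupd", debug_forced)) {
		/* not fatal: keep the nominal logsys defaults and retry
		   after the cman/ccs connection is up */
		fprintf(stderr, "logging config not available yet\n");
	}
	return 0;
}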

I would still like us to agree on being able to configure logging as early
as possible and then retry later if it fails, though.

This problem would just disappear if we can agree on the other common
cluster connection bit. At that point we configure logging once we connect,
and before that the only thing we need to log is the connection attempts
themselves. Everybody is happy ever after ;)

ccs should notify programs
of cluster.conf change,

cman will take care of this since ccs is just a plugin now and cman already
has the API there; I see little gain in doing it again. Ok?

OK, the details are still a little hazy for starting up a program.  When a
program starts up it needs to interact with cman, ccs and logsys, and all
three of those are somewhat interdependent.


This is almost right.

- setup logsys nominally, so that the cman/ccs setup steps can do logging
 . if this fails, just go on

- connect to cman
 . if this fails, exit (hopefully the nominal logging above worked)

We want to loop here too. Maybe the init script has started cman and then the next daemon, but cman might not have set up its sockets yet.
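
Something like this is what I have in mind (a sketch, assuming libcman's
cman_init() as in cluster2; the retry count is just an example value):

#include <unistd.h>
#include <libcman.h>

#define CONNECT_RETRIES 30

static cman_handle_t connect_to_cman(void)
{
	cman_handle_t ch;
	int i;

	for (i = 0; i < CONNECT_RETRIES; i++) {
		ch = cman_init(NULL);
		if (ch)
			return ch;
		/* cman may be started but without its sockets up yet:
		   back off and try again */
		sleep(1);
	}
	return NULL;	/* caller logs the failure and exits */
}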

- wait for cman to be fully running
 . do we want everyone to put a finite loop around this?
 . if this fails, exit
 . keep the cman connection open as long as the program is running

Right.
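
The finite loop could be as simple as this (a sketch, assuming
cman_is_active() from libcman; the retry count is arbitrary):

#include <unistd.h>
#include <libcman.h>

#define ACTIVE_RETRIES 30

static int wait_for_cman(cman_handle_t ch)
{
	int i;

	for (i = 0; i < ACTIVE_RETRIES; i++) {
		if (cman_is_active(ch))
			return 0;
		sleep(1);
	}
	return -1;	/* give up, log and exit */
}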

- connect to ccs
 . could this fail even if cman is already ok above?  do we need a
   retry loop here?

Yes, it can fail if there are no resources available. But since we already have the cman connection at that point, we don't need to loop here; we just die on error (see the sketch below).

 . keep the ccs connection open as long as the program is running

Right.
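
To be explicit, the ccs step I have in mind is just this (a sketch,
assuming libccs's ccs_connect() returning a descriptor, negative on error):

#include <stdio.h>
#include <stdlib.h>
#include <ccs.h>

static int connect_to_ccs(void)
{
	int cd = ccs_connect();

	if (cd < 0) {
		fprintf(stderr, "ccs_connect failed: %d\n", cd);
		exit(EXIT_FAILURE);
	}
	/* keep cd open for the whole life of the daemon */
	return cd;
}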

- read from ccs the optional cluster.conf logging settings
 . if this fails, just go on
 . reconfigure logsys, replacing the nominal config in step 1

Right.
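
For reference, reading one of the optional settings would look roughly
like this (assuming ccs_get() with an xpath-style query; the <logging>
attribute below is only an example, the real schema may differ):

#include <stdlib.h>
#include <string.h>
#include <ccs.h>

static int read_debug_setting(int cd)
{
	char *val = NULL;
	int debug = 0;

	/* ccs_get returns 0 on success and allocates the result string */
	if (!ccs_get(cd, "/cluster/logging/@debug", &val)) {
		if (val && !strcmp(val, "on"))
			debug = 1;
		free(val);
	}
	/* a missing key is not an error: keep the nominal config */
	return debug;
}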

- as the program runs, ccs/cman notifications may arrive indicating
 that cluster.conf has changed. when one of these callbacks arrives:
 . reread the logsys config and modify logging behavior accordingly
 . reread any other dynamic cluster.conf settings

Right.

 . (I assume I poll on the ccs connection fd which tells me when there's
    a change?)

No. You can just install the callback and be done with it. The ccs fd was never a real fd to poll.
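
Roughly like this (a sketch, assuming libcman's cman_start_notification()
and cman_dispatch(); the CONFIG_UPDATE reason name is from memory, use
whatever constant cluster3 libcman defines for a cluster.conf reload):

#include <libcman.h>

static void config_cb(cman_handle_t ch, void *priv, int reason, int arg)
{
	if (reason == CMAN_REASON_CONFIG_UPDATE) {
		/* reread the logsys config and any other dynamic
		   cluster.conf settings here */
	}
}

static int setup_notifications(cman_handle_t ch)
{
	/* cman_dispatch() has to run from the daemon main loop
	   (poll on cman_get_fd(ch)) for the callback to fire */
	return cman_start_notification(ch, config_cb);
}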

Is there anything missing?

Given that this code is going to be re-implemented N times, I suggest again that we create a cluster/common/helpers with pre-built objects to just include at link time (note that we also share and duplicate a lot of header files around, and it has been in my mind for some time to create a cluster/common/includes too).

For now I can see that at least the connection sequence and the logsys config are two obvious candidates for being there.

There is also the option to make a brand new shared library called libcluhelpers, which IMHO is a bit cleaner than just linking objects.
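
Whatever form it takes, the interface could be as small as something like
this (names are only illustrative, nothing here exists yet):

#ifndef _CLUSTER_HELPERS_H_
#define _CLUSTER_HELPERS_H_

#include <libcman.h>

/* logging wrapper proposed above */
int gimme_logging_config_data(char *name, int debug);

/* shared connection sequence: retry cman, wait until it is active,
   connect to ccs once, hand both handles back to the daemon */
int cluster_connect(cman_handle_t *ch, int *ccs_desc);

#endif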

Fabio

--
I'm going to make him an offer he can't refuse.

