
[Linux-cluster] Re: Interfacing csnap to cluster stack

On Tue, 2004-10-05 at 14:03 -0400, Daniel Phillips wrote:

> The idea is, there is a service manager out there somewhere that keeps 
> track of how many instances of a service of a given type currently 
> exist, and has some way of creating new resource instances if needed, 
> or killing off extra ones.

Well, "exactly one" is what the rgmanager code is supposed to do.
"Multiple instances of one resource" is supported, but only if the
resource is defined in multiple groups.

It would not be terribly difficult to add "multiple instances of groups".

> In our case, we want exactly one instance 
> of a csnap server.  We need not only to specify that constraint somehow 
> and get a  connection to it, but we need to supply a method of starting 
> a csnap server.  So csnap-agent will be a client of service manager and 
> an agent of resource manager.

So, in the <special tag="rgmanager"> element, add:

   <attributes maxinstances="1"/>

to the OCF metadata output by your resource agent.
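
If it helps, here is roughly where that attribute would sit in the
agent's metadata output.  Everything besides the <special> block is a
made-up skeleton in the OCF RA metadata shape:

```xml
<?xml version="1.0"?>
<resource-agent name="csnap" version="0.1">
  <longdesc lang="en">Hypothetical csnap server resource agent.</longdesc>
  <shortdesc lang="en">csnap server</shortdesc>
  <parameters/>
  <actions>
    <action name="start" timeout="20"/>
    <action name="stop" timeout="20"/>
    <action name="status" timeout="10" interval="30"/>
  </actions>
  <special tag="rgmanager">
    <attributes maxinstances="1"/>
  </special>
</resource-agent>
```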

Another method to ensure you have only one resource incarnation running
as 'master' with lots of 'slaves' is to not use rgmanager at all.  All
incarnations of that server take a cluster lock; only one gets it.  When
that node/process/etc. dies, the next guy in line takes the lock and
starts operating as 'master'.
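
That lock-ordered takeover can be sketched in a few lines.  Here a
threading.Lock stands in for the cluster lock and threads stand in for
server incarnations; all of the names are made up:

```python
import threading
import time

cluster_lock = threading.Lock()   # stand-in for the real cluster-wide lock
masters = []                      # who became master, in order

def run_server(name, death):
    """One server incarnation: contend for the lock, serve, then 'die'."""
    with cluster_lock:            # blocks until this incarnation gets the lock
        masters.append(name)      # holding the lock == acting as master
        death.wait()              # keep being master until this node dies
    # leaving the with-block releases the lock; the next contender takes over

deaths = {name: threading.Event() for name in ("node1", "node2", "node3")}
threads = [threading.Thread(target=run_server, args=item)
           for item in deaths.items()]
for t in threads:
    t.start()

time.sleep(0.2)                   # let one incarnation win the lock
deaths[masters[0]].set()          # "kill" the current master...
time.sleep(0.2)                   # ...and a new one takes over
deaths[masters[1]].set()
time.sleep(0.2)
deaths[masters[2]].set()
for t in threads:
    t.join()
print(masters)                    # three distinct masters, one at a time
```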

> We won't talk to either service manager or resource manager directly, 
> but go through Lon's Magma library, which is supposed to provide a nice 
> stable api for us to work with, regardless of whether particular 
> services reside in kernel or user space, or are local or remote.  Lon 
> has said that he will adapt the Magma api as we go, if we break 
> anything or run into limitations.  

Wait, so, no resource agent?  Or is this just how you mean to have it be
a client of the Service Manager?

> (I suppose that is why it is called 
> Magma, it flows under pressure.)

Actually, it's because the cluster software was once called "bedrock".
Magma can become different kinds of bedrock, depending on the
composition and how long it takes to cool.  For instance, basalt and
granite (IIRC) have similar compositions.  Granite, however, cools
slower, so the crystals are bigger and the texture is different.

> Magma receives requests by direct library calls and supplies answers 
> either via function returns or via events delivered over a socket 
> connection, which seems to be a pretty good fit with the way csnap does 
> things.  So now, what are we going to ask it, and how is it going to 
> answer?

Those events/requests (i.e., the non-magmamsg stuff) are for the cluster
infrastructure, obviously.  Magma cannot currently generate events
(except when logging out of a service group, but that's indirect, and
not supported by GuLM).

>   2. Register to act as an agent to start a snapshot server instance

> My instinct is that we do not want 1. to be a blocking call into Magma, 
> that returns only when it has a server instance, because we may want 
> our agent to be able to service other events while it waits for its 
> server address.  So the likely interface is to call magma, saying what 
> kind of server we want, and wait for the address to arrive as an event.
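
A minimal sketch of that shape, with a thread and a socketpair standing
in for the service manager and the magma connection (the request format
and the address are made up):

```python
import socket
import selectors
import threading

def service_manager(conn):
    """Stand-in for the service manager behind magma."""
    conn.recv(64)                            # e.g. b"need:csnap-server"
    conn.sendall(b"address:10.0.0.5:9000")   # deliver the address as an event

agent_side, sm_side = socket.socketpair()
threading.Thread(target=service_manager, args=(sm_side,)).start()

agent_side.sendall(b"need:csnap-server")     # non-blocking request...

sel = selectors.DefaultSelector()            # ...agent keeps servicing events
sel.register(agent_side, selectors.EVENT_READ)
for key, _ in sel.select():
    event = key.fileobj.recv(64)             # the address arrives as an event
    print(event.decode())
```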

I suppose the simplest mechanism to add event delivery to magma is to
add calls for sending/receiving actual messages across the sockets
(through the cluster infrastructure), but the GuLM plugin won't support
it (it doesn't have the group communications that CMAN/DLM does).

> Magma doesn't actually know anything about what we're asking it, it only 
> knows how to pass on requests to somebody who does.  So we're actually 
> talking to service manager and resource manager through Magma, and 
> presumably they talk to each other as well, because service manager 
> must ask resource manager to create or kill off resource instances on 
> its behalf.

Magma doesn't know about the resource manager either at this point.  We
could add plugins for resource managers if we need to, but it may not be
necessary.

> Anyway, csnap-agent is mainly going to be talking to service manager 
> through Magma, but it also needs to tell resource manager about our 
> resource, its constraints and how to set itself up as an agent to 
> create it.  I don't have a clear picture of how this works at the 
> moment, and that is the point of this email.

Side note... Anyone think it would make sense to have magma's
msg_listen() call use the portmapper instead of accepting a port
directly?

> For example, how do we specify the service manager constraints, i.e., 
> "exactly one" in this case: before we request the instance, or as part 
> of the request, or in a configuration file somewhere?

Given that rgmanager can handle the "exactly one" constraint, you don't
need the SM.  The resource agent operations would map to these csnap
server functions:

(1) start: "Start if needed; become csnap server master"
(2) stop: "Stop being csnap server master"
(3) recover: "Recover online, but keep being master" (if supported)
(4) reload: "Reconfigure csnap master server with these new parameters"
(5) status/monitor level 0: Am I running?
    level 10: Am I master?
    level 20: Is everything internally consistent?
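
A sketch of that mapping as a dispatch table.  The exit codes are the
standard OCF ones; the class and its state handling are made up, and a
real rgmanager agent would be a shell script driving the csnap binaries:

```python
# Standard OCF resource agent exit codes
OCF_SUCCESS, OCF_ERR_GENERIC, OCF_NOT_RUNNING = 0, 1, 7

class CsnapAgent:
    """Hypothetical agent state; stands in for querying the csnap server."""
    def __init__(self):
        self.running = False
        self.master = False

    def dispatch(self, action, level=0):
        if action == "start":            # start if needed; become master
            self.running = True
            self.master = True
            return OCF_SUCCESS
        if action == "stop":             # stop being master (stay a slave)
            self.master = False
            return OCF_SUCCESS
        if action == "recover":          # recover online, keep being master
            return OCF_SUCCESS if self.master else OCF_ERR_GENERIC
        if action == "reload":           # reconfigure with new parameters
            return OCF_SUCCESS if self.running else OCF_NOT_RUNNING
        if action in ("status", "monitor"):
            if level == 0:               # am I running?
                return OCF_SUCCESS if self.running else OCF_NOT_RUNNING
            if level == 10:              # am I master?
                return OCF_SUCCESS if self.master else OCF_ERR_GENERIC
        return OCF_ERR_GENERIC
```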

The failover domain for the resource group which contains the "CSNAP
Master Resource" would be the set of nodes running other instances of
the csnap server in "slave" mode; it would be up to the administrator to
ensure that csnap-slave is started at system boot time.

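Such a domain would be spelled out in cluster.conf roughly like this
(node names made up; the failoverdomain elements are rgmanager's):

```xml
<rm>
  <failoverdomains>
    <failoverdomain name="csnap-slaves" restricted="1" ordered="0">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="1"/>
      <failoverdomainnode name="node3" priority="1"/>
    </failoverdomain>
  </failoverdomains>
  <!-- the group holding the csnap master resource would then
       reference the domain via domain="csnap-slaves" -->
</rm>
```
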
Alternatively, we can add back use of the SM and use the service group
to specify the failover domain.  Rgmanager does not support this
currently, but it would be easy to add since magma can already determine
the members of a service group.  This, however, will break on GuLM.

This has the added (admittedly theoretical) advantage of working on
other cluster resource managers implementing the OCF RA API.

-- Lon
