
[Linux-cluster] Re: Csnap server cluster membership interface

On Wednesday 20 October 2004 15:17, Benjamin Marzinski wrote:
> On each machine, there is one agent per csnap device. right?

That depends on what you mean by "device".  If you mean "virtual csnap 
device", then no: there is one agent for all the virtual csnap devices 
defined on top of the underlying snapshot store+origin devices.  If by 
"csnap device" you mean "the device that is being snapshotted", then 
yes, that's correct.

The algorithm depends on there being exactly one agent per cluster node.

> >   - Once an agent succeeds in getting the exclusive on the snapshot
> >     store lock, it sends a "new server" message to the csnap agent
> >     on every node (alternatively, to every member of the "csnap"
> >     service group, see below).
> How does the agent know the ip addresses of all client nodes if not
> through cman?

It sends this message through cman.

> Even through cman, is there an easy way to get the ip 
> address from the cman information?

You don't have to worry about that; cman provides a message-sending 
interface.  (It uses Patrick's custom "PF_CLUSTER" socket protocol.  We 
should get Patrick to wax poetic about this gizmo.)

> Or were the agents going to use cluster sockets.

Effectively, yes: cman's messaging interface is the PF_CLUSTER cluster 
socket protocol mentioned above.

> Do all the agents wait on a specific port for these 
> external connections?

Let me see... that's the only way, as far as I can see: a well-known 
"agent" port.

> If there is only one service group for all csnap devices,

One service group for all csnap agents, which might be fewer than or 
more than the number of virtual csnap devices, since each agent 
services zero or more clients, all on the same node.  If it were 
otherwise, we'd have a messy problem addressing the agents, since 
service groups are just collections of nodes.  Maybe we should be able 
to send messages by node:service, or maybe we already can and I just 
haven't noticed how.

As far as I can see, the cluster API is defined only in 
cman-kernel/src/cnxman-socket.h, which does not provide for sending 
messages to service group members per se, only to nodes.

It's remotely possible that you could have multiple different physical 
snapshot+origin devices in the device mapper stack, each with its own 
collection of snapshot and/or origin virtual devices sitting on top of 
it.  I currently handle this by having multiple agents on the node.  
They would then belong to different service groups, and (I think) they 
can all bind to the same cluster port and all of them would receive any 
incoming broadcast message.  Then we'd have to sort out which one of 
them was really supposed to receive the message.

Or we can teach the agent to handle multiple different physical snapshot 
store devices (and hence multiple servers), which wouldn't be too hard.

It wouldn't be too horrible to restrict things to one agent per node for 
the time being, and rely on that.

> but a different csnap agent per client, what happens if a node
> doesn't use all the csnap devices? It would seem in this case that 
> according to the service group, it would need to respond, but there
> wouldn't be an agent to contact. correct? To avoid this you might
> need to have one service group per csnap device, or one agent that
> handles all csnap devices on a node.

The "one agent per node" principle addresses this, with the caveat that 
multiple physical snapshot store devices per node are possible, but 
that's a different problem from the one you're worried about.  It would 
be best to chew on that one for a while, and for now proceed on the 
assumption that there is exactly one agent per node, in which case we 
don't need a service group at all.

The need for a service group comes up if you want to support exporting 
csnap devices so that only some of the cluster members are running the 
csnap target, an arrangement that is probably going to give our gui and 
clvm guys heart attacks anyway.

A separate reason for using a service group is that we then get a nice 
little recovery interface to work with, which might come in handy, and 
in any case needs to be exercised by something besides dlm.

Yet another reason for having a service group is that it gives us a way 
of shutting down csnap services (by leaving the service group) before 
our node actually leaves the cluster.  I'm not sure there's any way of 
doing this other than making the user take care of it.


