Re: [Linux-cluster] Interfacing csnap to cluster stack

On Fri, Oct 08, 2004 at 12:59:55AM -0400, Daniel Phillips wrote:
> On Thursday 07 October 2004 23:56, David Teigland wrote:

> > "- I think it's possible that a client-server-based csnap system
> > could be managed by SM (directly) if made to look and operate more
> > symmetrically. This would eliminate RM from the picture."

> If you think only of csnap agents and forget for the moment about device 
> mapper targets and servers, the agents seem to match the service group 
> model quite well.  There is one per node, and each provides the service 
> "able to launch a csnap server".  The recovery framework seems useful 
> for ensuring that a server is never launched on a node that has left 
> the cluster.  How to choose a good candidate node is still an open 
> question, but starting Lon's "cute" proposal to use gdlm to both choose 
> a candidate and ensure that the server is unique will certainly get 
> something working.  In the long run, taking an EX lock on the snapshot 
> store seems like a very good thing for a server to do.  This gets the 
> resource manager off the critical (development) path.

OK, so let's construct how SM-based csnap clustering might look:

- we start out with a bunch of nodes in a cluster:  A, B, C, D

- there is nothing unique about any of these nodes per se

- the userland csnap-agent program is started (say manually) on each node

- csnap-agent on each node joins the "csnap" service group

- you now have all the csnap-agent programs forming the "csnap" service
  group which has members A, B, C, D

- each csnap-agent program knows the "csnap" group membership (it knows
  who all the others are)

- each csnap-agent program will be notified if the membership of the
  "csnap" service group is changing (i.e. if A, B, C or D fails or
  shuts down or if csnap-agent on E joins "csnap".)

- at this point, the csnap system still looks symmetric; there is nothing
  unique about any of the csnap-agent programs

- the csnap-agent programs decide that one of them (A, B, C or D) will run
  the server, but which one?

- the easiest method would be for them all to agree to pick the member
  with the lowest node id to run the server.  (An alternative would be
  for them to use DLM locks to arbitrate for the server role.)

- say A has the lowest node id, so csnap-agent on A starts a csnap server

- csnap-agent on each node can now tell the local csnap client where the
  server is at (namely A)
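The selection step above can be sketched in a few lines.  This is only an
illustration of the "lowest node id" rule, not real csnap or SM code; the
node ids and the pick_server name are assumptions, and in reality SM would
hand each csnap-agent the membership list:

```python
# Hypothetical sketch of the lowest-node-id selection rule.  Every
# csnap-agent applies the same deterministic rule to the same membership
# list, so they all agree on the server without exchanging messages.

def pick_server(members):
    """Return the node id that should run the csnap server."""
    return min(members)

members = [3, 1, 4, 2]          # node ids of A, B, C, D (made up)
server = pick_server(members)   # every agent computes the same answer: 1
```

The point of a deterministic rule is that no extra agreement protocol is
needed: the membership list SM delivers is already the same on every node.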

> Besides the server instantiation question, there is another problem that 
> needs solving: when a snapshot server fails over to a new server, the 
> new server must be sure that every client that was connected to the old 
> server has either reconnected to the new server or left the cluster.

Say that node A in the example above fails.

- csnap-agent on B, C, and D are notified that A is dead

- csnap-agent on B, C and D probably suspend the local csnap client

- B, C and D know that A was the server and that they now need to select
  another one among themselves to start a server.

- using the same method they select B which starts the csnap server

- at this point recovery is basically done:  a new server is running on
  B, ready to accept requests, and all the nodes know where the new
  server is at.  csnap-agent on all the nodes tells SM that it's done
  with recovery.

- Once SM sees that recovery is done on all the nodes (B, C and D), it
  sends csnap-agent on each node another notification to this effect.

- csnap-agent on B, C and D now allows the local csnap client to resume
  activity (telling the client now or possibly earlier where the new
  server is at)
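The failover steps above can be sketched as two callbacks, one for the
membership-change notification and one for SM's "all members done"
barrier.  All the helper names here are stubs I made up for illustration;
the real agent would be talking to SM and the local device-mapper client:

```python
# Sketch of the failover sequence, assuming hypothetical helpers.

def suspend_client():          pass   # quiesce the local csnap client
def start_server():            pass   # launch a csnap server on this node
def notify_sm_recovery_done(): pass   # tell SM this node finished recovery
def resume_client(server):     pass   # point the client at the server, resume

def on_membership_change(my_id, members, dead, server):
    suspend_client()                         # step 1: suspend local client
    members = [m for m in members if m not in dead]
    if server in dead:                       # step 2: the server died
        server = min(members)                # same rule as at startup
        if server == my_id:
            start_server()                   # step 3: I won, start serving
    notify_sm_recovery_done()                # step 4: report done to SM
    return server

def on_recovery_barrier(server):
    # SM saw recovery finish on all surviving members; safe to resume.
    resume_client(server)
```

The barrier matters: a client must not resume until every survivor has
finished recovery, which is exactly the guarantee SM's second notification
provides.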

> Csnap clients don't map directly onto nodes, so cnxman can't directly 
> track the csnap client list, however it can provide membership change 
> events that the server (or alternatively, agents) can use to maintain 
> the list of currently connected clients.  (The server doesn't need help 
> adding new clients to the list, but it needs to be told when a node has 
> left the cluster, so it can strike the clients belonging to that node 
> off the list, and disconnect them for good measure.  It could also 
> refuse connections from clients not on cluster nodes.)

Now say that node D fails (D was not running the server, only a client).

- csnap-agent on B and C are notified that D is dead

- csnap-agent on C probably doesn't care

- csnap-agent on B (where the server is running) can tell the server
  that D is dead and should not be listened to
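The server-side bookkeeping Daniel describes in the quote above (striking
a dead node's clients off the list) might look something like this.  The
data structures are my own assumptions, not the actual csnap server's:

```python
# Sketch: the server tracks which node each connected client runs on,
# and drops every client belonging to a node that left the cluster.

clients = {}   # client_id -> node_id of the node the client runs on

def node_died(dead_node):
    """Strike all clients on dead_node off the list (and, in a real
    server, disconnect them for good measure)."""
    for cid, node in list(clients.items()):
        if node == dead_node:
            del clients[cid]
```

This is the piece the server cannot do alone: it needs the cluster
membership event (via csnap-agent or cnxman) to learn that the node died.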

So this is an outline of how you could do the /clustering/ aspect of csnap
symmetrically, using SM.  You don't need a RM with this method.  Again, I
don't want to lobby for this particular SM-based approach as there may be
reasons other people have for doing it differently.

The fine print:

The outline at the very beginning was a bit contrived to get things
started.  You wouldn't have four csnap-agents in the group who suddenly
decide a new server is needed.  You start with one node in the group who
automatically starts the server being the only one, then you add others to
the group.  These csnap-agents discover as they are added where the server
is running (i.e. the member with the lowest node id).

There's another point I'm glossing over that may be worth mentioning.  The
"lowest id" rule by which csnap-agent determines the server is only partly
correct.  It's sufficient for selecting a new server, but not sufficient
for discovering a running server...

A new node with the lowest id of anyone can be added to an existing group;
suddenly the node with the lowest id is not the one running the server.
So, csnap-agent needs an alternative (or at least an enhancement for
discovery) to the lowest-node-id method.  (DLM locks are one method;
messages sent among csnap-agents are another.)
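One way to see the pitfall, and one possible fix, in miniature.  This is
a sketch of a "sticky" variant I'm assuming for illustration (DLM locks
would be another way to get the same effect): keep the current server as
long as it is still a member, and only fall back to lowest-id when it dies:

```python
# Naive rule: min(members).  If node 0 joins a group where node 1 is
# already running the server, min() wrongly points new members at 0.
# Sticky rule: an incumbent server stays the server while it is alive.

def select_server(members, current):
    if current in members:
        return current       # a running server keeps the role
    return min(members)      # otherwise elect by lowest node id
```

With this rule, a new low-id joiner discovers the incumbent instead of
electing itself, while failover still degenerates to the lowest-id rule.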

Dave Teigland  <teigland redhat com>
