
Re: [Linux-cluster] pool and LVM, and changes since 2000



On Wed, Oct 06, 2004 at 10:36:41AM -0400, Ed L Cashin wrote:
>   The memexp locking module that was new at the time of the 2000 OLS
>   talk was designed to use RAM exported by fancy storage hardware for
>   coordinating locking.  A single node could stand in, though, taking
>   the place of the fancy RAM-exporting storage hardware.  Today, most
>   GFS installations use DLM instead.

Almost.  Most gfs installations today use gulm, which is a lock server
equipped with fail-over.  We have a DLM that will be available later.
(It's usable now from cvs if you want.)


>   Preslan mentions that after acquiring a lock, a node must "heartbeat
>   the drive" because the locking state is on the storage hardware.

Back when we kept trying to put the locking onto the hard drives, there
weren't any cluster managers, but you still needed to track when nodes
died.  Dlock, for example, had heartbeat timers per lock.  (Dlock came
before dmep.)  With dmep, things were done a bit differently, but it was
the same idea.
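The per-lock heartbeat idea can be sketched roughly like this: each held
lock carries its own expiry timer, the holder must keep refreshing it, and
a lock whose heartbeat has lapsed is treated as abandoned and becomes
grantable again.  This is only an illustration of the concept; the names
(LockTable, HEARTBEAT_TIMEOUT) are made up and not dlock's actual API.

```python
import time

# Timeout a holder may go without heartbeating before its lock can be
# stolen.  The value is illustrative, not what dlock used.
HEARTBEAT_TIMEOUT = 5.0

class LockTable:
    """Hypothetical sketch of per-lock heartbeat timers (dlock-style)."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        # lock name -> (holder, time of last heartbeat)
        self.locks = {}

    def acquire(self, name, holder):
        """Grant the lock if it is free or its holder's heartbeat lapsed."""
        entry = self.locks.get(name)
        now = self.clock()
        if entry is None or now - entry[1] > HEARTBEAT_TIMEOUT:
            self.locks[name] = (holder, now)
            return True
        return False

    def heartbeat(self, name, holder):
        """Refresh the per-lock timer; only the current holder may do so."""
        entry = self.locks.get(name)
        if entry is not None and entry[0] == holder:
            self.locks[name] = (holder, self.clock())
            return True
        return False
```

The point of keeping a timer per lock rather than per node is that, with
no cluster manager in the picture, the lock itself is the only place to
record that its holder is still alive.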

> How is that done these days?  Does a lock owner heartbeat the lock
> master or does cluster management take care of this issue?

A cluster manager takes care of this now.  The core portion of gulm
tracks membership of the cluster.  For the DLM, we have a cluster
manager named cman.


-- 
Michael Conrad Tadpol Tilstra
Are they gonna debug the world before release?


