[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re[2]: [Linux-cluster] DLM behavior after lockspace recovery

Saturday, October 16, 2004, 4:20:07 PM, Daniel Phillips wrote:

> On Saturday 16 October 2004 02:40, David Teigland wrote:
>> On Fri, Oct 15, 2004 at 12:41:16PM -0400, Daniel Phillips wrote:
>> > I hope you see why it is in general, bad to lie about the integrity
>> > of data.
>> Still incorrect.  Check the facts and try again.

> Perhaps you'd strengthen your argument by stating the facts as you see
> them.

> Regards,

> Daniel

In your example of a counter that tracks the number of operations
in progress, regenerating the LVB value during failover from
the last known good value among the surviving nodes doesn't
do any good. There is no way to avoid recalculating the correct
value during the failover process.
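A minimal sketch of why the last known value is useless here (the
function names are invented for illustration, not the DLM API):
every node's "operation started" increment reaches the LVB, so the
last value the survivors saw still counts the dead node's unfinished
work, and only a recount on the survivors gives the true figure.

```c
/* Hypothetical illustration: the LVB holds the number of operations
 * in progress, incremented by each node as it starts one. */

/* Last LVB value any survivor saw: every start was recorded,
 * including those on the node that later died. */
int last_known_lvb(int ops_started_by_dead, int ops_started_by_survivors)
{
    return ops_started_by_dead + ops_started_by_survivors;
}

/* What a recount during recovery finds: only operations that are
 * actually still running on surviving nodes. */
int recounted_value(int ops_still_running_on_survivors)
{
    return ops_still_running_on_survivors;
}
```

With one operation in flight on the dead node and one on a survivor,
the last known LVB value is 2 while a recount finds 1, so the stale
value cannot simply be carried forward.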

OTOH, in my example where the value in the lock value block
is used as a block version number, it makes perfect sense to use
the last known value from the surviving nodes. The surviving
nodes have a copy of this value which tags the blocks in their
cache. When they acquire a lock they check the value in the
LVB against the value in memory. If they match, there's no need
to re-read the block from disk. When a node gets a lock with the
VALNOTVALID status, and it knows that the value in the LVB is the
most recent value seen by any of the surviving cluster members,
then it knows it can increment that value and the result will be
greater than the value any of the nodes have stored in memory.
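The version-number scheme above can be sketched like this (the struct
and function names are invented for illustration; they are not part of
the DLM interface):

```c
/* Hypothetical sketch: each cached block is tagged with the version
 * number that was in the LVB when the block was read from disk. */
struct cached_block {
    unsigned long version;      /* LVB value when this copy was read */
    unsigned char data[512];
};

/* On lock acquisition: if the LVB version matches the cached tag,
 * the cached copy is current and no disk read is needed. */
int must_reread(unsigned long lvb_version, const struct cached_block *c)
{
    return lvb_version != c->version;
}

/* On VALNOTVALID: if last_known is the most recent version seen by
 * any surviving node, last_known + 1 is strictly greater than every
 * cached tag, so every node's next comparison forces a re-read. */
unsigned long regenerate_version(unsigned long last_known)
{
    return last_known + 1;
}
```

The key point is that the regenerated value only has to exceed every
cached tag, which is exactly what incrementing the most recent
surviving value guarantees.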

Another example is a lock whose LVB doesn't change once it has
been initialized. In this case it doesn't matter whether the
value block is marked invalid or not. The contents are still
valid.
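For that write-once case, the reader can simply disregard the status
flag; a tiny sketch (hypothetical names, not the DLM API):

```c
/* Hypothetical sketch: the LVB was written exactly once at
 * initialization and no node ever rewrites it, so whatever copy
 * survives recovery is still the initialized value, regardless of
 * whether the lock manager flags the LVB as invalid. */
unsigned long read_constant_lvb(unsigned long lvb_value, int valnotvalid)
{
    (void)valnotvalid;   /* invalid or not, the contents never changed */
    return lvb_value;
}
```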
