[Cluster-devel] [PATCH 4 of 5] Bz #248176: GFS2: invalid metadata block - REVISED

Wendy Cheng wcheng at redhat.com
Thu Aug 9 18:21:31 UTC 2007


Bob Peterson wrote:
> On Thu, 2007-08-09 at 09:46 -0400, Wendy Cheng wrote:
>   
>> Set aside "after this patch, the problem goes away" thing ...
>>
>> I haven't checked the previous three patches yet, so I may not have the 
>> overall picture ... but why would adding the journal flush lock here 
>> prevent the new inode from getting re-used before its associated buffers 
>> are flushed to the log? Could you elaborate more?
>>
>>     
>>> +		down_write(&sdp->sd_log_flush_lock);
>>>  		block = rgblk_search(rgd, goal, GFS2_BLKST_UNLINKED,
>>>  				     GFS2_BLKST_UNLINKED);
>>> +		up_write(&sdp->sd_log_flush_lock);
>>>       
>
> IIRC, if we don't protect rgblk_search from finding GFS2_BLKST_UNLINKED
> blocks, a "deleted" inode may be returned to function
> gfs2_inplace_reserve_i, which will do an iput on the inode;
> that iput may reference buffers that are still being flushed to disk.
> If almost all blocks in that bitmap are allocated, I think the
> deleted block may sometimes be reused, and the buffer
> associated with the reused block may be changed before it is
> actually written out to disk.
>   

Log flushing is an asynchronous event. I still don't see how this can 
*protect* against the condition you just described (i.e., prevent the 
block from being assigned to someone else before the log flush occurs). 
Or do I understand your statement correctly (i.e., that the log flush 
must occur before the block is used by someone else)? It may *reduce* 
the possibility (if log flushing happens at the same time as this 
assignment), but I don't see how it can *stop* the condition.

You may "reduce" the (rare) possibility, but the real issue is still 
hanging there, isn't it? If that is true, then I don't agree we should 
pay the price of moving a journal flush lock into the resource group 
handling code.
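To make the timing concrete, here is a rough sketch (pseudocode only, 
not actual GFS2 source; the comments and call ordering are my reading 
of Bob's description, and the caller shown is assumed):

```c
/* Pseudocode sketch of the window, based on the patched hunk above.
 * sd_log_flush_lock is a rw_semaphore; the patch only serializes the
 * bitmap search against a log flush that is in progress.
 */
down_write(&sdp->sd_log_flush_lock);
block = rgblk_search(rgd, goal, GFS2_BLKST_UNLINKED,
                     GFS2_BLKST_UNLINKED);  /* finds a "deleted" inode */
up_write(&sdp->sd_log_flush_lock);          /* lock dropped here ...   */

/* <-- window re-opens: an asynchronous gfs2_log_flush() may or may
 *     not have run yet, and nothing now pins the block.  The iput()
 *     in the caller (gfs2_inplace_reserve_i, per Bob's description)
 *     and any subsequent reuse of the block can still race with the
 *     flush writing the old buffer contents out to disk. */
```

In other words, the critical section ends before the block is actually 
consumed, which is why holding the lock across the search alone would 
shrink the race window rather than close it.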

-- Wendy



