
Re: [Cluster-devel] gfs2: skip dlm_unlock calls in unmount



Hi,

On Wed, 2012-11-07 at 14:14 -0500, David Teigland wrote:
> When unmounting, gfs2 does a full dlm_unlock operation on every
> cached lock.  This can create a very large amount of work and can
> take a long time to complete.  However, the vast majority of these
> dlm unlock operations are unnecessary because after all the unlocks
> are done, gfs2 leaves the dlm lockspace, which automatically clears
> the locks of the leaving node, without unlocking each one individually.
> So, gfs2 can skip explicit dlm unlocks, and use dlm_release_lockspace to
> remove the locks implicitly.  The one exception is when the lock's lvb is
> being used.  In this case, dlm_unlock is called because it may update the
> lvb of the resource.
> 

I'm wondering just how much we are likely to gain from this... we
currently use LVBs for both quota and (more recently) rgrp locks. If
we were to eventually start using LVBs for inode and/or iopen locks as
well, that would rather reduce the benefit of this change.
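
For reference, the quota case already has a defined LVB layout, struct
gfs2_quota_lvb from gfs2_ondisk.h, which packs the current usage and
limits into the lock's value block:

struct gfs2_quota_lvb {
	__be32 qb_magic;
	__be32 __pad;
	__be64 qb_limit;	/* Hard limit of # blocks to alloc */
	__be64 qb_warn;		/* Warn user when alloc is above this # */
	__be64 qb_value;	/* Current # blocks allocated */
};

so any lock type carrying state like this still needs the dlm_unlock
in order to write back the final values.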

The other question is what a conversion to NL costs compared with the
unlock of a lock already in NL. Even with this patch we still iterate
over every lock and convert it to NL wherever it is not already in NL,
so all we are saving is the final NL -> unlocked transition.

One thought is whether it would not be better to do a direct "whatever"
-> unlocked change in the first place, rather than splitting the
operation into two parts.
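
Something along these lines is roughly what I have in mind (purely
illustrative: gdlm_unmount_unlock() and glock_uses_lvb() are made-up
names, not existing functions):

/* Sketch only: at unmount, unlock directly from whatever mode the
 * lock currently holds rather than demoting to NL first and then
 * unlocking.  dlm_unlock() does not require a prior conversion to
 * NL, so this collapses the two dlm round trips into one.
 */
static void gdlm_unmount_unlock(struct gfs2_glock *gl)
{
	struct gfs2_sbd *sdp = gl->gl_sbd;
	struct lm_lockstruct *ls = &sdp->sd_lockstruct;
	u32 flags = glock_uses_lvb(gl) ? DLM_LKF_VALBLK : 0;
	int error;

	error = dlm_unlock(ls->ls_dlm, gl->gl_lksb.sb_lkid, flags,
			   NULL, gl);
	if (error)
		printk(KERN_ERR "gdlm_unmount_unlock %x error %d\n",
		       gl->gl_lksb.sb_lkid, error);
}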

> Signed-off-by: David Teigland <teigland redhat com>
> ---
>  fs/gfs2/glock.c    |    1 +
>  fs/gfs2/incore.h   |    1 +
>  fs/gfs2/lock_dlm.c |    6 ++++++
>  3 files changed, 8 insertions(+)
> 
> diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
> index e6c2fd5..f3a5edb 100644
> --- a/fs/gfs2/glock.c
> +++ b/fs/gfs2/glock.c
> @@ -1528,6 +1528,7 @@ static void dump_glock_func(struct gfs2_glock *gl)
>  
>  void gfs2_gl_hash_clear(struct gfs2_sbd *sdp)
>  {
> +	set_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags);
>  	glock_hash_walk(clear_glock, sdp);
>  	flush_workqueue(glock_workqueue);
>  	wait_event(sdp->sd_glock_wait, atomic_read(&sdp->sd_glock_disposal) == 0);
> diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
> index 3d469d3..67a39cf 100644
> --- a/fs/gfs2/incore.h
> +++ b/fs/gfs2/incore.h
> @@ -539,6 +539,7 @@ enum {
>  	SDF_DEMOTE		= 5,
>  	SDF_NOJOURNALID		= 6,
>  	SDF_RORECOVERY		= 7, /* read only recovery */
> +	SDF_SKIP_DLM_UNLOCK	= 8,
>  };
>  
>  #define GFS2_FSNAME_LEN		256
> diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
> index 0fb6539..efd0fb6 100644
> --- a/fs/gfs2/lock_dlm.c
> +++ b/fs/gfs2/lock_dlm.c
> @@ -289,6 +289,12 @@ static void gdlm_put_lock(struct gfs2_glock *gl)
>  	gfs2_glstats_inc(gl, GFS2_LKS_DCOUNT);
>  	gfs2_sbstats_inc(gl, GFS2_LKS_DCOUNT);
>  	gfs2_update_request_times(gl);
> +
> +	if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) && !gl->gl_lvb[0]) {
There is no guarantee that the LVB is unused just because its first
byte happens to be zero. So I'd suggest adding a flag to each struct
gfs2_glock_operations to say whether the LVB is used by that type of
glock, and then testing that instead of the LVB contents, as sketched
below.
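
i.e. something like the following (GLOF_LVB is just a suggested name;
the .go_flags field already exists and carries GLOF_ASPACE):

#define GLOF_LVB	0x02	/* this glock type makes use of the LVB */

/* set on the glock types which publish LVB data, e.g.: */
const struct gfs2_glock_operations gfs2_quota_glops = {
	.go_type = LM_TYPE_QUOTA,
	.go_flags = GLOF_LVB,
};

/* and then the test in gdlm_put_lock() becomes: */
if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) &&
    !(gl->gl_ops->go_flags & GLOF_LVB)) {
	gfs2_glock_free(gl);
	return;
}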

We do assume, though, that it is only OK to write the LVB while the
lock is held in one of the higher lock modes, so that it cannot have
changed since the last lock state change (to NL in this case). So
maybe we don't even need to care about this? Or at least not unless
we merge the two state changes as suggested above.

> +		gfs2_glock_free(gl);
> +		return;
> +	}
> +
>  	error = dlm_unlock(ls->ls_dlm, gl->gl_lksb.sb_lkid, DLM_LKF_VALBLK,
>  			   NULL, gl);
>  	if (error) {

Steve.


