[Cluster-devel] [PATCH] gfs2: skip dlm_unlock calls in unmount

Steven Whitehouse swhiteho at redhat.com
Wed Nov 14 10:38:11 UTC 2012


Hi,

Now pushed to the -nmw git tree. Thanks,

Steve.

On Tue, 2012-11-13 at 10:58 -0500, David Teigland wrote:
> When unmounting, gfs2 does a full dlm_unlock operation on every
> cached lock.  This can create a very large amount of work and can
> take a long time to complete.  However, the vast majority of these
> dlm unlock operations are unnecessary because after all the unlocks
> are done, gfs2 leaves the dlm lockspace, which automatically clears
> the locks of the leaving node, without unlocking each one individually.
> So, gfs2 can skip explicit dlm unlocks, and use dlm_release_lockspace to
> remove the locks implicitly.  The one exception is when the lock's lvb is
> being used.  In this case, dlm_unlock is called because it may update the
> lvb of the resource.
> 
> Signed-off-by: David Teigland <teigland at redhat.com>
> ---
>  fs/gfs2/glock.c    |    1 +
>  fs/gfs2/incore.h   |    1 +
>  fs/gfs2/lock_dlm.c |    8 ++++++++
>  3 files changed, 10 insertions(+)
> 
> diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
> index e6c2fd5..f3a5edb 100644
> --- a/fs/gfs2/glock.c
> +++ b/fs/gfs2/glock.c
> @@ -1528,6 +1528,7 @@ static void dump_glock_func(struct gfs2_glock *gl)
>  
>  void gfs2_gl_hash_clear(struct gfs2_sbd *sdp)
>  {
> +	set_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags);
>  	glock_hash_walk(clear_glock, sdp);
>  	flush_workqueue(glock_workqueue);
>  	wait_event(sdp->sd_glock_wait, atomic_read(&sdp->sd_glock_disposal) == 0);
> diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
> index 3d469d3..67a39cf 100644
> --- a/fs/gfs2/incore.h
> +++ b/fs/gfs2/incore.h
> @@ -539,6 +539,7 @@ enum {
>  	SDF_DEMOTE		= 5,
>  	SDF_NOJOURNALID		= 6,
>  	SDF_RORECOVERY		= 7, /* read only recovery */
> +	SDF_SKIP_DLM_UNLOCK	= 8,
>  };
>  
>  #define GFS2_FSNAME_LEN		256
> diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
> index 0fb6539..f6504d3 100644
> --- a/fs/gfs2/lock_dlm.c
> +++ b/fs/gfs2/lock_dlm.c
> @@ -289,6 +289,14 @@ static void gdlm_put_lock(struct gfs2_glock *gl)
>  	gfs2_glstats_inc(gl, GFS2_LKS_DCOUNT);
>  	gfs2_sbstats_inc(gl, GFS2_LKS_DCOUNT);
>  	gfs2_update_request_times(gl);
> +
> +	/* don't want to skip dlm_unlock writing the lvb when lock is ex */
> +	if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) &&
> +	    gl->gl_state != LM_ST_EXCLUSIVE) {
> +		gfs2_glock_free(gl);
> +		return;
> +	}
> +
>  	error = dlm_unlock(ls->ls_dlm, gl->gl_lksb.sb_lkid, DLM_LKF_VALBLK,
>  			   NULL, gl);
>  	if (error) {
