[Cluster-devel] [GFS2] Put back semaphore to avoid umount problem

Steven Whitehouse swhiteho at redhat.com
Mon Jan 29 11:51:45 UTC 2007


Hi,

On Fri, 2007-01-26 at 09:37 -0600, David Teigland wrote:
> On Fri, Jan 26, 2007 at 10:36:58AM +0000, Steven Whitehouse wrote:
> > Hi,
> > 
> > Hmm. This doesn't seem to be quite the thing:
> 
> Yep, glock_put is being called from within invalidate_inodes, so we're
> recursively taking the new sem.  Here's the original comment I added
> describing the problem:
> 
> /* invalidate_inodes() requires that the sb inodes list
>    not change, but an async completion callback for an
>    unlock can occur which does glock_put() which
>    can call iput() which will change the sb inodes list.
>    invalidate_inodes_mutex prevents glock_put()'s during
>    an invalidate_inodes() */
> 
> So, we're trying to prevent async completions from mucking with the inodes
> while we're in invalidate_inodes.  Perhaps we could take the new read sem
> inside gfs2_glock_cb which still blocks async completions when we need to
> but not in a code path that's called from elsewhere.
> 

Yes, that seems reasonable for now. Patch below, which I've tested and
which appears to work fine. It also has the advantage of keeping the
read side of this lock out of the gfs2_glock_put() fast path:

diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index c070ede..6618c11 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -20,6 +20,7 @@ #include <linux/gfs2_ondisk.h>
 #include <linux/list.h>
 #include <linux/lm_interface.h>
 #include <linux/wait.h>
+#include <linux/rwsem.h>
 #include <asm/uaccess.h>
 
 #include "gfs2.h"
@@ -45,6 +46,7 @@ static int dump_glock(struct gfs2_glock 
 static int dump_inode(struct gfs2_inode *ip);
 static void gfs2_glock_xmote_th(struct gfs2_holder *gh);
 static void gfs2_glock_drop_th(struct gfs2_glock *gl);
+static DECLARE_RWSEM(gfs2_umount_flush_sem);
 
 #define GFS2_GL_HASH_SHIFT      15
 #define GFS2_GL_HASH_SIZE       (1 << GFS2_GL_HASH_SHIFT)
@@ -1578,12 +1580,15 @@ void gfs2_glock_cb(void *cb_data, unsign
 		struct lm_async_cb *async = data;
 		struct gfs2_glock *gl;
 
+		down_read(&gfs2_umount_flush_sem);
 		gl = gfs2_glock_find(sdp, &async->lc_name);
 		if (gfs2_assert_warn(sdp, gl))
-			return;
+			goto out;
 		if (!gfs2_assert_warn(sdp, gl->gl_req_bh))
 			gl->gl_req_bh(gl, async->lc_ret);
 		gfs2_glock_put(gl);
+out:
+		up_read(&gfs2_umount_flush_sem);
 		return;
 	}
 
@@ -1828,7 +1832,9 @@ void gfs2_gl_hash_clear(struct gfs2_sbd 
 			t = jiffies;
 		}
 
+		down_write(&gfs2_umount_flush_sem);
 		invalidate_inodes(sdp->sd_vfs);
+		up_write(&gfs2_umount_flush_sem);
 		msleep(10);
 	}
 }

