[Cluster-devel] [PATCH] Fix freeze of cluster-2.03.11
Steven Whitehouse
swhiteho at redhat.com
Wed Apr 22 11:59:38 UTC 2009
Hi,
On Wed, 2009-04-22 at 13:55 +0200, Kadlecsik Jozsef wrote:
> On Tue, 21 Apr 2009, Steven Whitehouse wrote:
>
> > Yes, it doesn't surprise me that you'd see lockups without your patch.
> > To be on the safe side, try moving the call to gfs_sync_page_i into
> > ->delete_inode so that you can do it after the inode lock has already
> > been dropped (and after the state has been set correctly too). It won't
> > harm anything to have that around, but it might slow things down a bit,
>
> What do you think about this patch?
>
> --- gfs-orig/ops_super.c 2009-01-22 13:33:51.000000000 +0100
> +++ gfs/ops_super.c 2009-04-22 13:51:06.000000000 +0200
> @@ -49,7 +49,7 @@
> }
>
> /**
> - * gfs_drop_inode - drop an inode
> + * gfs_delete_inode - delete an inode
> * @inode: The inode
> *
> * If i_nlink is zero, any dirty data for the inode is thrown away.
> @@ -58,19 +58,19 @@
> */
>
> static void
> -gfs_drop_inode(struct inode *inode)
> +gfs_delete_inode(struct inode *inode)
> {
> struct gfs_sbd *sdp = get_v2sdp(inode->i_sb);
> struct gfs_inode *ip = get_v2ip(inode);
>
> - atomic_inc(&sdp->sd_ops_super);
> -
> if (ip &&
> !inode->i_nlink &&
That looks much better, but you don't need to test for !inode->i_nlink,
as ->delete_inode is only called when i_nlink is already zero.
> S_ISREG(inode->i_mode) &&
> !sdp->sd_args.ar_localcaching)
> gfs_sync_page_i(inode, DIO_START | DIO_WAIT);
I still wonder whether gfs_sync_page_i is really needed here at all,
but it's probably safer to keep it for now.
> - generic_drop_inode(inode);
> +
> + truncate_inode_pages(&inode->i_data, 0);
> + clear_inode(inode);
> }
>
> /**
> @@ -443,7 +443,7 @@
>
> struct super_operations gfs_super_ops = {
> .write_inode = gfs_write_inode,
> - .drop_inode = gfs_drop_inode,
> + .delete_inode = gfs_delete_inode,
> .put_super = gfs_put_super,
> .write_super = gfs_write_super,
> .write_super_lockfs = gfs_write_super_lockfs,
>
> I'll be able to test it (or its successor) sometime next week.
>
> Best regards,
> Jozsef
Ok, if we hear some positive test results, then we can put this in.
Thanks,
Steve.