[Cluster-devel] Re: [PATCH 2/2] configfs: Rework configfs_depend_item() locking and make lockdep happy

Joel Becker <Joel.Becker at oracle.com>
Wed Apr 29 18:52:31 UTC 2009


On Wed, Jan 28, 2009 at 07:18:33PM +0100, Louis Rilling wrote:
> configfs_depend_item() recursively locks the inode mutexes from the configfs
> root down to the target item, which makes lockdep unhappy. The purpose of this
> recursive locking is to ensure that the item tree can be safely parsed and
> that the target item, if found, is not about to leave.
> 
> This patch reworks configfs_depend_item() locking using configfs_dirent_lock.
> Since configfs_dirent_lock protects all changes to the configfs_dirent tree,
> and protects the tagging of items to be removed, this lock can be used instead
> of the inode mutex lock chain.
> This requires that the check for dependents be done atomically with the
> CONFIGFS_USET_DROPPING tagging.

	These patches are now in the 'lockdep' branch of the configfs
tree.  I'm planning to send them in the next merge window.  I've made
one change.

> + * Note: items in the middle of attachment start with s_type = 0
> + * (configfs_new_dirent()), and configfs_make_dirent() (called from
> + * create_dir()) sets s_type = CONFIGFS_DIR|CONFIGFS_USET_CREATING. In both
> + * cases the item is ignored. Since s_type is an int, we rely on the CPU to
> + * atomically update the value, without making configfs_make_dirent() take
> + * configfs_dirent_lock.

	I've added configfs_dirent_lock in configfs_make_dirent(),
because it is not safe to rely on s_type being an int.  The store is
atomic on one CPU, but there's no guarantee that other CPUs see it
correctly without a barrier.  Plus, there's no real need for speed
here.  So we properly take configfs_dirent_lock around the s_type
update in configfs_make_dirent(), and that ensures we see things
correctly on SMP.
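
	The shape of that change, again as a sketch rather than the exact
commit (the real configfs_make_dirent() takes more arguments and does
more work; only the locking around the s_type update is shown).  The
lock/unlock pair doubles as the memory barrier that makes the new flags
visible, in order, to anyone walking the tree under the same lock:

	spin_lock(&configfs_dirent_lock);
	sd->s_type = CONFIGFS_DIR | CONFIGFS_USET_CREATING;
	spin_unlock(&configfs_dirent_lock);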

Joel

-- 

"Three o'clock is always too late or too early for anything you
 want to do."
        - Jean-Paul Sartre

Joel Becker
Principal Software Developer
Oracle
E-mail: joel.becker at oracle.com
Phone: (650) 506-8127



