
Re: [Cluster-devel] GFS2: Use list_lru for quota lru list



On Fri, Sep 13, 2013 at 12:44:03PM +0100, Steven Whitehouse wrote:
> 
> This patch is a first go (only compile tested so far) at what list_lru
> support should look like for GFS2's quota subsystem. I've done this one
> rather than the glock lru list first, since this is easier. There are
> still some unresolved issues on the glock side with how to use the new
> list_lru code.
> 
> The conversion is fairly straight forward in this case though, just
> requiring an extra function to conditionally add items to the lru
> list upon an atomic counter hitting zero. The other change is that
> the locking as split into two, one the internal lock in the lru code,
> and the other being a new quota spinlock, which is used for all the
> non-lru locking.

I'm not sure I really like the concept of adding reference count
futzing into the list_lru operations. The whole point of the
internal locking of the list_lru is that it is independent of the
objects being placed on the LRUs. That is, objects cannot use the
internal LRU list locks for serialisation in any manner as it is
purely there for correctness of LRU list internals, not for external
object lifecycle management.

More below.

<trimmed to just be the new code block>

> +static enum lru_status gfs2_qd_walk(struct list_head *item, spinlock_t *lock, void *arg)
>  {
>  	struct gfs2_quota_data *qd;
>  	struct gfs2_sbd *sdp;
>  
> +	qd = list_entry(item, struct gfs2_quota_data, qd_reclaim);
> +	sdp = qd->qd_gl->gl_sbd;
>  
> +	spin_lock(&qd_lock);
> +	if (atomic_read(&qd->qd_count)) {
> +		spin_unlock(&qd_lock);
> +		return LRU_SKIP;
> +	}

The LRU locks are considered "innermost" so you cannot take other
locks inside the lru locks like this. So that needs to be a
spin_trylock()....
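
Something like this (just a sketch) at the top of the isolation
callback, where qd_lock is the new quota spinlock from your patch:

	/* The LRU lock is already held by the walker; only trylock here */
	if (!spin_trylock(&qd_lock))
		return LRU_SKIP;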

>  
> +	/* Free from the filesystem-specific list */
> +	list_del(&qd->qd_list);

LRU lists require list_del_init()

> +	spin_unlock(&qd_lock);
> +	spin_unlock(lock);

You dropped the LRU lock, and hence can only return LRU_RETRY at
this point to indicate that the LRU traversal needs to be restarted.

> +	gfs2_assert_warn(sdp, !qd->qd_change);
> +	gfs2_assert_warn(sdp, !qd->qd_slot_count);
> +	gfs2_assert_warn(sdp, !qd->qd_bh_count);
>  
> +	gfs2_glock_put(qd->qd_gl);
> +	atomic_dec(&sdp->sd_quota_count);
>  
> +	/* Delete it from the common reclaim list */
> +	kmem_cache_free(gfs2_quotad_cachep, qd);

This callback is not intended to be used directly for freeing
objects. The idea is that you pass a dispose list to the callback
and mark items as "on the dispose list" when you move them to it,
so you don't need to drop the LRU lock. Then when the scan returns,
you walk the dispose list and free all the objects...
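
As a rough sketch (the gfs2_qd_isolate name is made up, the fields
and locks are the ones from your patch), with the dispose list
passed in via the cb_arg pointer:

static enum lru_status gfs2_qd_isolate(struct list_head *item,
				       spinlock_t *lru_lock, void *arg)
{
	struct list_head *dispose = arg;
	struct gfs2_quota_data *qd =
		list_entry(item, struct gfs2_quota_data, qd_reclaim);

	if (!spin_trylock(&qd_lock))
		return LRU_SKIP;

	if (atomic_read(&qd->qd_count)) {
		spin_unlock(&qd_lock);
		return LRU_SKIP;
	}

	/* Remove from the filesystem-specific list... */
	list_del(&qd->qd_list);
	/* ...and move from the LRU to the dispose list without ever
	 * dropping the LRU lock */
	list_move(&qd->qd_reclaim, dispose);

	spin_unlock(&qd_lock);
	return LRU_REMOVED;
}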

> +	spin_lock(lock);
> +
> +	return LRU_REMOVED;
> +}
> +
> +unsigned long gfs2_qd_shrink_scan(struct shrinker *shrink,
> +				  struct shrink_control *sc)
> +{
> +	if (!(sc->gfp_mask & __GFP_FS))
> +		return SHRINK_STOP;
> +
> +	return list_lru_walk_node(&gfs2_qd_lru, sc->nid, gfs2_qd_walk, NULL,
> +				  &sc->nr_to_scan);

i.e. here is where you'd dispose of all the items isolated from the
LRU in the isolation callback.
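
i.e. something along these lines (again just a sketch, reusing the
hypothetical gfs2_qd_isolate callback from above):

unsigned long gfs2_qd_shrink_scan(struct shrinker *shrink,
				  struct shrink_control *sc)
{
	LIST_HEAD(dispose);
	struct gfs2_quota_data *qd, *next;
	unsigned long freed;

	if (!(sc->gfp_mask & __GFP_FS))
		return SHRINK_STOP;

	freed = list_lru_walk_node(&gfs2_qd_lru, sc->nid, gfs2_qd_isolate,
				   &dispose, &sc->nr_to_scan);

	/* Free everything we isolated, with no LRU lock held */
	list_for_each_entry_safe(qd, next, &dispose, qd_reclaim) {
		struct gfs2_sbd *sdp = qd->qd_gl->gl_sbd;

		list_del(&qd->qd_reclaim);
		gfs2_glock_put(qd->qd_gl);
		atomic_dec(&sdp->sd_quota_count);
		kmem_cache_free(gfs2_quotad_cachep, qd);
	}

	return freed;
}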

>  static u64 qd2index(struct gfs2_quota_data *qd)
> @@ -177,15 +180,9 @@ static int qd_get(struct gfs2_sbd *sdp, struct kqid qid,
>  
>  	for (;;) {
>  		found = 0;
> +		spin_lock(&qd_lock);
>  		list_for_each_entry(qd, &sdp->sd_quota_list, qd_list) {
>  			if (qid_eq(qd->qd_id, qid)) {
>  				atomic_inc(&qd->qd_count);
>  				found = 1;
>  				break;
> @@ -202,9 +199,10 @@ static int qd_get(struct gfs2_sbd *sdp, struct kqid qid,
>  			new_qd = NULL;
>  		}
>  
> +		spin_unlock(&qd_lock);
>  
>  		if (qd) {
> +			list_lru_del(&gfs2_qd_lru, &qd->qd_reclaim);
>  			if (new_qd) {
>  				gfs2_glock_put(new_qd->qd_gl);
>  				kmem_cache_free(gfs2_quotad_cachep, new_qd);
> @@ -228,12 +226,7 @@ static void qd_hold(struct gfs2_quota_data *qd)
>  
>  static void qd_put(struct gfs2_quota_data *qd)
>  {
> -	if (atomic_dec_and_lock(&qd->qd_count, &qd_lru_lock)) {
> -		/* Add to the reclaim list */
> -		list_add_tail(&qd->qd_reclaim, &qd_lru_list);
> -		atomic_inc(&qd_lru_count);
> -		spin_unlock(&qd_lru_lock);
> -	}
> +	list_lru_add_on_zero(&gfs2_qd_lru, &qd->qd_reclaim, &qd->qd_count);

Why do you need to take the LRU lock atomically here?

It appears to me that you have a situation where the reference count
of zero means "on the LRU list", but there's nothing in qd_get()
that enforces that LRU removal only occurs for the first reference
that is taken. Indeed, why do you even need to remove the item from
the LRU list when you get a reference to it? You skip referenced
dquots in the isolation callback, so the only time it needs to be
removed from the LRU is on reclaim. And that means you only need an
atomic_dec_and_test() to determine if you need to add the dquot to
the LRU....
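
i.e. qd_put() could end up as simple as this (sketch, untested):

static void qd_put(struct gfs2_quota_data *qd)
{
	if (atomic_dec_and_test(&qd->qd_count))
		/* list_lru_add() takes the internal LRU lock itself and
		 * is a no-op if the object is already on the list */
		list_lru_add(&gfs2_qd_lru, &qd->qd_reclaim);
}

and qd_get() shouldn't need the list_lru_del() call at all.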

So it appears to me that what you need to do is:

	a) separate the qd_lru_lock -> qd_lock changes into a
	separate patch

	b) separate the object reference counting from the LRU
	operations

	c) make the LRU operations the innermost operations for
	locking purposes

	d) convert to list_lru operations...

Cheers,

Dave.
-- 
Dave Chinner
dchinner redhat com

