
[Cluster-devel] Re: [PATCH 1/2] NLM failover unlock commands



Neil Brown wrote:
On Tuesday January 15, wcheng@redhat.com wrote:
I don't feel comfortable changing the existing code structure, especially a BUG() statement. It would be better to keep the lock failover function separate from the lockd code clean-up; that makes it easier to isolate problems (just in case).

Fair enough.

On the other hand, if we view "ret" as a file count that tells us how many files failed to get unlocked, it would be great for debugging purposes. So the changes I made (currently in the middle of cluster testing) end up like this:

	if (likely(is_failover_file == NULL) ||
	    is_failover_file(data, file)) {
		/*
		 * Note that nlm_inspect_file updates f_locks
		 * and ret is the number of files that can't
		 * be unlocked.
		 */
		ret += nlm_inspect_file(data, file, match);
	} else
		file->f_locks = nlm_file_inuse(file);

Looks good.
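
For context on the snippet above: is_failover_file() is just a predicate over (data, file). A minimal, purely illustrative sketch of such a callback, assuming the failover request passes the super_block of the export being dropped through "data" (the name and the field chain here are assumptions, not the actual patch):

	/*
	 * Hypothetical example only -- not the callback from the patch.
	 * Returns non-zero when "file" lives on the super_block that is
	 * being failed over, i.e. the one handed in through "data".
	 */
	static int is_failover_file(void *data, struct nlm_file *file)
	{
		struct super_block *sb = data;

		return file->f_file->f_path.dentry->d_sb == sb;
	}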

 			mutex_lock(&nlm_file_mutex);
 			file->f_count--;
 			/* No more references to this file. Let go of it. */
-			if (list_empty(&file->f_blocks) && !file->f_locks
+			if (!file->f_locks && list_empty(&file->f_blocks)
Is this change actually achieving something, or is it just noise?
Not really - but I thought checking f_locks first would be faster (a tiny bit of optimization :))

You don't want to put the fastest operation first.  You want the one
that is most likely to fail (as it is an '&&' where failure aborts the
sequence).
So you would need some argument (preferably with measurements) to say
which of the conditions will fail most often, before rearranging as an
optimisation.
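
As a generic illustration of that point (hypothetical helpers, not lockd code): with '&&' the whole expression is abandoned at the first operand that evaluates false, so the operand that is false most often belongs first, regardless of which one is cheaper to compute.

	/*
	 * Hypothetical helpers: cheap_test() is fast but almost always
	 * true; rare_test() is slower but almost always false.
	 */

	/* Good ordering: rare_test() short-circuits most evaluations,
	 * so cheap_test() hardly ever runs. */
	if (rare_test(file) && cheap_test(file))
		do_release(file);

	/* Poor ordering: cheap_test() almost never aborts the sequence,
	 * so rare_test() still runs on nearly every pass. */
	if (cheap_test(file) && rare_test(file))
		do_release(file);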


Yep .. now that I really think about it, my bad. I have reset it back ... Wendy

