
Re: [Cluster-devel] [PATCH] gfs2: skip dlm_unlock calls in unmount


On Fri, 2012-11-09 at 10:30 -0500, David Teigland wrote:
> On Fri, Nov 09, 2012 at 09:45:17AM +0000, Steven Whitehouse wrote:
> > > +	if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) &&
> > > +	    (!gl->gl_lvb[0] || gl->gl_state != LM_ST_EXCLUSIVE)) {
> > I'm still not happy with using !gl->gl_lvb[0] to determine whether the
> > LVB is in use or not. I think we need a better test, or alternatively
> > just test the lock state, since most locks will be NL anyway before they
> > get to this point in time,
> Yeah, a glock flag indicating the lvb is used would be best, I'll just
> test the lock state.
> This actually brings up another improvement you could make.  Right now
> gfs2 enables the lvb on all locks, even though it only uses it on a small
> minority.  Limiting the lvb to locks that need it would:
> - save 64 bytes of memory for every local lock
>   (32 in gfs2_glock, 32 in dlm_rsb)
> - save 96 bytes of memory for every remote lock
>   (32 in gfs2_glock, 32 in local dlm_rsb, 32 in remote dlm_lkb)
> - save 32 bytes of network message size in many dlm messages
> - save a lot of memcpying of zeros
> - save some recovery time

Yes, although we did consider the best approach to LVBs back at the
start of GFS2 development. The actual overhead didn't seem too great.
The previous implementation had the LVB hanging off the glock via a
pointer, so on 64 bit, that pointer alone was 8 bytes. We also saved
another 4 bytes (plus a further 4 for alignment) by not requiring the
atomic counter. So it seemed not unreasonable to just inline the LVB
into the glock.

Another reason for having it on all glocks was that if we did want to
start making use of it on different glock types in the future, we could
do so without having to worry about whether its value would be preserved
or not. Also, it removed some tests from the fast path of acquiring and
dropping locks.

Trying to reduce the size of the lock requests makes sense if that is
becoming a limiting factor in performance (is it? I'm not sure), so
maybe we should revisit this.

