[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] Locks reported by gfs_tool lockdump does not match that presented in dlm_locks. Any reason??



--- Wendy Cheng <s wendy cheng gmail com> wrote:

> Ja S wrote:
> > Hi, All:
> >
> > For a given lock space, at the same time, I saved
> > a copy of the output of "gfs_tool lockdump" as
> > "gfs_locks" and a copy of dlm_locks.
> >
> > Then I checked the locks present in the two saved
> > files. I realized that the number of locks in
> > gfs_locks is not the same as the number presented
> > in dlm_locks.
> >
> > For instance,
> > From dlm_locks:
> > 9980 NL locks, where
> > --7984 locks are from remote nodes
> > --0 locks are on remote nodes
> > --1996 locks are processed on its own master lock
> > resources
> > 0 CR locks, where
> > --0 locks are from remote nodes
> > --0 locks are on remote nodes
> > --0 locks are processed on its own master lock
> > resources
> > 0 CW locks, where
> > --0 locks are from remote nodes
> > --0 locks are on remote nodes
> > --0 locks are processed on its own master lock
> > resources
> > 1173 PR locks, where
> > --684 locks are from remote nodes
> > --32 locks are on remote nodes
> > --457 locks are processed on its own master lock
> > resources
> > 0 PW locks, where
> > --0 locks are from remote nodes
> > --0 locks are on remote nodes
> > --0 locks are processed on its own master lock
> > resources
> > 47 EX locks, where
> > --46 locks are from remote nodes
> > --0 locks are on remote nodes
> > --1 locks are processed on its own master lock
> > resources
> >
> > In summary,
> > 11200 locks in total, where
> > -- 8714 locks are from remote nodes (entries with
> > "Remote:")
> > -- 32 locks are on remote nodes (entries with
> > "Master:")
> > -- 2454 locks are processed on its own master lock
> > resources (entries with only lock ID and lock
> > mode)
> >
> > These locks are all in the granted queue. There is
> > nothing under the conversion and waiting queues.
> > ======================================
> >
> > From gfs_locks, there are 2932 locks in total
> > (grep '^Glock' and count the entries). Then for
> > each Glock I got the second number, which is the
> > ID of a lock resource, and searched for that ID in
> > dlm_locks. I then split the search results into
> > two groups as shown below:
> > --46 locks are associated with local copies of
> > master lock resources on remote nodes
> > --2886 locks are associated with master lock
> > resources on the node itself
> >
> >
> > ======================================
> > Now, I tried to find the relationship between the
> > five numbers from the two sources but got nowhere.
> > Dlm_locks:
> > -- 8714 locks are from remote nodes 
> > -- 32 locks are on remote nodes
> > -- 2454 locks are processed on its own master lock
> > resources 
> > Gfs_locks:
> > --46 locks are associated with local copies of
> > master lock resources on remote nodes
> > --2886 locks are associated with master lock
> > resources on the node itself
> >
> > Can anyone kindly point out the relationships
> > between the numbers of locks presented in
> > dlm_locks and gfs_locks?
> >
> >
> > Thanks for your time in reading this long
> > question, and I look forward to your help.
> >
> >   
> I doubt this will help much from a practical point
> of view; understanding how to run Oprofile and/or
> SystemTap will probably help you more in the long
> run. However, if you want to know, the following is
> why they are different:
> 
> GFS locking is controlled by a subsystem called
> "glock". Glock is designed to run and interact with
> *different* distributed lock managers; e.g. in RHEL
> 3, other than DLM, it also works with another lock
> manager called "GULM". Only active locks have a
> one-to-one correspondence with the lock entities
> inside the lock manager. If a glock is in the
> UNLOCKED state, the lock manager may or may not
> still have the subject lock in its records - such
> locks are subject to being purged under memory
> and/or resource pressure. The other way around is
> also true: a lock may exist in the lock manager's
> database but have already been removed from the
> glock subsystem. Glock itself doesn't know about
> the cluster configuration, so it relies on the
> external lock manager to do inter-node
> communication. On the other hand, it carries out
> some other functions, such as flushing data to disk
> when a glock is demoted from exclusive (write) to
> shared (read).
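That explains it. If only active glocks map one-to-one onto lock-manager entries, a rough way to cross-check would be to count only the glocks that are not unlocked. A sketch of what I mean, using a made-up dump fragment (the "gl_state = N" line format is my assumption from my saved copies; real lockdump output carries many more fields per glock):

```shell
#!/bin/sh
# Sketch: count glocks per state in a saved "gfs_tool lockdump" copy.
# The fragment below is hypothetical; real dumps have many more fields.
cat > /tmp/gfs_locks.sample <<'EOF'
Glock (2, 25)
  gl_flags =
  gl_state = 3
Glock (5, 17)
  gl_flags =
  gl_state = 0
Glock (2, 99)
  gl_flags =
  gl_state = 1
EOF

total=$(grep -c '^Glock' /tmp/gfs_locks.sample)
# gl_state = 0 means the glock is unlocked; per the explanation above,
# only the others are guaranteed a counterpart in the lock manager.
unlocked=$(grep -c 'gl_state = 0' /tmp/gfs_locks.sample)
active=$((total - unlocked))
echo "total=$total unlocked=$unlocked active=$active"
```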


Thanks for the explanation. It is very helpful.
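
For the archives, the cross-reference I described in my original post can be sketched like this. Both sample fragments below are made up for illustration, and the way dlm_locks encodes the glock type and number in its resource names is my assumption; the real format differs between versions:

```shell
#!/bin/sh
# Sketch: pull the second number out of each "Glock (type, number)"
# line and look it up in the saved dlm_locks copy.
cat > /tmp/gfs_locks.sample <<'EOF'
Glock (2, 25)
Glock (5, 17)
Glock (2, 31)
EOF
cat > /tmp/dlm_locks.sample <<'EOF'
Resource "       2              25"  (master)
Resource "       5              17"  (local copy, master on node 3)
EOF

matched=0
for num in $(sed -n 's/^Glock (\([0-9]*\), \([0-9]*\)).*/\2/p' /tmp/gfs_locks.sample); do
    # Count the glocks whose number shows up in a dlm_locks resource name.
    if grep -q " $num\"" /tmp/dlm_locks.sample; then
        matched=$((matched + 1))
    fi
done
echo "glocks matched in dlm_locks: $matched"
```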

Jas


> -- Wendy
> 
> --
> Linux-cluster mailing list
> Linux-cluster redhat com
>
https://www.redhat.com/mailman/listinfo/linux-cluster
> 




