
[Linux-cluster] updatedb lockup



Hi,

on my test cluster (running cluster suite 1.03 as packaged in Debian
Etch), two of the three nodes have locked up in the daily updatedb
cron job.  I noticed this when ls commands on the GFS blocked.  The
blocked (D state) processes on the two nodes are the following:

CMD                         WCHAN
[gfs_glockd]                gfs_glockd
/usr/bin/find / -ignore_rea glock_wait_internal
ls --color=auto             glock_wait_internal

CMD                         WCHAN
/usr/bin/find / -ignore_rea glock_wait_internal
ls --color=auto /mnt        glock_wait_internal
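
A listing like the above can be collected with ps (the exact format
specifiers are an assumption about the procps version at hand):

$ ps -eo state,wchan:30,cmd | grep '^D'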

$ /sbin/cman_tool status
Protocol version: 5.0.1
Config version: 1
Cluster name: pilot
Cluster ID: 3402
Cluster Member: Yes
Membership state: Cluster-Member
Nodes: 3
Expected_votes: 3
Total_votes: 3
Quorum: 2   
Active subsystems: 6
Node name: YYY
Node ID: 1
Node addresses: XXX.XXX.XXX.XXX

$ /sbin/cman_tool services
Service          Name                              GID LID State     Code
Fence Domain:    "default"                           2   3 run       -
[3 1 2]

DLM Lock Space:  "clvmd"                             1   1 run       -
[3 1 2]

DLM Lock Space:  "test"                             13   6 run       -
[2 3 1]

GFS Mount Group: "test"                             14   7 run       -
[2 3 1]

Apart from the node-specific data, the two affected nodes differ in
their "test" LIDs and in the node orderings listed for "default" and
"clvmd".  I didn't try to lock up the third node by touching the GFS
there.
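
Should the developers want glock data, I suppose it could be dumped
with gfs_tool (assuming the lockdump subcommand is available in this
version; /mnt is the GFS mount point seen above):

$ gfs_tool lockdump /mnt > /tmp/glocks.txt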

Could this be some misconfiguration on my side?  If it's a bug, and
this situation is useful to the developers for debugging, I can leave
it alone and provide any requested data.  Otherwise I'll reset the
machines and go on with my experiments.
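
As a workaround afterwards I'd exclude GFS from updatedb, so that the
nightly find never traverses the cluster filesystem; on Debian this
should amount to listing it in PRUNEFS in /etc/updatedb.conf (the
exact variable name and value syntax are an assumption about this
locate version):

PRUNEFS="gfs nfs NFS proc"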
-- 
Regards,
Feri.

