[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Cluster-devel] gfs_controld doesn't clean up tables properly



gfs_controld debugging:

root node1:~# gfs_controld -P -D
1193643134 listen 3
1193643134 cpg 6
1193643134 groupd 8
1193643134 uevent 9
1193643134 plocks 12
1193643134 plock cpg message size: 336 bytes
1193643134 setup done
1193643157 client 6: join /mnt/gfs gfs lock_dlm gutsy:gfs rw /dev/etherd/e0.0
1193643157 mount: /mnt/gfs gfs lock_dlm gutsy:gfs rw /dev/etherd/e0.0
1193643157 gfs cluster name matches: gutsy
1193643157 gfs do_mount: rv 0
1193643157 groupd cb: set_id gfs 10001
1193643157 groupd cb: start gfs type 2 count 1 members 1
1193643157 gfs start 3 init 1 type 2 member_count 1
1193643157 gfs add member 1
1193643157 gfs total members 1 master_nodeid -1 prev -1
1193643157 gfs start_first_mounter
1193643157 gfs start_done 3
1193643157 notify_mount_client: nodir not found for lockspace gfs
1193643157 notify_mount_client: cmanconf_free_conf
1193643157 notify_mount_client: cman_finish
1193643157 notify_mount_client: hostdata=jid=0:id=65537:first=1
1193643157 groupd cb: finish gfs
1193643157 gfs finish 3 needs_recovery 0
1193643157 gfs set /sys/fs/gfs/gutsy:gfs/lock_module/block to 0
1193643157 gfs set open /sys/fs/gfs/gutsy:gfs/lock_module/block error -1 2
1193643157 kernel: add@ gutsy:gfs
1193643157 gfs ping_kernel_mount 0
1193643158 kernel: change@ gutsy:gfs
1193643158 gfs kernel_recovery_done_first first_done 0
1193643158 kernel: change@ gutsy:gfs
1193643158 gfs kernel_recovery_done_first first_done 0
1193643158 kernel: change@ gutsy:gfs
1193643158 gfs kernel_recovery_done_first first_done 0
1193643158 kernel: change@ gutsy:gfs
1193643158 gfs kernel_recovery_done_first first_done 0
1193643158 kernel: change@ gutsy:gfs
1193643158 gfs kernel_recovery_done_first first_done 0
1193643158 kernel: change@ gutsy:gfs
1193643158 gfs kernel_recovery_done_first first_done 1
1193643158 kernel: change@ gutsy:gfs
1193643158 gfs recovery_done jid 5 ignored, first 1,1
1193643158 gfs receive_recovery_done from 1 needs_recovery 0
1193643158 gfs set /sys/fs/gfs/gutsy:gfs/lock_module/block to 0
1193643158 client 6: mount_result /mnt/gfs gfs 0
1193643158 gfs got_mount_result: ci 6 result 0 another 0 first_mounter 1 opts 9
1193643158 gfs send_mount_status kernel_mount_error 0 first_mounter 1
1193643158 client 6 fd 13 dead
1193643158 client 6 fd -1 dead
1193643158 gfs receive_mount_status from 1 len 288 last_cb 3
1193643158 gfs _receive_mount_status from 1 kernel_mount_error 0 first_mounter 1 opts 9
1193643284 kernel: remove@ gutsy:gfs
1193643284 gfs get open /sys/fs/gfs/gutsy:gfs/lock_module/id error -1 2
1193643284 gfs ping_kernel_mount -1
1193643331 client 6: join /mnt/gfs gfs lock_dlm gutsy:gfs rw /dev/etherd/e0.0
1193643331 mount: /mnt/gfs gfs lock_dlm gutsy:gfs rw /dev/etherd/e0.0
1193643331 gfs add_another_mountpoint dir /mnt/gfs dev /dev/etherd/e0.0 ci 6
1193643331 mount point /mnt/gfs already used
1193643331 gfs do_mount: rv -16
1193643331 client 6 fd 13 dead
1193643331 client 6 fd -1 dead

David, just to avoid confusion: I discovered this problem while testing the
noccs branch (hence the strange log entries), but I can reproduce it in exactly
the same way with a clean CVS checkout from HEAD.

Fabio

-- 
I'm going to make him an offer he can't refuse.
