
Re: [Linux-cluster] Where to go with cman ?



At 10:32 AM 9/13/2005 +0100, you wrote:
>Guochun Shi wrote:
>> Patrick,
>> can you describe how the steps for the CVS version differ from those in usage.txt, in order to make gfs2 work? 
>> 
>
>Very briefly... More doc should be made available soon I hope.
>
>ccsd
>cman_tool join
>modprobe dlm.ko
>modprobe dlm_device.ko
>modprobe lock_harness.ko
>modprobe lock_dlm.ko
>modprobe gfs.ko
>modprobe sctp
>
>groupd
>dlm_controld
>lock_dlmd
>fenced
>fence_tool join
>
>Note that if you want to use clvmd you will need a patch to make it use libcman
>rather than calling directly into the (now non-existent) kernel cman. See attached.
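For reference, the quoted sequence collected into one script. This is only a sketch: the module and daemon names are taken verbatim from the list above, and the DRY_RUN guard is my addition so the order can be checked without touching a live node.

```shell
#!/bin/sh
# Startup order quoted above. DRY_RUN=1 (the default here) only echoes
# each step; unset it on a real cluster node to actually execute them.
: "${DRY_RUN:=1}"
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run ccsd
run cman_tool join
for m in dlm dlm_device lock_harness lock_dlm gfs sctp; do
    run modprobe "$m"
done
run groupd
run dlm_controld
run lock_dlmd
run fenced
run fence_tool join
```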

Thanks for the info. I still cannot get it to work in the simple one-node lock_dlm case:
it hung when I tried to mount.
(lock_nolock works for me, though.)

I've attached the steps I ran, the cluster.conf file, and the log from /var/log/messages.

thanks a lot
-Guochun

------------------------------------------------------------------------------------------------------------------
[root@posic066 cman_tool]# mount -t configfs configfs /config 
[root@posic066 cman_tool]# ccsd
[root@posic066 cman_tool]# cman_tool join -N 1
command line options may override cluster.conf values
[root@posic066 cman_tool]# modprobe dlm
[root@posic066 cman_tool]# modprobe lock_dlml 
FATAL: Module lock_dlml not found.
[root@posic066 cman_tool]# modprobe lock_dlm
[root@posic066 cman_tool]# modprobe gfs
[root@posic066 cman_tool]# modprobe sctp
[root@posic066 cman_tool]# lsmod 
Module                  Size  Used by
sctp                  163164  2 [unsafe]
ipv6                  263904  7 sctp
gfs                   296708  0 
lock_dlm               23544  0 
lock_harness            5544  2 gfs,lock_dlm
dlm                   100036  1 lock_dlm
configfs               26892  2 dlm
nfs                   218856  2 
lockd                  66056  2 nfs
sunrpc                155964  3 nfs,lockd
autofs                 16384  0 
e100                   41476  0 
mii                     5888  1 e100
qla2300               124800  0 
qla2xxx               120792  1 qla2300
scsi_transport_fc      29184  1 qla2xxx
parport_pc             28612  0 
parport                37448  1 parport_pc
[root@posic066 cman_tool]# groupd
[root@posic066 cman_tool]# dlm_controld 
[root@posic066 cman_tool]# lock_dlmd
[root@posic066 cman_tool]# fenced
[root@posic066 cman_tool]# fence_tool join
[root@posic066 cman_tool]# gfs_mkfs -p lock_dlm -t alpha:testfs -j 1 /dev/sdb1
This will destroy any data on /dev/sdb1.
  It appears to contain a GFS filesystem.

Are you sure you want to proceed? [y/n] yes

Device:                    /dev/sdb1
Blocksize:                 4096
Filesystem Size:           1975184
Journals:                  1
Resource Groups:           32
Locking Protocol:          lock_dlm
Lock Table:                alpha:testfs

Syncing...
All Done
[root@posic066 cman_tool]# mount -t gfs /dev/sdb1 /mnt

-----------------------------------------------------------------------------------------------------------------------------------------------
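When a lock_dlm mount hangs like this, it can help to confirm that the node actually reached cluster membership and quorum before mounting. A hedged sketch (the `status` and `nodes` subcommands come from this era's cman_tool; each check is skipped if the tool is absent, so it is safe to paste anywhere):

```shell
#!/bin/sh
# Membership sanity checks; each command is skipped if the tool is
# not installed, so the script succeeds even on a non-cluster box.
check() {
    for c in "cman_tool status" "cman_tool nodes"; do
        if command -v "${c%% *}" >/dev/null 2>&1; then
            echo "== $c =="
            $c
        fi
    done
}
check
```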



The cluster.conf file
--------------------------------------------------------------------------------------------------
<?xml version="1.0"?>
<cluster name="alpha" config_version="1">

<cman>
</cman>

<clusternodes>
<clusternode name="posic066">
        <fence>
                <method name="single">
                        <device name="human" nodename="posic066"/>
                </method>
        </fence>
</clusternode>

</clusternodes>

<fencedevices>
        <fencedevice name="human" agent="fence_manual"/>
</fencedevices>

</cluster>
--------------------------------------------------------------------------------------------------------------------------
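Incidentally, for a one-node test cluster it may be worth setting the vote counts explicitly so the node is quorate on its own. A hedged variant of the file above (`expected_votes` on <cman> and `votes` on <clusternode> are standard cluster.conf attributes, but check them against your cman version before relying on this):

```xml
<?xml version="1.0"?>
<cluster name="alpha" config_version="2">

<cman expected_votes="1">
</cman>

<clusternodes>
<clusternode name="posic066" votes="1">
        <fence>
                <method name="single">
                        <device name="human" nodename="posic066"/>
                </method>
        </fence>
</clusternode>
</clusternodes>

<fencedevices>
        <fencedevice name="human" agent="fence_manual"/>
</fencedevices>

</cluster>
```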





Attachment: gfstest.log.gz
Description: Binary data

