[Linux-cluster] clusvcadm : Could not connect to resource group manager

PARAM KRISH mkparam at gmail.com
Fri Aug 24 10:43:51 UTC 2012


Hi, thanks for the help. I hope we are getting close to the problem.

I enabled logging; this is how my cluster.conf looks:

<?xml version="1.0"?>
<cluster alias="newCluster" config_version="16" name="newCluster">
    <logging debug="on"/>
    <cman expected_votes="1" two_node="1"/>
    <clusternodes>
        <clusternode name="server1" nodeid="1" votes="1">
            <fence>
                <method name="single">
                    <device name="human"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="server2" nodeid="2" votes="1">
            <fence>
                <method name="single">
                    <device name="human"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <fencedevices/>
    <rm>
        <failoverdomains>
            <failoverdomain name="failOver" nofailback="0" ordered="1" restricted="0">
                <failoverdomainnode name="server1" priority="1"/>
                <failoverdomainnode name="server2" priority="2"/>
            </failoverdomain>
        </failoverdomains>
        <resources>
            <ip address="192.168.61.130" monitor_link="1"/>
            <apache config_file="conf/httpd.conf" name="httpd" server_root="/etc/httpd" shutdown_wait="0"/>
        </resources>
        <service autostart="1" domain="failOver" exclusive="1" name="Apache" recovery="relocate">
            <ip address="192.168.61.130" monitor_link="1">
                <apache config_file="conf/httpd.conf" name="Apache" server_root="/etc/httpd" shutdown_wait="0"/>
            </ip>
        </service>
        <service autostart="1" domain="failOver" exclusive="1" name="website" recovery="relocate">
            <ip ref="192.168.61.130">
                <apache ref="httpd"/>
            </ip>
        </service>
    </rm>
    <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
    <logging debug="on"/>
</cluster>
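
(For reference, the configuration can also be sanity-checked offline with rg_test, the same tool I used earlier -- a minimal sketch, assuming the standard rgmanager syntax on RHEL 5, so please double-check against the man page:)

    # parse cluster.conf and print the resource tree it produces
    rg_test test /etc/cluster/cluster.conf

    # simulate starting the Apache service without touching the live cluster
    rg_test noop /etc/cluster/cluster.conf start service Apache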

There is no logging happening in /var/run/cluster/:

[root at server1 ~]# ls /var/run/cluster/
apache  ccsd.pid  ccsd.sock  rgmanager.sk
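
(As far as I understand, /var/run/cluster only holds PID files and the rgmanager socket, not logs; the checks below assume the RHEL 5 defaults, where clurgmgrd logs through syslog and, with <logging> configured, possibly to /var/log/cluster/rgmanager.log:)

    # rgmanager (clurgmgrd) messages normally go through syslog
    grep clurgmgrd /var/log/messages

    # per-daemon log file, if the logging subsystem writes one
    tail -f /var/log/cluster/rgmanager.log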

I started the resource group manager in the foreground, and it says:

failed acquiring lockspace: No such device
Locks not working!
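
(That error suggests rgmanager cannot reach the DLM. A minimal check, assuming the RHEL 5 cman/dlm stack -- these commands are illustrative, not output from my nodes:)

    # is the DLM kernel module loaded?
    lsmod | grep dlm

    # which fence/lockspace groups does cman currently know about?
    cman_tool services

    # load the module by hand if it is missing
    modprobe dlm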

What could I do next?

-Param

On Fri, Aug 24, 2012 at 3:18 PM, emmanuel segura <emi2fast at gmail.com> wrote:

> /etc/init.d/rgmanager start or service rgmanager start
>
>
> 2012/8/24 Heiko Nardmann <heiko.nardmann at itechnical.de>
>
>> It is strange that strace shows that /var/run/cluster/rgmanager.sk is
>> missing.
>>
>> Normally it is helpful to see the complete cluster.conf. Could you
>> provide it?
>>
>> Also of interest is /var/log/cluster/rgmanager.log - do you have debug
>> enabled inside cluster.conf?
>>
>> Maybe it is possible to start rgmanager in the foreground (-f) with
>> strace? That might also be a way to show why the rgmanager.sk is missing
>> ...
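>>
>> Something along these lines, perhaps (clurgmgrd is the rgmanager
>> daemon on RHEL 5; -f keeps it in the foreground and -d enables debug
>> output -- flags assumed from the rgmanager tools, so please
>> double-check the man page):
>>
>>     strace -f -o /tmp/rgmanager.trace clurgmgrd -f -d
>>     grep rgmanager.sk /tmp/rgmanager.trace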
>>
>> Just some ideas ...
>>
>>
>> Kind regards,
>>
>>     Heiko
>>
>> Am 24.08.2012 11:04, schrieb PARAM KRISH:
>>
>>> All,
>>>
>>> I am trying to set up a simple two-node cluster on my laptop using two
>>> RHEL VMs.
>>>
>>> Everything looks fine to me, but I am unable to enable an Apache
>>> service, even though it works beautifully when tried with "rg_test test"
>>> on both nodes.
>>>
>>> What could the problem be? Please help. I am a novice with Red Hat
>>> Cluster, but I have learnt a bit of it over the last few days while
>>> trying to fix all the problems I encountered.
>>>
>>> Here are the details.
>>>
>>> [root at server1 ~]# clustat
>>> Cluster Status for newCluster @ Thu Aug 23 00:29:32 2012
>>> Member Status: Quorate
>>>
>>>  Member Name                 ID   Status
>>>  ------ ----                 ---- ------
>>>  server1                     1 Online, Local
>>>  server2                     2 Online
>>>
>>> [root at server1 ~]# clustat -x
>>> <?xml version="1.0"?>
>>> <clustat version="4.1.1">
>>>   <cluster name="newCluster" id="43188" generation="250536"/>
>>>   <quorum quorate="1" groupmember="0"/>
>>>   <nodes>
>>>     <node name="server1" state="1" local="1" estranged="0" rgmanager="0"
>>> rgmanager_master="0" qdisk="0" nodeid="0x00000001"/>
>>>     <node name="server2" state="1" local="0" estranged="0" rgmanager="0"
>>> rgmanager_master="0" qdisk="0" nodeid="0x00000002"/>  </nodes>
>>> </clustat>
>>>
>>> [root at server2 ~]# clustat
>>> Cluster Status for newCluster @ Thu Aug 23 03:13:34 2012
>>> Member Status: Quorate
>>>
>>>  Member Name                 ID   Status
>>>  ------ ----                 ---- ------
>>>  server1                     1 Online
>>>  server2                     2 Online, Local
>>>
>>> [root at server2 ~]# clustat -x
>>> <?xml version="1.0"?>
>>> <clustat version="4.1.1">
>>>   <cluster name="newCluster" id="43188" generation="250536"/>
>>>   <quorum quorate="1" groupmember="0"/>
>>>   <nodes>
>>>     <node name="server1" state="1" local="0" estranged="0" rgmanager="0"
>>> rgmanager_master="0" qdisk="0" nodeid="0x00000001"/>
>>>     <node name="server2" state="1" local="1" estranged="0" rgmanager="0"
>>> rgmanager_master="0" qdisk="0" nodeid="0x00000002"/>
>>>   </nodes>
>>> </clustat>
>>>
>>>
>>> [root at server2 ~]# clusvcadm -e Apache
>>> Local machine trying to enable service:Apache...Could not connect to
>>> resource group manager
>>>
>>> strace clusvcadm -e Apache
>>> ...
>>> stat64(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 4), ...}) = 0
>>> mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
>>> 0) = 0xb7fb5000
>>> write(1, "Local machine trying to enable s"..., 48Local machine trying
>>> to enable service:Apache...) = 48
>>> socket(PF_FILE, SOCK_STREAM, 0)         = 5
>>> connect(5, {sa_family=AF_FILE, path="/var/run/cluster/rgmanager.sk"...},
>>> 110) = -1 ENOENT (No such file or directory)
>>>
>>> close(5)                                = 0
>>> write(1, "Could not connect to resource gr"..., 44Could not connect to
>>> resource group manager
>>> ) = 44
>>> exit_group(1)                           = ?
>>>
>>>
>>> [root at server1 ~]# hostname
>>> server1.localdomain
>>>
>>> [root at server1 ~]# cat /etc/hosts
>>> # Do not remove the following line, or various programs
>>> # that require network functionality will fail.
>>> #127.0.0.1              server1.localdomain server1
>>> localhost.localdomain localhost
>>> 192.168.61.132 server1.localdomain server1
>>> 192.168.61.133 server2.localdomain server2
>>> ::1             localhost6.localdomain6 localhost6
>>>
>>>
>>> Package versions :
>>> luci-0.12.2-24.el5
>>> ricci-0.12.2-24.el5
>>> rgmanager-2.0.52-9.el5
>>> modcluster-0.12.1-2.el5
>>> cluster-cim-0.12.1-2.el5
>>> system-config-cluster-1.0.57-7
>>> lvm2-cluster-2.02.74-3.el5
>>> cluster-snmp-0.12.1-2.el5
>>>
>>> [root at server1 log]# cman_tool status
>>> Version: 6.2.0
>>> Config Version: 15
>>> Cluster Name: newCluster
>>> Cluster Id: 43188
>>> Cluster Member: Yes
>>> Cluster Generation: 250536
>>> Membership state: Cluster-Member
>>> Nodes: 2
>>> Expected votes: 1
>>> Total votes: 2
>>> Quorum: 1
>>> Active subsystems: 2
>>> Flags: 2node
>>> Ports Bound: 0
>>> Node name: server1
>>> Node ID: 1
>>> Multicast addresses: 239.192.168.93
>>> Node addresses: 192.168.61.132
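>>>
>>> (Side note: "Active subsystems: 2" above can be cross-checked with
>>> cman_tool services, which should list a DLM lockspace for rgmanager
>>> once the daemon has joined -- shown here only as a suggestion, not
>>> as output from my nodes:)
>>>
>>>     cman_tool services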
>>>
>>> Red Hat: Red Hat Enterprise Linux Server release 5.6 (Tikanga),
>>> kernel 2.6.18-238.el5xen
>>>
>>> [root at server1 log]# service rgmanager status
>>> clurgmgrd (pid  9775) is running...
>>>
>>> [root at server1 log]# netstat -na | grep 11111
>>> tcp        0      0 0.0.0.0:11111
>>> 0.0.0.0:*                   LISTEN
>>>
>>>
>>> Please let me know if you can help. One thing I noticed is that
>>> "clustat" does not show "rgmanager" against either node, even though I
>>> can see the service itself is running fine.
>>>
>>> Note: no iptables, and SELinux is not enabled.
>>>
>>> I hope I have given all the details required to help me quickly. Thanks.
>>>
>>> -Param
>>>
>>>
>>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>
>
>
> --
> this is my life and I live it as long as God wills
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>