[Linux-cluster] re-post
AKIN ÖZTOPUZ
akinoztopuz at yahoo.com
Tue Jul 17 11:12:47 UTC 2012
Thanks, Emmanuel.
I will try and let you know.
________________________________
From: emmanuel segura <emi2fast at gmail.com>
To: AKIN ÖZTOPUZ <akinoztopuz at yahoo.com>
Sent: Tuesday, July 17, 2012 2:09 PM
Subject: Re: [Linux-cluster] re-post
yes with _netdev option
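For reference, a minimal sketch of what such /etc/fstab entries could look like (device and mountpoint names are taken from the cluster.conf quoted below; the exact option set here is an assumption, not something stated in the thread):

```
# /etc/fstab on each node: mount the shared GFS2 volumes at boot.
# _netdev defers the mount until networking (and therefore iSCSI) is up.
/dev/mapper/SAPClusterVG_d7-SAPClusterLV_d7  /usr/sap/PRO/ASCS01  gfs2  defaults,_netdev  0 0
/dev/mapper/SAPClusterVG_b2-SAPClusterLV_b2  /oracle              gfs2  defaults,_netdev  0 0
```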
2012/7/17 AKIN ÖZTOPUZ <akinoztopuz at yahoo.com>
You mean that the shared file systems will be in each node's fstab instead of being added as cluster services?
>
>
> From: emmanuel segura <emi2fast at gmail.com>
>To: AKIN ÖZTOPUZ <akinoztopuz at yahoo.com>
>Sent: Tuesday, July 17, 2012 1:26 PM
>Subject: Re: [Linux-cluster] re-post
>
>
>When I use GFS, I use /etc/fstab to mount it.
>
>
>2012/7/17 AKIN ÖZTOPUZ <akinoztopuz at yahoo.com>
>
Could you please clarify your comments, for my understanding?
>>
>>GFS is used for the clustered file system.
>>There are two services, which include the Oracle and SAP mount points.
>>
>>
>> From: emmanuel segura <emi2fast at gmail.com>
>>To: AKIN ÖZTOPUZ <akinoztopuz at yahoo.com>; linux clustering <linux-cluster at redhat.com>
>>Sent: Tuesday, July 17, 2012 10:40 AM
>>Subject: Re: [Linux-cluster] re-post
>>
>>
>>If you have a failover service, why do you use gfs2?
>>
>>
>>2012/7/17 AKIN ÖZTOPUZ <akinoztopuz at yahoo.com>
>>
 Hello,
>>>
>>>I am sending my post again: has anybody come across this issue before?
>>>
>>>
>>> I have a 2-node cluster without a quorum disk.
>>>I saw a problem when I moved the services to the other node.
>>>The disk layout is iSCSI.
>>>I think the problem is related to GFS.
>>>When I stop the service on node1, the related file systems (included in the service) are unmounted from that node; but when I then try to mount one of them on node2 manually, I get a message about the resource being busy.
>>>[root@clsn2 ~]# mount -t gfs2 /dev/mapper/SAPClusterVG_d7-SAPClusterLV_d7 /usr/sap/PRO/ASCS01
>>>/sbin/mount.gfs2: /dev/mapper/SAPClusterVG_d7-SAPClusterLV_d7 already mounted or /usr/sap/PRO/ASCS01 busy
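When mount.gfs2 reports "already mounted or busy", a few generic checks can narrow down which side is actually busy. This is only a diagnostic sketch: the device and mountpoint names are the ones from the error above, and fuser/dmsetup may not be installed everywhere.

```shell
#!/bin/sh
# Names taken from the error message in this thread.
DEV=/dev/mapper/SAPClusterVG_d7-SAPClusterLV_d7
MNT=/usr/sap/PRO/ASCS01

# 1. Is the device or mountpoint already in the local mount table,
#    perhaps mounted under a different path?
mount | grep -e "$DEV" -e "$MNT" || echo "not in the local mount table"

# 2. Is any local process keeping the mountpoint busy?
if command -v fuser >/dev/null 2>&1; then
    fuser -vm "$MNT" 2>&1 || echo "no process is holding $MNT"
else
    echo "fuser not installed (package psmisc)"
fi

# 3. Is the device-mapper device still open, e.g. from a stop that
#    did not fully release the resource?
if command -v dmsetup >/dev/null 2>&1; then
    dmsetup info "${DEV##*/}" 2>/dev/null | grep -i 'open count' \
        || echo "dm device not present on this host"
else
    echo "dmsetup not installed"
fi
```

It is also worth confirming that rgmanager has completely finished stopping the service on node1 before attempting the manual mount on node2.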
>>>
>>>Do you have any ideas?
>>>cluster.conf is below:
>>><?xml version="1.0"?>
>>><cluster alias="testsapcluster" config_version="197" name="testsapcluster">
>>><fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
>>><clusternodes>
>>><clusternode name="clsn1.eda.com" nodeid="1" votes="1">
>>><fence>
>>><method name="1">
>>><device name="fence_node1"/>
>>></method>
>>></fence>
>>></clusternode>
>>><clusternode name="clsn2.eda.com" nodeid="2" votes="1">
>>><fence>
>>><method name="1">
>>><device name="fence_node2"/>
>>></method>
>>></fence>
>>></clusternode>
>>></clusternodes>
>>><cman expected_votes="1" two_node="1"/>
>>><fencedevices>
>>><fencedevice agent="fence_ilo" hostname="iloclsnode1" login="clsfenceadmin" name="ClsNode1Fence" passwd="***********"/>
>>><fencedevice agent="fence_ilo" hostname="iloclsnode2" login="clsfenceadmin" name="ClsNode2Fence" passwd="************"/>
>>><fencedevice agent="fence_ipmilan" ipaddr="192.168.11.68" login="clsfenceadmin" name="IPMI-Node1" passwd="**********"/>
>>><fencedevice agent="fence_ipmilan" ipaddr="192.168.11.67" login="clsfenceadmin" name="IPMI-Node2" passwd="**********"/>
>>><fencedevice agent="fence_ipmilan" ipaddr="10.34.1.68" login="clsfenceadmin" name="IPMI_1" passwd="********"/>
>>><fencedevice agent="fence_ipmilan" ipaddr="10.34.1.67" login="clsfenceadmin" name="IPMI_2" passwd="********"/>
>>><fencedevice agent="fence_ipmilan" ipaddr="192.168.11.68" lanplus="1" login="clsfenceadmin" method="cycle" name="fence_node1" passwd="*******" power_wait="4"/>
>>><fencedevice agent="fence_ipmilan" ipaddr="192.168.11.67" lanplus="1" login="clsfenceadmin" method="cycle" name="fence_node2" passwd="*******" power_wait="4"/>
>>></fencedevices>
>>><rm log_level="7">
>>><failoverdomains>
>>><failoverdomain name="sapfailover" nofailback="0" ordered="1" restricted="0">
>>><failoverdomainnode name="clsn1.eda.com" priority="1"/>
>>><failoverdomainnode name="clsn2.eda.com" priority="1"/>
>>></failoverdomain>
>>></failoverdomains>
>>><resources>
>>><ip address="10.34.1.111" monitor_link="1"/>
>>><ip address="10.34.1.246" monitor_link="0"/>
>>><ip address="10.34.1.247" monitor_link="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_d7-SAPClusterLV_d7" force_unmount="1" fsid="1689" fstype="gfs2" mountpoint="/usr/sap/PRO/ASCS01" name="/usr/sap/PRO/ASCS01" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_b2-SAPClusterLV_b2" force_unmount="1" fsid="52296" fstype="gfs2" mountpoint="/oracle" name="/oracle" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_b3-SAPClusterLV_b3" force_unmount="1" fsid="25486" fstype="gfs2" mountpoint="/oracle/client" name="/oracle/client" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_b5-SAPClusterLV_b5" force_unmount="1" fsid="5895" fstype="gfs2" mountpoint="/oracle/stage" name="/oracle/stage" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_b6-SAPClusterLV_b6" force_unmount="1" fsid="19741" fstype="gfs2" mountpoint="/oracle/PRO" name="/oracle/PRO" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_b7-SAPClusterLV_b7" force_unmount="1" fsid="6452" fstype="gfs2" mountpoint="/oracle/PRO/112_64" name="/oracle/PRO/112_64" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_b8-SAPClusterLV_b8" force_unmount="1" fsid="40841" fstype="gfs2" mountpoint="/oracle/PRO/origlogA" name="/oracle/PRO/origlogA" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_b9-SAPClusterLV_b9" force_unmount="1" fsid="52787" fstype="gfs2" mountpoint="/oracle/PRO/origlogB" name="/oracle/PRO/origlogB" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_c5-SAPClusterLV_c5" force_unmount="1" fsid="22219" fstype="gfs2" mountpoint="/oracle/PRO/sapdata1" name="/oracle/PRO/sapdata1" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_b10-SAPClusterLV_b10" force_unmount="1" fsid="47722" fstype="gfs2" mountpoint="/oracle/PRO/mirrlogA" name="/oracle/PRO/mirrlogA" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_c6-SAPClusterLV_c6" force_unmount="1" fsid="1905" fstype="gfs2" mountpoint="/oracle/PRO/sapdata2" name="/oracle/PRO/sapdata2" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_c1-SAPClusterLV_c1" force_unmount="1" fsid="60368" fstype="gfs2" mountpoint="/oracle/PRO/mirrlogB" name="/oracle/PRO/mirrlogB" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_c7-SAPClusterLV_c7" force_unmount="1" fsid="14311" fstype="gfs2" mountpoint="/oracle/PRO/sapdata3" name="/oracle/PRO/sapdata3" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_c2-SAPClusterLV_c2" force_unmount="1" fsid="8037" fstype="gfs2" mountpoint="/oracle/PRO/oraarch" name="/oracle/PRO/oraarch" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_c8-SAPClusterLV_c8" force_unmount="1" fsid="41540" fstype="gfs2" mountpoint="/oracle/PRO/sapdata4" name="/oracle/PRO/sapdata4" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_c3-SAPClusterLV_c3" force_unmount="1" fsid="23164" fstype="gfs2" mountpoint="/oracle/PRO/sapreorg" name="/oracle/PRO/sapreorg" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_c9-SAPClusterLV_c9" force_unmount="1" fsid="37586" fstype="gfs2" mountpoint="/oracle/PRO/sapdata5" name="/oracle/PRO/sapdata5" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_d1-SAPClusterLV_d1" force_unmount="1" fsid="61050" fstype="gfs2" mountpoint="/software" name="/software" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_d2-SAPClusterLV_d2" force_unmount="1" fsid="45919" fstype="gfs2" mountpoint="/saptmp" name="/saptmp" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_d3-SAPClusterLV_d3" force_unmount="1" fsid="56812" fstype="gfs2" mountpoint="/usr/sap/PRO" name="/usr/sap/PRO" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_d5-SAPClusterLV_d5" force_unmount="1" fsid="47829" fstype="gfs2" mountpoint="/usr/sap/DAA" name="/usr/sap/DAA" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_d6-SAPClusterLV_d6" force_unmount="1" fsid="1394" fstype="gfs2" mountpoint="/usr/sap/hostctrl" name="/usr/sap/hostctrl" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_d8-SAPClusterLV_d8" force_unmount="1" fsid="33058" fstype="gfs2" mountpoint="/usr/sap/PRO/DVEBMGS00" name="/usr/sap/PRO/DVEBMGS00" self_fence="0"/>
>>><clusterfs device="/dev/mapper/SAPClusterVG_b1-SAPClusterLV_b1" force_unmount="0" fsid="1822" fstype="gfs2" mountpoint="/sapmnt/PRO" name="/sapmnt/PRO" self_fence="0"/>
>>><SAPInstance DIR_EXECUTABLE="/usr/sap/PRO/ASCS01/exe" DIR_PROFILE="/usr/sap/PRO/SYS/profile" InstanceName="PRO_ASCS01_sapproascs" START_PROFILE="START_ASCS01_sapproascs"/>
>>><SAPDatabase DBTYPE="ORA" DIR_EXECUTABLE="/usr/sap/PRO/ASCS01/exe" NETSERVICENAME="LISTENER" SID="PRO"/>
>>></resources>
>>><service autostart="0" domain="sapfailover" exclusive="0" name="DB">
>>><clusterfs fstype="gfs" ref="/oracle/PRO"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/112_64"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/origlogA"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/origlogB"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/mirrlogA"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/mirrlogB"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/oraarch"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/sapreorg"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/sapdata1"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/sapdata2"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/sapdata3"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/sapdata4"/>
>>><clusterfs fstype="gfs" ref="/oracle/PRO/sapdata5"/>
>>><ip ref="10.34.1.247"/>
>>></service>
>>><service autostart="0" domain="sapfailover" exclusive="1" name="sap">
>>><ip ref="10.34.1.246"/>
>>><clusterfs ref="/usr/sap/PRO/ASCS01"/>
>>></service>
>>></rm>
>>></cluster>
>>>--
>>>Linux-cluster mailing list
>>>Linux-cluster at redhat.com
>>>https://www.redhat.com/mailman/listinfo/linux-cluster
>>>
>>
>>
>>--
>>this is my life and I live it as long as God wills