
Re: [Linux-cluster] RHCS simple ip-failover problem



Glad you've got it working. You may have all of these in place now, but
you'll need the following services (and they should start in this order):

cman
clvmd - If using the cluster-aware Logical Volume Manager
gfs2 - Unless you only mount GFS2 on demand, or don't use GFS2 at all.
ricci - To distribute cluster configuration files between nodes
rgmanager - Resource manager
luci - If you want the web configuration interface (should only run on
one node)

That said, ricci and rgmanager aren't dependent on each other, so they
could be swapped in the above order.
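On RHEL 6 these are SysV init services, so a minimal sketch of bringing them up in order might look like the following (an illustration only: it echoes the commands rather than running them, and assumes all components are installed; drop clvmd/gfs2/luci if you don't use them):

```shell
# Start the cluster services in dependency order (RHEL 6 SysV init).
# This sketch only prints the commands; on a real node you would run
# "service $svc start" directly, and "chkconfig $svc on" to persist
# the startup across reboots.
for svc in cman clvmd gfs2 ricci rgmanager; do
    echo "service $svc start"
done
```

The key point is simply that cman comes first (membership/quorum) and rgmanager last, since the resource manager needs a quorate cluster before it can start services.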

Thanks

Colin


On Sun, 2011-11-27 at 16:11 +0000, Chris Kwall wrote:
> I didn't find in the Cluster Administration documentation
> (docs.redhat.com) that it's necessary to start rgmanager separately.
>
> Now the service is up and running.
>
>
>  Service Name                   Owner (Last)                   State
>
>  ------- ----                   ----- ------                   -----
>  service:webprod                vbox1.example.local            started
>
>
> Thank you very much Colin.
>
>
>
>         ______________________________________________________________
>         From: Colin Simpson <Colin Simpson iongeo com>
>         To: Chris Kwall <christiankwall-qsa yahoo com>; linux
>         clustering <linux-cluster redhat com>
>         Sent: Sunday, 27 November 2011, 15:24
>         Subject: RE: [Linux-cluster] RHCS simple ip-failover problem
>
>         Maybe I'm missing something but it just looks like the
>         "rgmanager" service isn't started?
>
>         Colin
>
>
>
>         ______________________________________________________________
>         From: linux-cluster-bounces redhat com
>         [linux-cluster-bounces redhat com] on behalf of Chris Kwall
>         [christiankwall-qsa yahoo com]
>         Sent: 27 November 2011 12:52
>         To: linux-cluster redhat com
>         Subject: [Linux-cluster] RHCS simple ip-failover problem
>
>
>         Dear List,
>
>
>         Today I received an evaluation license for RHEL 6.1 [Red Hat
>         Enterprise Linux Server release 6.1 (Santiago)] for learning
>         purposes with RHCS.
>
>
>         So I set up two machines, vbox1 and vbox2, each with two
>         interfaces (Intranet + Heartbeat), and created a cluster.conf
>         with a simple IP fail-over scenario.
>         After starting the cluster, no node takes the IP.
>         Checking the cluster state left me a little confused: it
>         doesn't list the service.
>
>         Maybe I misunderstood something in the documentation.
>
>
>         [root vbox1 cluster]# clustat  -l
>         Cluster Status for vbox @ Sun Nov 27 14:38:00 2011
>         Member Status: Quorate
>
>
>          Member Name                             ID   Status
>          ------ ----                             ---- ------
>          vbox1.example.local                         1 Online, Local
>          vbox2.example.local                         2 Online
>
>
>         When I test the rules manually, the system takes the IP address:
>         [root vbox1 cluster]# rg_test test /etc/cluster/cluster.conf
>         start service webprod
>         Running in test mode.
>         ..
>         Starting webprod...
>
>         <debug>  Link for eth0: Detected
>         Link for eth0: Detected
>         <info>   Adding IPv4 address 192.168.99.100/24 to eth0
>         Adding IPv4 address 192.168.99.100/24 to eth0
>         <debug>  Pinging addr 192.168.99.100 from dev eth0
>         Pinging addr 192.168.99.100 from dev eth0
>         <debug>  Sending gratuitous ARP: 192.168.99.100
>         00:0c:29:00:d1:05 brd ff:ff:ff:ff:ff:ff
>         Sending gratuitous ARP: 192.168.99.100 00:0c:29:00:d1:05 brd
>         ff:ff:ff:ff:ff:ff
>         rdisc: no process killed
>         Start of webprod complete
>
>
>         [root vbox1 cluster]# ip addr list eth0
>         2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>         pfifo_fast state UP qlen 1000
>             link/ether 00:0c:29:00:d1:05 brd ff:ff:ff:ff:ff:ff
>             inet 192.168.99.11/24 brd 192.168.99.255 scope global eth0
>             inet 192.168.99.100/24 scope global secondary eth0
>             inet6 fe80::20c:29ff:fe00:d105/64 scope link
>                valid_lft forever preferred_lft forever
>
>
>         Maybe someone can point me in the right direction?
>
>
>         /etc/hosts
>         # Intranet
>         192.168.99.11 vbox1.example.local vbox1
>         192.168.99.12 vbox2.example.local vbox2
>         # Heartbeat
>         192.168.1.11 h-vbox1.example.local h-vbox1
>         192.168.1.12 h-vbox2.example.local h-vbox2
>         # Service-IP
>         192.168.99.100 vbox.example.local vbox
>
>
>         /etc/cluster/cluster.conf (for testing purposes manual
>         fencing only; later I'll try it with fence_ipmilan)
>
>
>         <?xml version="1.0"?>
>         <cluster config_version="3" name="vbox">
>           <cman expected_votes="1" two_node="1"/>
>           <clusternodes>
>             <clusternode name="vbox1.example.local" nodeid="1">
>               <altname name="h-vbox1.example.local"/>
>               <fence>
>                 <method name="n1">
>                   <device name="human" nodename="vbox1.example.local"/>
>                 </method>
>               </fence>
>             </clusternode>
>             <clusternode name="vbox2.example.local" nodeid="2">
>               <altname name="h-vbox2.example.local"/>
>               <fence>
>                 <method name="n2">
>                   <device name="human" nodename="vbox2.example.local"/>
>                 </method>
>               </fence>
>             </clusternode>
>           </clusternodes>
>           <fencedevices>
>             <fencedevice agent="fence_manual" name="human"/>
>           </fencedevices>
>           <rm>
>             <resources>
>               <ip address="192.168.99.100" monitor_link="on" sleeptime="15"/>
>             </resources>
>             <service autostart="1" domain="web" exclusive="0" name="webprod" recovery="restart">
>               <ip ref="192.168.99.100"/>
>             </service>
>             <failoverdomains>
>               <failoverdomain name="web" nofailback="0" ordered="1" restricted="0">
>                 <failoverdomainnode name="vbox1.example.local" priority="1"/>
>                 <failoverdomainnode name="vbox2.example.local" priority="2"/>
>               </failoverdomain>
>             </failoverdomains>
>           </rm>
>         </cluster>
>
>
>         Thanks in advance
>         Chris
>
>
>         ______________________________________________________________
>
>
>         This email and any files transmitted with it are confidential
>         and are intended solely for the use of the individual or
>         entity to whom they are addressed. If you are not the original
>         recipient or the person responsible for delivering the email
>         to the intended recipient, be advised that you have received
>         this email in error, and that any use, dissemination,
>         forwarding, printing, or copying of this email is strictly
>         prohibited. If you received this email in error, please
>         immediately notify the sender and delete the original.
>
>
>
>
>




