
Re: [Linux-cluster] Ethernet Channel Bonding Configuration Clarification is Needed



Hello Balaji,

Before looking into the cluster setup, is the Ethernet channel bonding working OK?

1) have you set the alias entry for the bond0 interface in /etc/modprobe.conf?
[root@web2 ~]# cat /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=1 miimon=100 use_carrier=0
#
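If the alias entry is missing, it can be appended without duplicating existing lines. A minimal sketch (it writes to a temp file here so it is safe to try; on the host the target would be /etc/modprobe.conf):

```shell
# Append the bonding alias and options only if not already present.
# $conf is a temp file for safety; on a real host use /etc/modprobe.conf.
conf=$(mktemp)
if ! grep -q '^alias bond0 bonding' "$conf" 2>/dev/null; then
    echo 'alias bond0 bonding' >> "$conf"
    echo 'options bond0 mode=1 miimon=100 use_carrier=0' >> "$conf"
fi
grep bond0 "$conf"
```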

2) is the bonding module loaded?
[root@web2 ~]# lsmod | grep -i bonding
bonding                72252  0
[root@web2 ~]#

3) if not, then load the module:
#modprobe bonding
#lsmod | grep -i bonding
# make sure the channel bonding module is loaded

4) bring the interfaces down: #ifdown bond0 OR #ifdown eth0

5) below are my server's ifcfg config files; compare your config files with them.

[root@web2 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
SLAVE=yes
MASTER=bond0
HWADDR=00:1C:C4:BE:8C:70
[root@web2 network-scripts]#

[root@web2 network-scripts]# cat ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
SLAVE=yes
MASTER=bond0
HWADDR=00:1C:C4:BE:8C:7E
[root@web2 network-scripts]#

[root@web2 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
IPADDR=10.10.1.91
NETMASK=255.255.255.128
ONBOOT=yes
[root@web2 network-scripts]#
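The two slave files above differ only in DEVICE and HWADDR, so if you are recreating them, a small loop avoids copy-paste slips. A sketch, writing to a temp directory instead of /etc/sysconfig/network-scripts, and omitting the per-host HWADDR lines (fill those in from your own NICs):

```shell
# Generate the two slave ifcfg files shown above.
# $dir is a temp directory here; on the host the files belong in
# /etc/sysconfig/network-scripts. HWADDR lines are omitted (per-host).
dir=$(mktemp -d)
for nic in eth0 eth1; do
  cat > "$dir/ifcfg-$nic" <<EOF
DEVICE=$nic
BOOTPROTO=none
ONBOOT=yes
SLAVE=yes
MASTER=bond0
EOF
done
grep MASTER "$dir"/ifcfg-eth*
```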

6) restart network service
#service network restart

7) verify that the bond0 interface works fine:

#ip addr list  ----> look for bond0, eth0 and eth1 interfaces

[root@web2 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v2.6.3-rh (June 8, 2005)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1c:c4:be:8c:70

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1c:c4:be:8c:7e
[root@web2 network-scripts]#
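The /proc output above can also be checked from a script, e.g. to confirm the active slave before failing anything over. A sketch that pulls out the active slave and MII status (it parses a sample copied from the output above so the parsing is visible; on a live host you would read /proc/net/bonding/bond0 directly):

```shell
# Parse bonding status for the active slave and link state.
# $status is a sample here; on a real host use:
#   status=$(cat /proc/net/bonding/bond0)
status=$(cat <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up
EOF
)
active=$(printf '%s\n' "$status" | awk -F': ' '/Currently Active Slave/ {print $2}')
mii=$(printf '%s\n' "$status" | awk -F': ' '/^MII Status/ {print $2}')
echo "active=$active mii=$mii"
```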

Regards,
-Stevan Colaco




On Tue, Sep 16, 2008 at 9:07 AM, Neependra Khare <nkhare lists gmail com> wrote:
> Hello Balaji,
>>
>>
>> After I rebooted both servers, each cluster node becomes simplex and
>> services are started on both nodes.
>> The cluster output in primary node
>>
>> Member Status: Quorate
>>
>> Member Name                              Status
>> -----------                             ---------
>> primary                                 Online, Local, rgmanager
>> secondary                               Offline
>>
>> Service Name         Owner (Last)                   State
>> ------------         ------------                  --------
>> Service              primary                        started
>>
>> The cluster output in secondary node
>>
>> Member Status: Quorate
>>
>> Member Name                              Status
>> -----------                             ---------
>> primary                                 Offline
>> secondary                               Online, Local, rgmanager
>>
>> Service Name         Owner (Last)                   State
>> ------------         --------------                --------
>> Service              secondary                     started
>
> This looks like a typical split brain condition.
> http://sources.redhat.com/cluster/faq.html#split_brain
>
> Is this only happening when you use bonding?
>
> Make sure that both nodes are able to communicate with each other.
> Check the logs and configuration. If you can't figure it out, then send
> "/etc/hosts", the cluster config file, and related logs.
>
> Neependra
>
> --
> Linux-cluster mailing list
> Linux-cluster redhat com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>

