
Re: [Linux-cluster] fence device: network card?



Hi Karl,

GNBD is a block server, like iSCSI.  Unfortunately, there does not appear to be any standard fencing mechanism for iSCSI.  I hacked up one of the existing fence agents to use SNMP to shut down the network ports on a Cisco 3750 switch.  My test systems are using an old HP switch that the network team was not using; it works as well.
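The SNMP side of this is simple in principle: setting IF-MIB::ifAdminStatus to down(2) for a port's ifIndex administratively disables it.  As a minimal illustration (this is not the attached fence_cisco agent, and the community string, switch address, and ifIndex below are placeholders), here is how the Net-SNMP snmpset invocation could be built:

```python
def port_down_command(community, switch, ifindex):
    """Build the snmpset command that administratively disables a switch
    port by setting IF-MIB::ifAdminStatus to down(2) for that ifIndex."""
    return ["snmpset", "-v2c", "-c", community, switch,
            "IF-MIB::ifAdminStatus.%d" % ifindex, "i", "2"]

# Placeholder values -- substitute your own community, switch IP, and ifIndex.
cmd = port_down_command("public", "10.1.4.254", 10101)
print(" ".join(cmd))
```

Setting the value back to up(1) would re-enable the port after the fenced node has been dealt with.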

System-config-cluster would not allow me to specify my own fence agent.  At least, I looked quickly, did not see anything obvious, gave up, and edited cluster.conf with vi.

I have attached fence_cisco to this message, and here are some notes on using it.  You may need to get the Net-SNMP Perl module from CPAN.

The config file for fence_cisco looks like this:
community:<PUT YOUR SNMP COMMUNITY STRING HERE>
switch:10.1.4.254
oneoften:A1:C1
twooften:A5:C2
threeoften:A2:C3
The first line is your SNMP community string, the second line is the IP address of the network switch, and the remaining lines describe the hosts.  The first column is the host name, followed by a colon-separated list of the ports that host is attached to on the Ethernet switch.  In the cluster.conf file, the port parameter must match an entry in the host name column.
<?xml version="1.0"?>
<cluster alias="DEV_ACN" config_version="11" name="DEV_ACN">
        <clusternodes>
                <clusternode name="oneoften_a.devstorage.local" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ciscofence1" port="oneoften" switch="do not know"/>
                                </method>
                        </fence>
                        <multicast addr="224.0.0.1" interface="eth1"/>
                </clusternode>
                <clusternode name="twooften_a.devstorage.local" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ciscofence1" port="twooften" switch="do not know"/>
                                </method>
                        </fence>
                        <multicast addr="224.0.0.1" interface="eth1"/>
                </clusternode>
                <clusternode name="threeoften_a.devstorage.local" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ciscofence1" port="threeoften" switch="do not know"/>
                                </method>
                        </fence>
                        <multicast addr="224.0.0.1" interface="eth1"/>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_cisco" ipaddr="10.1.4.254" login="testing" name="ciscofence1" passwd="password"/>
        </fencedevices>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <cman>
                <multicast addr="224.0.0.1"/>
        </cman>
</cluster>
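The fence_cisco config format described above is simple to parse.  As a sketch of what the agent does with it (not the attached Perl code itself), something like this splits the community string, the switch address, and the host-to-ports map:

```python
def parse_fence_config(text):
    """Parse a fence_cisco-style config: 'community:...' and 'switch:...'
    lines, then one 'hostname:port[:port...]' line per host."""
    conf = {"hosts": {}}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        key, _, rest = line.partition(":")
        if key == "community":
            conf["community"] = rest
        elif key == "switch":
            conf["switch"] = rest
        else:
            # Remaining lines map a host name to its switch ports.
            conf["hosts"][key] = rest.split(":")
    return conf

sample = """community:public
switch:10.1.4.254
oneoften:A1:C1
twooften:A5:C2"""
print(parse_fence_config(sample))
```

With a config like the one above, fencing "oneoften" means shutting down both A1 and C1, since a host attached through multiple ports must lose all of them to be truly cut off.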

The fence_cisco agent 'fences a NIC' by shutting down the node's ports in the network switch.
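When fenced invokes an agent, it passes the options from cluster.conf as name=value lines on the agent's standard input (this is the standard cluster fence-agent convention).  A minimal sketch of reading those options, with made-up input values matching the config above:

```python
def read_fence_options(lines):
    """Parse name=value option lines as fenced passes them on stdin,
    e.g. 'agent=fence_cisco', 'port=oneoften', 'action=off'."""
    opts = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, value = line.partition("=")
        opts[name] = value
    return opts

opts = read_fence_options(["agent=fence_cisco", "port=oneoften", "action=off"])
print(opts)
```

The agent then looks up the port value ("oneoften") in its own config file to find which switch ports to disable.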

Again, fence_cisco is attached to this message.  I would suggest that you test everything carefully.

Matt

On Tue, 2007-06-12 at 16:30 -0700, Karl R. Balsmeier wrote:
Hi,

I have three (3) servers built and entered into the 
system-config-cluster tool as nodes.  Basically the first node has node 
2 and node 3 as members of the cluster.

For a fence device, I do not have any of the SAN or network/switch 
devices listed in the dropdown menu, and from what I have read in the 
documentation, "gnbd" (Generic Network Block Device) seems to be what 
I'm looking for.

Basically, I read in the docs that you can use a NIC as a fence device.  
Is this true?

Right now each of the 3 servers has 3 NICs, so I have a total of 9 to 
play with.  I am bonding the two gigabit NICs together with no 
problem, which leaves each server a 100 Mbps NIC.

My ultimate goal is to use these 3 machines to make a vsftpd GFS cluster 
that I can run iSCSI over.

Being new to this, though, I'll stick to the primary questions: how does 
one configure a fence device in the form of a NIC?  Is the gnbd 
item relevant to this?

-karl


--
Linux-cluster mailing list
Linux-cluster redhat com
https://www.redhat.com/mailman/listinfo/linux-cluster

Attachment: fence_cisco
Description: Perl program

