[Linux-cluster] fence device: network card?

Matthew B. Brookover mbrookov at mines.edu
Thu Jun 14 14:37:14 UTC 2007


Hi Karl,

GNBD is a block server, like iSCSI.  Unfortunately there does not appear
to be any standard fencing mechanism for iSCSI.  I hacked up one of the
existing fence agents to use SNMP to turn off the network ports on a
Cisco 3750 switch.  My test systems are using an old HP switch that the
network team was not using -- it works there as well.

System-config-cluster would not allow me to specify my own fence agent.
At least, I looked quickly, did not see anything obvious, gave up, and
edited cluster.conf with vi.

I have attached fence_cisco to this message, and here are some notes on
using it.  You may need to get the Net-SNMP Perl module from CPAN.org.

The config file for fence_cisco looks like this:

community:<PUT YOUR SNMP COMMUNITY STRING HERE>
switch:10.1.4.254
oneoften:A1:C1
twooften:A5:C2
threeoften:A2:C3

The first line is your SNMP community string, and the second line is the
IP address of the network switch.  The remaining lines are the hosts:
the first column is the host name, followed by a colon-separated list of
the ports that host is attached to on the Ethernet switch.  In the
cluster.conf file, the port parameter must match an entry in the
host-name column.
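
The format above is simple enough to parse by hand.  Here is a sketch in
Python of reading such a file into a usable structure -- note that
fence_cisco itself is Perl, and its actual parsing code is in the
attachment, not reproduced here; the function name parse_fence_config is
mine:

```python
# Illustrative parser for the fence_cisco config format described above.
# Not the attachment's actual code.

def parse_fence_config(lines):
    """Return (community, switch_ip, {hostname: [switch ports]})."""
    community = switch = None
    hosts = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        key, _, rest = line.partition(':')
        if key == 'community':
            community = rest
        elif key == 'switch':
            switch = rest
        else:
            # host line: name:port[:port...]
            hosts[key] = rest.split(':')
    return community, switch, hosts

example = """\
community:public
switch:10.1.4.254
oneoften:A1:C1
twooften:A5:C2
"""
print(parse_fence_config(example.splitlines()))
```

When the agent is asked to fence a node, it looks the node's name up in
the host table and acts on every listed port, since a multi-homed host
must have all of its switch ports disabled to be truly cut off.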

<?xml version="1.0"?>
<cluster alias="DEV_ACN" config_version="11" name="DEV_ACN">
        <clusternodes>
                <clusternode name="oneoften_a.devstorage.local" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ciscofence1" port="oneoften" switch="do not know"/>
                                </method>
                        </fence>
                        <multicast addr="224.0.0.1" interface="eth1"/>
                </clusternode>
                <clusternode name="twooften_a.devstorage.local" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ciscofence1" port="twooften" switch="do not know"/>
                                </method>
                        </fence>
                        <multicast addr="224.0.0.1" interface="eth1"/>
                </clusternode>
                <clusternode name="threeoften_a.devstorage.local" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ciscofence1" port="threeoften" switch="do not know"/>
                                </method>
                        </fence>
                        <multicast addr="224.0.0.1" interface="eth1"/>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_cisco" ipaddr="10.1.4.254" login="testing" name="ciscofence1" passwd="password"/>
        </fencedevices>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <cman>
                <multicast addr="224.0.0.1"/>
        </cman>
</cluster>


The fence_cisco agent will 'fence a NIC' by disabling its port in the
network switch.
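
The underlying SNMP operation is a set on IF-MIB::ifAdminStatus (1 = up,
2 = down).  Here is a dry-run sketch using the Net-SNMP command-line
tools; the community string, switch address, and the ifIndex value are
placeholders (the config file's port labels like "A1" are the agent's
own naming, and mapping them to SNMP ifIndex values is internal to
fence_cisco).  The commands are echoed rather than executed so nothing
is sent to a real switch:

```shell
#!/bin/sh
# Placeholders -- substitute your own values.
COMMUNITY='public'
SWITCH='10.1.4.254'
IFINDEX=10101     # SNMP ifIndex of the switch port to fence

# IF-MIB::ifAdminStatus: setting it to 2 shuts the port down.
echo snmpset -v2c -c "$COMMUNITY" "$SWITCH" IF-MIB::ifAdminStatus.$IFINDEX i 2
# Setting it back to 1 re-enables the port once the node is safe again.
echo snmpset -v2c -c "$COMMUNITY" "$SWITCH" IF-MIB::ifAdminStatus.$IFINDEX i 1
```

Drop the echo to run the commands for real against a test switch first.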

Whatever you end up using, I would suggest that you test everything
carefully.

Matt

On Tue, 2007-06-12 at 16:30 -0700, Karl R. Balsmeier wrote:

> Hi,
> 
> I have three (3) servers built and entered into the 
> system-config-cluster tool as nodes.  Basically the first node has node 
> 2 and node 3 as members of the cluster.
> 
> For a fence device, I do not have any of the SAN or network/switch 
> devices listed in the dropdown menu, and from what I have read in the 
> documentation, "gnbd" (Generic Network Block Device) seems to be what 
> I'm looking for.
> 
> Basically I read in the docs you can use a NIC card as a fence device, 
> is this true?
> 
> Right now each of the 3 servers has 3 NICs, so I have a total of 9 to 
> play with.  I am bonding the two gigabit NICs together with no 
> problem, which leaves each server a 100 Mbps NIC.
> 
> My ultimate goal is to use these 3 machines to make a vsftpd GFS 
> cluster that I can run iSCSI over.
> 
> Being new to this though, I'll stick to the primary questions: how 
> does one configure a fence device in the form of a NIC card?  Is gnbd 
> relevant to this?
> 
> -karl
> 
> 
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://listman.redhat.com/archives/linux-cluster/attachments/20070614/a9f84875/attachment.htm>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: fence_cisco
Type: application/x-perl
Size: 10617 bytes
Desc: not available
URL: <http://listman.redhat.com/archives/linux-cluster/attachments/20070614/a9f84875/attachment.pl>
