
Re: [Linux-cluster] Fencing using brocade



Mike

If I understand your description correctly, you have all your nodes
connected to an FC hub. This hub is then connected to one port of the
Brocade FC switch, so all the nodes sit on a single public Arbitrated Loop.
I assume all the FC-connected storage is on another port on the Brocade?

I can see one potential problem with this setup: if fencing is done by
disabling the port on the Brocade, the entire loop will be disconnected
from the switch. So instead of fencing one node, the entire loop
(containing all nodes) will be fenced, i.e. cut off from the storage.

The only way I can see this working is to configure fencing to use the
WWNN/WWPN of the nodes instead of a port on the Brocade. Instead of having
a fencing operation block all traffic on a given Brocade port, you need
the Brocade to block traffic to a given WWNN/WWPN (that of the FC HBA of
the node to be fenced).

I have not played with such a setup for a number of years, so I can't
really tell you how this should be done.

And of course, if you have the storage connected to the same FC hub,
this won't work at all. In that case the traffic between the storage
and the nodes would not pass through the Brocade at all...

This should at least point out a potential problem :-)
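For what it's worth, blocking a node by WWN on a Brocade is normally a
zoning operation rather than a port disable. A rough Fabric OS sketch of
the idea (the zone name, config name, and WWPN below are made-up examples,
and the exact syntax should be checked against your switch's zoning
documentation before relying on it for fencing):

```
brocade:admin> zoneShow                                  ! inspect current zoning
brocade:admin> zoneRemove "cluster_zone", "10:00:00:00:c9:aa:bb:cc"
                                                         ! drop the fenced node's WWPN
brocade:admin> cfgSave                                   ! commit the zoning change
brocade:admin> cfgEnable "cluster_cfg"                   ! activate the updated config
```

Whether the stock fence agent can drive this kind of WWN-based zoning, as
opposed to plain port disabling, is another question entirely.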

Erling

On 9/14/06, isplist logicore net <isplist logicore net> wrote:
In my case, the nodes are connected to a hub, which is in turn connected to
the Brocade. Do I still just use the Brocade's port?

I have not been able to find clear information on building a proper
cluster.conf file either, so I have bits of this and that.

This is what I've got... your sample and the bits and pieces I've been
using.

<?xml version="1.0"?>
<cluster config_version="40" name="vgcomp">
    <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
        <clusternode name="cweb92.xxx.com" nodeid="92" votes="1"/>
        <clusternode name="cweb93.xxx.com" nodeid="93" votes="1"/>
        <clusternode name="cweb94.xxx.com" nodeid="94" votes="1"/>
        <clusternode name="dev.xxx.com" nodeid="99" votes="1"/>
        <clusternode name="qm247.xxx.com" nodeid="247" votes="1"/>
        <clusternode name="qm248.xxx.com" nodeid="248" votes="1"/>
        <clusternode name="qm249.xxx.com" nodeid="249" votes="1"/>
        <clusternode name="qm250.xxx.com" nodeid="250" votes="1"/>
    </clusternodes>
    <cman/>
    <fencedevices>
        <fencedevice agent="fence_brocade" ipaddr="x.x.x.x" login="user"
            name="brocade" passwd="xxx"/>
    </fencedevices>
    <rm>
        <failoverdomains/>
        <resources/>
    </rm>
</cluster>
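Note that this config defines the fence device but no per-node <fence>
sections, so fenced has nothing to act on when a node fails. Following
Frank's sanbox2 example, each clusternode entry would need a block along
these lines (the port number here is a placeholder for the Brocade port
the node's HBA — or, in this setup, the shared hub — is attached to, which
is exactly where the single-loop problem described above bites):

```xml
<clusternode name="cweb92.xxx.com" nodeid="92" votes="1">
    <fence>
        <method name="1">
            <!-- "brocade" must match the fencedevice name below;
                 port="1" is a placeholder for the actual switch port -->
            <device name="brocade" port="1"/>
        </method>
    </fence>
</clusternode>
```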

Mike


On Wed, 13 Sep 2006 17:42:39 +0200, Frank Hellmann wrote:
> Hi!
>
> I can only recommend the system-config-cluster GUI, but if you feel brave
> enough you can do it by hand
>
> This example is for a sanbox2, but it should get you going:
>
>   ...
>        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
>        <clusternodes>
>                <clusternode name="clusty1" votes="1">
>                        <fence>
>                                <method name="1">
>                                        <device name="sanbox" port="0"/>
>                                </method>
>                        </fence>
>                </clusternode>
>                <clusternode name="clusty2" votes="1">
>                        <fence>
>                                <method name="1">
>                                        <device name="sanbox" port="1"/>
>                                </method>
>                        </fence>
>                </clusternode>
>                        ....
>        </clusternodes>
>        <fencedevices>
>                <fencedevice agent="fence_sanbox2" ipaddr="xxx.xxx.xxx.xxx"
> login="username" name="sanbox" passwd="password"/>
>        </fencedevices>
>   ...
>
> And don't forget to check the fence_brocade manpage for your brocade switch
> for further options...
>
>       Cheers,
>
>                Frank...
>
> isplist logicore net wrote:
>>>> I want to use my brocade switch as the fencing device for my cluster.
>>>> I cannot find any documentation showing what I need to set up on the
>>>> brocade itself and within the cluster.conf file as well to make this
>>>> work.
>>>
>>> The system-config-cluster application supports brocade fencing. It is a
>>> two part process - first you define the switch as a fence device; type
>>> brocade, then you select a node and click "Manage fencing for this node"
>>> and declare a fence instance.
>>
>> Ah, I'm at the command line :). So, there is nothing I need to do on the
>> brocade itself then? The cluster ports aren't connected directly, they
>> are connected into a compaq hub, then the hub is connected into the
>> brocade. The brocade seems to know about the external ports however since
>> they are listed when I look on the switch. As for the conf file, I've not
>> found enough information on how to build a good conf file, so I know this
>> one is probably not even complete. Been working on other parts of the
>> problem, then wanting to get to this.
>>
>> <?xml version="1.0"?>
>> <cluster config_version="40" name="vgcomp">
>>     <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
>>     <clusternodes>
>>         <clusternode name="cweb92.companions.com" nodeid="92" votes="1"/>
>>         <clusternode name="cweb93.companions.com" nodeid="93" votes="1"/>
>>         <clusternode name="cweb94.companions.com" nodeid="94" votes="1"/>
>>         <clusternode name="dev.companions.com" nodeid="99" votes="1"/>
>>         <clusternode name="qm247.companions.com" nodeid="247" votes="1"/>
>>         <clusternode name="qm248.companions.com" nodeid="248" votes="1"/>
>>         <clusternode name="qm249.companions.com" nodeid="249" votes="1"/>
>>         <clusternode name="qm250.companions.com" nodeid="250" votes="1"/>
>>     </clusternodes>
>>     <cman/>
>>     <fencedevices>
>>         <fencedevice agent="fence_brocade" ipaddr="x.x.x.x" login="xxx"
>>             name="brocade" passwd="xxx"/>
>>     </fencedevices>
>>     <rm>
>>         <failoverdomains/>
>>         <resources/>
>>     </rm>
>> </cluster>
>
> --
> --------------------------------------------------------------------------
> Frank Hellmann        Optical Art GmbH            Waterloohain 7a
> DI Supervisor         http://www.opticalart.de    22769 Hamburg
> frank opticalart de   Tel: ++49 40 5111051        Fax: ++49 40 43169199




--
Linux-cluster mailing list
Linux-cluster redhat com
https://www.redhat.com/mailman/listinfo/linux-cluster



--
-
Mac OS X. Because making Unix user-friendly is easier than debugging Windows

