
Re: [Linux-cluster] NFS clustering, with SAN



I finally got around to reading the doc at the URL I sent earlier.   It is a good doc, but the approach I used was different.  I am posting mine here in case it is useful, and to see whether anyone spots problems with it.

I have a filesystem from the SAN, formatted as ext3, that is visible to every node but only mounted by the service below.   When the nfsVolume service starts on a node, it also brings up its own IP address.

Clients mount using that IP address and the NFS share, so the node offering the share can be any node in the failover domain.

Below is an abbreviated cluster.conf.

Paul


<resources>
    <ip address="192.168.215.230" monitor_link="0"/>
    <nfsexport name="myexport"/>
    <nfsclient allow_recover="1" name="myclient215" options="rw,async,no_wdelay,no_root_squash" target="192.168.215.0/24"/>
    <fs device="/dev/mpath/nfsp1" force_fsck="0" force_unmount="0" fsid="45793" fstype="ext3" mountpoint="/nfsdata" name="nfs data" self_fence="0"/>
</resources>

<service autostart="1" domain="pref_as3xen" exclusive="0" max_restarts="1" name="nfsVolume" recovery="restart" restart_expire_time="600">
        <fs ref="nfs data">
           <nfsexport ref="myexport">
               <nfsclient name=" " ref="myclient215"/>
           </nfsexport>
        </fs>
        <ip ref="192.168.215.230"/>
</service>
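For reference, a client of this service would mount against the floating IP rather than any individual node's hostname, so the mount survives the service moving between nodes. A minimal sketch, using the IP and export path from the config above (the client-side mount point /mnt/nfsdata is just an example name):

```shell
# Mount the clustered export via the service's floating IP, not a node name.
# "hard" keeps I/O retrying across a failover instead of returning errors;
# timeo/retrans values here are illustrative, not tuned recommendations.
mkdir -p /mnt/nfsdata
mount -t nfs -o hard,intr,timeo=600,retrans=2 \
    192.168.215.230:/nfsdata /mnt/nfsdata
```

When the service fails over, the IP moves to another node in the domain; with a hard mount the client's I/O blocks for the takeover window and then resumes, rather than erroring out.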



----- Original Message -----
From: "Terry" <td3201 gmail com>
To: "linux clustering" <linux-cluster redhat com>
Sent: Wednesday, January 27, 2010 8:53:44 AM (GMT-0600) America/Chicago
Subject: Re: [Linux-cluster] NFS clustering, with SAN

On Tue, Jan 26, 2010 at 6:36 PM, Jonathan Horne <loudredz71 yahoo com> wrote:
> --- On Tue, 1/26/10, Terry <td3201 gmail com> wrote:
>
>> From: Terry <td3201 gmail com>
>> Subject: Re: [Linux-cluster] NFS clustering, with SAN
>> To: "linux clustering" <linux-cluster redhat com>
>> Date: Tuesday, January 26, 2010, 5:57 PM
>> On Tue, Jan 26, 2010 at 5:15 PM,
>> Jonathan Horne <loudredz71 yahoo com>
>> wrote:
>> > greetings, i am new to this list, my apologies if this
>> topic has been touched on before (but so far searching has
>> not yielded a direction for me to follow)
>> >
>> > i have 2 servers that share a file system over SAN.
>>  both servers export this file system as a NFS share.  i
>> need to implement some sort of clustering so that if one
>> server goes down, clients don't lose their connection to the
>> exported file system.  I'm hoping to find some
>> implementation that will have a pretty instantaneous
>> failover, as there will be pretty constant filewrites/reads
>> from multiple client servers.
>> >
>> > can i please get some recommendations on what i should
>> research to make this happen? I'll gladly accept tips, or
>> documents to read, i can work with either.
>> >
>> > thanks,
>> > Jonathan
>> >
>>
>> Hi Jonathan,
>>
>> Welcome to the list.  I have a large nfs cluster I
>> manage.  All of the
>> configuration magic to reduce the downtime and headache is
>> on the NFS
>> client side.  When there is a failover event, there is
>> a short outage
>> period for the IP to switch over to the other node.  A
>> lot of things
>> affect this, including your switching (arp)
>> environment.  Is it a
>> single export?  If not, you can create each export as
>> a separate
>> service, with associating separate IP address and create
>> an
>> active/active type of NFS environment.  Just a
>> thought.
>>
>> Thanks,
>> Terry
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster redhat com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>
> Terry, thanks for the reply.
>
> my setup is like this: 2 servers, with a shared LUN.  the LUN is mounted on both as /opt/data, and both have /opt/data listed in the /etc/exports file.  so basically 2 servers, and showmount -e against both of them shows the exact same thing (and of course, since it's a shared LUN with an OCFS2 file system, both exports contain the exact same data).  right now we're in the testing phase of this project, and there are 4 NFS clients connecting (2 oracle servers writing files to the export, and 2 weblogic servers reading those files from the export).
>
> ultimately trying to fix this up in an HA setup, so that if one NFS server drops off, the clients don't know the difference.  the environment is fully switched, and all nodes (NFS servers and clients) have bonded network interfaces connected to separate switches (which connect via port-channel).
>
> you mentioned that you have a setup on the client side that takes care of your failover headaches; I'm interested in hearing more about how this works.
>
> thanks,
> Jonathan
>
>
>
>
>

The link Paul posted is a great resource; start there.  In short,
you're going to have some downtime in the event of a failover: short,
but downtime nonetheless.  Adding more nodes reduces the odds of
hitting it.
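Most of what the client sees during that window comes down to its mount options: a hard mount makes applications block and retry until the IP finishes moving, rather than getting I/O errors. A sketch of the kind of /etc/fstab entry involved, reusing the floating IP and export from Paul's config earlier in the thread (the local mount point is an example):

```shell
# Example /etc/fstab line: "hard" means I/O blocks and retries during the
# failover window instead of returning EIO to applications.  timeo is in
# tenths of a second; the values shown are illustrative, not tuned.
192.168.215.230:/nfsdata  /mnt/nfsdata  nfs  hard,intr,timeo=600,retrans=2  0 0
```

With a soft mount, by contrast, the same failover window can surface as write errors to the Oracle and WebLogic clients, which is usually the wrong trade-off for constantly-written data.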



