[Linux-cluster] Re: NFSCookbook Redhat 5 Cluster

dennis at demarco.com
Thu Jul 12 12:50:12 UTC 2007


It's listed on the cluster FAQ and was last updated in March. It's a bit 
hard to follow and not correct in some spots, but it's a good starting point.

- Dennis



http://sources.redhat.com/cluster/doc/nfscookbook.pdf - The Unofficial 
NFS/GFS Cookbook.


On Thu, 12 Jul 2007, Leo Pleiman wrote:

> Dennis,
>
> Sorry I missed your original post. Can you send/resend the location of the 
> nfs cookbook white paper?
>
> Thanks!
>
> dennis at demarco.com wrote:
>> 
>> It's most likely bad etiquette to reply to your own post, but I have 
>> figured out the issue.
>> 
>> If anyone is interested, here is a cluster.conf that seems to work w/ the 
>> nfscookbook whitepaper method.
>> 
>> It's a three-node Xen cluster running NFS services in an active/active 
>> configuration, as a test.
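>> 
>> Each service in the config below ties one virtual IP, one export, and the 
>> shared GFS mount to a preferred node, so clients can spread load across 
>> the three VIPs. Mounting looks something like this (the local mount point 
>> is made up, adjust to taste):
>> 
>>   mount -t nfs 192.168.1.23:/gfsdata /mnt/nfs1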
>> 
>> - Dennis
>> 
>> <?xml version="1.0"?>
>> <cluster alias="cluster1" config_version="122" name="cluster1">
>>         <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="12"/>
>>         <clusternodes>
>>                 <clusternode name="node03.internal.lan" nodeid="1" votes="1">
>>                         <fence>
>>                                 <method name="1">
>>                                         <device domain="node03" name="xen-fence"/>
>>                                 </method>
>>                         </fence>
>>                 </clusternode>
>>                 <clusternode name="node01.internal.lan" nodeid="2" votes="1">
>>                         <fence>
>>                                 <method name="1">
>>                                         <device domain="node01" name="xen-fence"/>
>>                                 </method>
>>                         </fence>
>>                 </clusternode>
>>                 <clusternode name="node02.internal.lan" nodeid="3" votes="1">
>>                         <fence>
>>                                 <method name="1">
>>                                         <device domain="node02" name="xen-fence"/>
>>                                 </method>
>>                         </fence>
>>                 </clusternode>
>>         </clusternodes>
>>         <cman/>
>>         <fencedevices>
>>                 <fencedevice agent="fence_xvm" name="xen-fence"/>
>>         </fencedevices>
>>         <rm>
>>                 <failoverdomains>
>>                         <failoverdomain name="perfer_1" ordered="0" restricted="0">
>>                                 <failoverdomainnode name="node01.internal.lan" priority="1"/>
>>                         </failoverdomain>
>>                         <failoverdomain name="perfer_2" ordered="0" restricted="0">
>>                                 <failoverdomainnode name="node02.internal.lan" priority="1"/>
>>                         </failoverdomain>
>>                         <failoverdomain name="perfer_3" ordered="0" restricted="0">
>>                                 <failoverdomainnode name="node03.internal.lan" priority="1"/>
>>                         </failoverdomain>
>>                 </failoverdomains>
>>                 <resources>
>>                         <ip address="192.168.1.23" monitor_link="1"/>
>>                         <ip address="192.168.1.24" monitor_link="1"/>
>>                         <ip address="192.168.1.25" monitor_link="1"/>
>>                         <nfsexport name="nfsexport1"/>
>>                         <nfsexport name="nfsexport2"/>
>>                         <nfsexport name="nfsexport3"/>
>>                         <nfsclient options="rw" name="nfsclient1" target="*"/>
>>                         <nfsclient options="rw" name="nfsclient2" target="*"/>
>>                         <nfsclient options="rw" name="nfsclient3" target="*"/>
>>                         <clusterfs device="/dev/vg0/gfslv2" force_unmount="0" fsid="59408" fstype="gfs" mountpoint="/gfsdata" name="gfs" options="acl"/>
>>                 </resources>
>>                 <service autostart="1" domain="perfer_1" exclusive="0" name="nfs1" recovery="relocate">
>>                         <clusterfs ref="gfs">
>>                                 <nfsexport ref="nfsexport1">
>>                                         <nfsclient ref="nfsclient1"/>
>>                                 </nfsexport>
>>                         </clusterfs>
>>                         <ip ref="192.168.1.23"/>
>>                 </service>
>>                 <service autostart="1" domain="perfer_2" exclusive="0" name="nfs2" recovery="relocate">
>>                         <clusterfs ref="gfs">
>>                                 <nfsexport ref="nfsexport2">
>>                                         <nfsclient ref="nfsclient2"/>
>>                                 </nfsexport>
>>                         </clusterfs>
>>                         <ip ref="192.168.1.24"/>
>>                 </service>
>>                 <service autostart="1" domain="perfer_3" exclusive="0" name="nfs3" recovery="relocate">
>>                         <clusterfs ref="gfs">
>>                                 <nfsexport ref="nfsexport3">
>>                                         <nfsclient ref="nfsclient3"/>
>>                                 </nfsexport>
>>                         </clusterfs>
>>                         <ip ref="192.168.1.25"/>
>>                 </service>
>>         </rm>
>> </cluster>
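>> 
>> To push a change out, I bump config_version and then run something like 
>> this on one node (assuming the stock RHEL 5 cluster tools):
>> 
>>   ccs_tool update /etc/cluster/cluster.conf
>>   cman_tool version -r 123
>>   clustat
>>   clusvcadm -r nfs1 -m node02.internal.lan    # relocate nfs1 to test recovery
>> 
>> rg_test test /etc/cluster/cluster.conf is also handy for sanity-checking 
>> the <rm> section before pushing it out.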
>> 
>




