
Re: [Linux-cluster] nfs4 kerberos

I whipped up a quick NFS4 cluster in Xen after I got home and tried to remember what I did to make it work. After a bit, it all fell back into place. This is quick and dirty, and not how I would do things in production, but it's a good start. Note that I didn't set up a shared filesystem, but that should be academic at this point.

1) Create your nfs/nfsserver.mydomain keytab
2) Copy keytab to both node1 and node2
3) Modify /etc/init.d/portmap: in the start function, add "hostname nfsserver.mydomain"; in the stop function, add "hostname nodeX.mydomain"
4) Drop something that looks like the attached cluster.conf file in /etc/cluster
5) Set up your exports: /exports     gss/krb5p(rw,async,fsid=0)
6) Start CMAN and RGManager
7) ?
8) Profit - mount -t nfs4 nfsserver.mydomain:/ /mnt/exports -o sec=krb5p
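
Steps 1, 2, and 8 might look something like the transcript below. The realm name, principal name, and keytab path are assumptions for illustration (MIT Kerberos kadmin syntax assumed); adjust for your KDC:

```shell
# On the KDC: create a service principal for the floating hostname
# and export it to a keytab
kadmin.local -q "addprinc -randkey nfs/nfsserver.mydomain@MYDOMAIN"
kadmin.local -q "ktadd -k /tmp/nfsserver.keytab nfs/nfsserver.mydomain@MYDOMAIN"

# Copy the keytab to both cluster nodes as /etc/krb5.keytab
scp /tmp/nfsserver.keytab root@node1.mydomain:/etc/krb5.keytab
scp /tmp/nfsserver.keytab root@node2.mydomain:/etc/krb5.keytab

# On the client, once the clustered service is up:
mount -t nfs4 nfsserver.mydomain:/ /mnt/exports -o sec=krb5p
```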

The trick here is that we change the hostname before any Kerberized services start, so rpc.gssd will be happy when it tries to read the keytab. Also, I use all Script resources instead of the NFS resource. I never really liked it, and I'm old and set in my ways. This works, and I'm certain that it reads /etc/exports. First, we set up the IP, then start each necessary daemon as a dependency for the next. I've been bouncing the service back and forth for the last 10 minutes, and the only thing I've suffered is a complaint about a stale NFS mount on my client whenever I fail over.
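
For step 3, the hostname swap is just a pair of one-liners in the init script. A sketch of what the modified /etc/init.d/portmap might look like on node1 (the exact function layout varies by distribution; only the hostname lines are the addition):

```shell
# Fragment of /etc/init.d/portmap on node1 (sketch; layout varies)
start() {
        # Assume the service identity BEFORE any Kerberized daemon
        # reads /etc/krb5.keytab
        hostname nfsserver.mydomain
        daemon portmap $PMAP_ARGS
        # (rest of the stock start function unchanged)
}

stop() {
        killproc portmap
        # Revert to the node's real identity when the service leaves
        hostname node1.mydomain
        # (rest of the stock stop function unchanged)
}
```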

<?xml version="1.0"?>
<cluster config_version="2" name="NFS">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="node1.mydomain" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="Fence_Manual" nodename="node1.mydomain"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="node2.mydomain" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="Fence_Manual" nodename="node2.mydomain"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_manual" name="Fence_Manual"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="NFS" ordered="0" restricted="1">
                                <failoverdomainnode name="node1.mydomain" priority="1"/>
                                <failoverdomainnode name="node2.mydomain" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <ip address="" monitor_link="1"/>
                        <script file="/etc/init.d/portmap" name="Portmapper"/>
                        <script file="/etc/init.d/rpcgssd" name="RPCGSSD"/>
                        <script file="/etc/init.d/rpcidmapd" name="IDMAPD"/>
                        <script file="/etc/init.d/nfs" name="NFS"/>
                </resources>
                <service autostart="1" domain="NFS" name="NFS" recovery="relocate">
                        <ip ref="">
                                <script ref="Portmapper">
                                        <script ref="RPCGSSD">
                                                <script ref="IDMAPD">
                                                        <script ref="NFS"/>
                                                </script>
                                        </script>
                                </script>
                        </ip>
                </service>
        </rm>
</cluster>

On Wed, Apr 6, 2011 at 6:52 PM, Ian Hayes <cthulhucalling gmail com> wrote:

Shouldn't have to recompile rpc.gssd. On failover I migrated the IP address first, made portmapper depend on the IP, rpc.gssd depend on portmap, and nfsd depend on rpc.gssd. As for the hostname, I went with the inelegant solution of putting a 'hostname' command in the start function of the portmapper script, since that fires first in my config.
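
The failover path described here can be exercised with rgmanager's tooling. The service and node names below match the sample cluster.conf; this is a sketch, not captured output:

```shell
# Show cluster state and the current owner of the service
clustat

# Relocate the NFS service to the other node
clusvcadm -r NFS -m node2.mydomain

# On the client, verify the mount survives the move (expect at worst
# a transient stale-handle complaint, per the discussion above)
ls /mnt/exports
```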

On Apr 6, 2011 6:06 PM, "Daniel R. Gore" <danielgore yaktech com> wrote:

I also found this thread, after many searches.

As I read through it, there appears to be a patch for rpc.gssd which
allows the daemon to be started and associated with multiple hosts.
I do not want to recompile rpc.gssd, and the patch appears to be from
over two years ago.  I would hope that RHEL6 would have rpc.gssd
patched to meet this requirement, but no documentation appears to
exist for how to use it.

On Wed, 2011-04-06 at 20:23 -0400, Daniel R. Gore wrote:
> Ian,
> Thanks for the info.
