nfs mount locally exported directories in a Red Hat Cluster (v3)?

evdhn at advalvas.be
Fri Feb 11 23:28:25 UTC 2005


> evdhn at advalvas.be wrote:
>>>>>>>>In a Red Hat HA Cluster (v3), how can you nfs mount a directory exported
>>>>>>>>by a cluster service that is currently local to the cluster member?
>>>>>>>>I keep getting permission denied errors.  It works fine when I move the
>>>>>>>>cluster service to the other cluster member.  The firewall has been
>>>>>>>>stopped, just to make sure it wasn't part of the problem.
>>>>>>>
>>>>>>>Make sure you have portmapper and nfslock running on both the client
>>>>>>>and server.  That's the most common problem.
>>>>>>>----------------------------------------------------------------------
>>>>>>>- Rick Stevens, Senior Systems Engineer     rstevens at vitalstream.com -
>>>>>>>----------------------------------------------------------------------
>>>>>>
>>>>>>Thanks for your answer Rick, but apparently I explained poorly.
>>>>>>
>>>>>>This is a two-member cluster with hostnames arneb and nihal and
>>>>>>services lepusbb and lepustl.
>>>>>>When service lepusbb runs on arneb, I cannot nfs mount its device on
>>>>>>arneb using the service name.  I.e.
>>>>>># mount -t nfs lepusbb:/usr/local/lepus-bb /usr/local/bb
>>>>>>does not work on arneb, but it does work on nihal.
>>>>>>Similarly, I can mount lepustl:/usr/local/lepus-tl on arneb but not on
>>>>>>nihal.
>>>>>>
>>>>>>I meanwhile found that
>>>>>># mount -t nfs arneb:/usr/local/lepus-bb /usr/local/bb
>>>>>>does work, so I'm currently using that as a workaround, but obviously,
>>>>>>if a failover occurs while the devices are locally mounted, my scripts
>>>>>>will be in trouble.
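>>>>>>
>>>>>>A possible way to make the scripts tolerate a failover (just a sketch,
>>>>>>untested) would be to try the service alias first and then fall back to
>>>>>>the physical members:
>>>>>>
>>>>>>for h in lepusbb arneb nihal; do
>>>>>>    mount -t nfs $h:/usr/local/lepus-bb /usr/local/bb && break
>>>>>>done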
>>>>>>
>>>>>>And yes, portmapper and nfslock are running:
>>>>>># ps auxw | egrep -e lock -e portm | grep -v grep
>>>>>>rpc       3109  0.0  0.0  1672  608 ?        S    10:36   0:00 portmap
>>>>>>root      3314  0.0  0.0     0    0 ?        SW   10:36   0:00 [lockd]
>>>>>>root      3453  0.0  0.0  3064 3064 ?        S<L  10:36   0:00 /usr/sbin/clulockd
>>>>>
>>>>>First, are you sure the device is being exported by arneb and that arneb
>>>>>is permitted to mount it?  A "showmount -e arneb" will show what's being
>>>>>exported and who's allowed to look at it.
>>>>>
>>>>>I'm not familiar with RH's HA (high availability) structure so I can't
>>>>>speak to the details, but generally it's something along those lines.
>>>>>----------------------------------------------------------------------
>>>>>- Rick Stevens, Senior Systems Engineer     rstevens at vitalstream.com -
>>>>>----------------------------------------------------------------------
>>>>
>>>>
>>>>It must be a RH HA thing then.
>>>>
>>>># showmount -e arneb
>>>>Export list for arneb:
>>>>/usr/local/lepus-bb                          arneb,nihal
>>>>/usr/local/lepus-bb/server                   gienah,albireo,sair,ruchba
>>>>/usr/local/lepus-bb/content                  gienah,albireo,sair,ruchba
>>>>/usr/local/lepus-bb/sessions                 gienah,albireo,sair,ruchba
>>>>/usr/local/lepus-bb/course_image_main_images gienah,albireo,sair,ruchba
>>>>
>>>>Even when I specify that all nodes ("*") should have access, it still
>>>>won't work. :-(
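>>>>
>>>>(By "*" I mean the export's client list, i.e. roughly the equivalent of
>>>>an /etc/exports entry such as
>>>>
>>>>/usr/local/lepus-bb   *(rw,sync)
>>>>
>>>>the rw,sync options here are only an example; the point is the wildcard
>>>>client, which should let any host mount it.)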
>>>
>>>I'm not certain what HA means by "service", but generally, NFS mounts
>>>specify the remote end as "nodename:/export", not as "service:/export".
>>>Try:
>>>
>>>	mount -t nfs arneb:/usr/local/lepus-bb /usr/local/bb
>>>
>>>on nihal and see if that works.  It should.  Also make sure that the
>>>server (arneb) is running nfsd.  The existence of an exports file should
>>>trigger it.
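>>>
>>>Something along these lines should confirm that nfsd and mountd are
>>>registered with arneb's portmapper:
>>>
>>>	rpcinfo -p arneb | egrep 'nfs|mountd'
>>>
>>>and "service nfs status" on arneb itself will show whether nfsd is up.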
>> A service (Linux and Tru64 terminology; a "resource group" on AIX, a
>> "package" on HP-UX, if I remember correctly) is roughly a collection of a
>> virtual ip-address (alias), disk devices, and application processes that
>> can be accessed through the virtual ip-address.
>> In my current situation, arneb and nihal are hostnames, and lepusbb is an
>> alias (normally on eth0:1 when the service runs on arneb).
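>>
>> You can watch the alias move with the service: while lepusbb runs on
>> arneb, something like
>>
>> # ifconfig eth0:1
>>
>> on arneb shows the lepusbb ip-address; after a failover the alias
>> disappears from arneb and shows up on nihal instead.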
>
> Do the aliases resolve either through DNS or via /etc/hosts?  Remember
> that DNS has no concept of "service".  It knows hosts and exports (as
> shown by the "showmount -e" command).  On nihal, try both of these
> commands
>
> 	showmount -e lepusbb
> 	showmount -e arneb
>
> See if the lists they show are different based on the two aliases.
>
>> RH HA is explained in:
>> http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/pdf/rh-cs-en.pdf
> ----------------------------------------------------------------------
> - Rick Stevens, Senior Systems Engineer     rstevens at vitalstream.com -
> ----------------------------------------------------------------------
The aliases resolve through /etc/hosts by virtue of /etc/nsswitch.conf,
but dns knows them as well.
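
For what it's worth, a quick way to check the resolution on either node
(both the files and dns paths):

# grep '^hosts' /etc/nsswitch.conf
# getent hosts lepusbb
# host lepusbb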

And with lepusbb running on arneb, the two showmounts return exactly the
same result on nihal:

# showmount -e lepusbb
Export list for lepusbb:
/usr/local/lepus-bb                          arneb,nihal
/usr/local/lepus-bb/server                   gienah,albireo,sair,ruchba
/usr/local/lepus-bb/content                  gienah,albireo,sair,ruchba
/usr/local/lepus-bb/sessions                 gienah,albireo,sair,ruchba
/usr/local/lepus-bb/course_image_main_images gienah,albireo,sair,ruchba
# showmount -e arneb
Export list for arneb:
/usr/local/lepus-bb                          arneb,nihal
/usr/local/lepus-bb/server                   gienah,albireo,sair,ruchba
/usr/local/lepus-bb/content                  gienah,albireo,sair,ruchba
/usr/local/lepus-bb/sessions                 gienah,albireo,sair,ruchba
/usr/local/lepus-bb/course_image_main_images gienah,albireo,sair,ruchba

dns needn't know about services (see the example /etc/hosts entries below).
They are basically virtual hosts that run on physical hosts but can be
failed over to another physical host in case of problems (usually hardware
failure, but it also comes in handy when you need to upgrade the OS or
replace the physical hardware with something more powerful).
To everyone and everything using the service (in this case nfs devices) a
service is indistinguishable from a regular host.  You can ping it, you can
log on to it, etc.
When the nfs service fails over to another cluster member (i.e. another
physical host), the clients may experience a short delay (a matter of
seconds) in response time, but otherwise won't notice the difference.
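
As an illustration (the addresses below are made up), the /etc/hosts
entries are simply one per physical host and one per service:

192.168.1.10    arneb
192.168.1.11    nihal
192.168.1.20    lepusbb
192.168.1.21    lepustl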

emma


