I'm currently experiencing a similar
problem with an HA NFS server that I just built on RHEL4 with GFS.
I have two different Linux clients:
one running RHEL3 U8, the other running RHEL4 U5 (same as the HA NFS servers).
If I use the same standard mount options
on both clients (e.g. mount SERVER:/exportfs /mountpoint -t
nfs -o rw,noatime ) then everything works fine until I perform a failover.
At that point the RHEL3 client is OK, but the RHEL4 client can no
longer stat the filesystem (df hangs). If I move the service back,
the hung df command completes. I don't see an I/O error per se, but
any copies to or from that mountpoint are inactive until I relocate the service.
I tried other versions of Unix and found
that all of them could stat the file system after failover except the RHEL4
U5 version. The only way around this I've found so far is to use the
UDP protocol instead of TCP with NFS version 3.
So my mount commands look something
like this:
# mount SERVER:/exportfs /mountpoint
-t nfs -o rw,noatime,udp,nfsvers=3
I don't know if you can tolerate UDP
in your environment, but it might be worth playing around with.
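If the UDP workaround holds up, the same options can be made persistent in /etc/fstab on the client. This is just a sketch of the equivalent entry; SERVER, /exportfs, and /mountpoint are placeholders from the mount command above:

```shell
# /etc/fstab entry equivalent to:
#   mount SERVER:/exportfs /mountpoint -t nfs -o rw,noatime,udp,nfsvers=3
# Fields: device  mountpoint  fstype  options  dump  pass
SERVER:/exportfs  /mountpoint  nfs  rw,noatime,udp,nfsvers=3  0 0
```

After editing fstab, `mount /mountpoint` (or a reboot) should pick up the new options.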
kieran JOYEUX <kjoyeux jouy inra fr> Sent by: linux-cluster-bounces redhat com
08/16/2007 03:15 AM
Please respond to
linux clustering <linux-cluster redhat com>
Linux-cluster redhat com
[Linux-cluster] NFS failover
I am implementing a two-node cluster sharing their local
storage via NFS to one client.
At the moment, I am simulating a failover during a copy from the NFS
server to the local client disk.
The first time I got an NFS file handle error. I tried to use a
filesystem ID (fsid) in the mount parameters on the client, but now here
is my issue: