
Re: [Linux-cluster] GFS 6.0.2-24 + NFS (ALSO)

Also, on this same cluster, when using the "gulm stonith" fencing module in 
clumanager, I get errors generated by ...

log_err("Protocol Mismatch: We're %#x and They're %#x\n", GIO_WIREPROT_VERS, 
x_proto);

which, judging by the surrounding code, seems to indicate that the 
fence device login is failing. I am trying to fence using fence_ilo against 
DL360's with iLO firmware 1.64. My config looks something like this...

fence_devices {
	iLO_1 {


and in the nodes file I reference the fence like this..


		fence {
			iLO {

I pass no options since the only option I use is "off" and it is defined in 
the fence.ccs file.
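For reference, a fuller version of the two fragments above might look like the sketch below. The device name, hostname, and credentials are placeholders, and the parameter names (agent, hostname, login, passwd, action) are assumptions about the GFS 6.0 CCS file format, so check them against a known-good configuration:

```
# fence.ccs -- sketch only; parameter names are assumptions
fence_devices {
	iLO_1 {
		agent = "fence_ilo"
		hostname = "ilo-address-of-node1"
		login = "Administrator"
		passwd = "secret"
	}
}

# nodes.ccs -- each node references the fence device by name
nodes {
	node1 {
		fence {
			power {
				iLO_1 {
					action = "off"
				}
			}
		}
	}
}
```

It may also be worth running the agent by hand (something like `fence_ilo -a <ilo-address> -l <login> -p <passwd> -o off`; the option letters here are assumptions, see the agent's usage output) to confirm the iLO login works outside of gulm.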

Any ideas as to what might be causing this?


On Saturday 26 February 2005 11:57, Corey Kovacs wrote:
> I have a 5 node cluster running GFS 6.0.2-24 with kernel 2.4.21-27.0.1 on
> RHASu4. I have three GFS filesystems (20GB, 40GB, and ~1.8TB) mounted from an
> MSA1000 SAN. The large partition is being re-exported via NFS. When copying a
> large file (~450GB) to the NFS re-exported GFS filesystem, the filesystem
> hangs across all nodes. When the offending node is shut down (it never
> gets fenced, and I am using fence_ilo) the system "wakes up". The nodes are
> DL360's with 2GB of RAM each. They are using QLogic 2340 fibre cards and
> Red Hat branded drivers. Three of the 5 nodes are configured as lock
> managers. I've seen messages about lock_gulm not freeing memory. Are there
> issues with NFS and GFS together? What things should be done to tune such a
> configuration? Any help would be greatly appreciated.
> Corey
> --
> Linux-cluster mailing list
> Linux-cluster@redhat.com
> http://www.redhat.com/mailman/listinfo/linux-cluster
