
Re: SV: SV: [Linux-cluster] Linux Clustering Newbe




Kristoffer,

Thanks for all the help. One last question....

Instead of GFS, what do you think of OCFS2 from Oracle? It is open source and appears to accomplish the same function as GFS (I am still reading the material).






"Kristoffer Lippert" <kristoffer lippert jppol dk>
Sent by: linux-cluster-bounces redhat com

07/20/2007 08:36 AM

Please respond to
linux clustering <linux-cluster redhat com>

To
"linux clustering" <linux-cluster redhat com>
cc
Subject
SV: SV: [Linux-cluster] Linux Clustering Newbe





Hi,
 
A fence device is a device that can "build a fence" around a node and thus keep it from corrupting a shared filesystem.
Most commonly, I think, a power switch is used. It simply cuts the power to the defunct server.
It looks and works like this:
http://www.wti.com/guides/rpb115ug.htm
(but there are of course loads of brands available - no, I don't work for WTI ;-)
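For RHEL Cluster Suite, a power-switch fence device is declared in /etc/cluster/cluster.conf. A minimal sketch (the device name, IP address, password, and port number below are made-up examples, and the agent depends on your hardware; fence_wti is the agent for WTI switches):

```xml
<!-- fragment of /etc/cluster/cluster.conf - hypothetical values -->
<fencedevices>
        <fencedevice agent="fence_wti" name="wti-switch"
                     ipaddr="192.168.1.50" passwd="secret"/>
</fencedevices>
<clusternodes>
        <clusternode name="node1" votes="1">
                <fence>
                        <method name="1">
                                <device name="wti-switch" port="1"/>
                        </method>
                </fence>
        </clusternode>
</clusternodes>
```

When the cluster decides node1 is defective, fenced calls the agent, which power-cycles the named outlet before any other node touches the shared storage.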
 
Alternatively, there is the option of "cutting" the network from the server (that is, the network between server and disks) using a switch. I have not tried that method of fencing, so someone else might be able to fill in.

Depending on the topology of your cluster (whether your disks are connected through a dedicated fibre network or, for instance, iSCSI), the way you "cut" the defective server off from the disks will differ.

 
So in short, a fence device can be many things. :-)
 
Hope it makes sense.
/Kristoffer
 


From: linux-cluster-bounces redhat com [mailto:linux-cluster-bounces redhat com] On behalf of Dan Askew jmsmucker com
Sent:
20 July 2007 14:26
To:
linux clustering
Subject:
Re: SV: [Linux-cluster] Linux Clustering Newbe



Could you elaborate on the fence device? What would you suggest using?





"Kristoffer Lippert" <kristoffer lippert jppol dk>
Sent by: linux-cluster-bounces redhat com

07/20/2007 08:13 AM

Please respond to
linux clustering <linux-cluster redhat com>


To
"linux clustering" <linux-cluster redhat com>
cc
Subject
SV: [Linux-cluster] Linux Clustering Newbe







Hi,

 

You need GFS for the changes to appear on both servers. With GFS, when one server changes a file, the other server is made aware of the change. GFS also takes care of file locking. You also need a fence device, so the cluster can shut down a "defective" server and make sure it doesn't corrupt the GFS.

 

For your current setup:

When you have both servers running, you could mount the ext3 fs on both servers, but only the server that writes a file will be aware of it. The other server will only see the new file when you remount the fs.
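The difference can be sketched as follows (device and mount-point names are hypothetical examples; note that mounting ext3 read-write on two nodes at once will corrupt the filesystem, so the ext3 lines are shown only to illustrate the caching problem):

```shell
# ext3: each node caches filesystem state independently, so a file
# written on node A is invisible on node B until B remounts.
mount -t ext3 /dev/vg0/lv0 /mnt/shared    # on node A
mount -t ext3 /dev/vg0/lv0 /mnt/shared    # on node B - unsafe!

# GFS: the distributed lock manager keeps the nodes coherent, so a
# file created on node A appears on node B immediately.
gfs_mkfs -p lock_dlm -t mycluster:shared -j 2 /dev/vg0/lv0
mount -t gfs /dev/vg0/lv0 /mnt/shared     # safe on both nodes
```

The -j 2 option creates one journal per node; -t names the cluster and filesystem as they appear in cluster.conf.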

 

Hope this helps a bit.

 

Kind Regards

Kristoffer

 
 


From: linux-cluster-bounces redhat com [mailto:linux-cluster-bounces redhat com] On behalf of Dan Askew jmsmucker com
Sent:
20 July 2007 14:01
To:
linux clustering
Subject:
[Linux-cluster] Linux Clustering Newbe



Greetings all,


I am an old veteran of HP-UX ServiceGuard. I am trying to get an NFS Linux cluster working and need some advice.


I have read the NFS Cookbook from Red Hat and have the following working:


2 Node Linux Cluster  (RHEL AS 4.0 update 5)


One test disk on LVM, formatted ext3 (I have not decided on GFS or not)


Use a virtual IP address to access the disks via NFS


When SYSTEMA runs the service, the client machine accesses the disk and makes changes. When I then fail over to SYSTEMB, the changes made by the client are not present.

I am running the CLVMD daemon.
The LVM disks are mounted on both systems.

I have made the following changes to LVM.CONF


(I have tried locking_type = 2 and locking_type = 3; both have the same results as above.)
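For reference, the clustered-locking setting lives in the global section of /etc/lvm/lvm.conf. A sketch (exact library paths may vary by release):

```
# /etc/lvm/lvm.conf (fragment)
global {
    # 3 = built-in clustered locking via the clvmd daemon
    locking_type = 3
    # alternatively, 2 = load an external locking library:
    # locking_type = 2
    # locking_library = "liblvm2clusterlock.so"
}
```

Note that CLVM only coordinates LVM metadata (volume creation, resizing, and so on) across nodes; it does not make the contents of an ext3 filesystem coherent between them, so changing locking_type alone cannot fix the failover symptom described above.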


Sorry for my ignorance, but can anyone tell me what I am doing wrong? Would GFS solve the syncing problem?





--
Linux-cluster mailing list
Linux-cluster redhat com
https://www.redhat.com/mailman/listinfo/linux-cluster

