[Linux-cluster] Simplest 4 node GFS 6.1 cluster
Lon Hohberger
lhh at redhat.com
Thu Jun 23 15:00:56 UTC 2005
On Thu, 2005-06-23 at 08:43 -0500, Troy Dawson wrote:
> This idea of fencing is what's throwing me off. If I'm reading things
> right, I can't do group GFS without them being in a cluster, and they
> can't be in a cluster without doing fencing. But the fencing seems to
> just allow the various machines to take over for one another.
Actually, fencing prevents hung or rogue nodes from corrupting the
file system.
Believe it or not, power-cycle fencing can actually help more than you
think: a node without power can't flush buffers, so after the node is
fenced, you can safely have it turned back on. If the hang was a
software failure, your cluster will resume normal operations without
any manual intervention.
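The fence-then-recover flow described above can be sketched roughly as
follows. (This is an illustrative sketch only: the PowerSwitch class and
function names are hypothetical stand-ins for a real fence agent, not the
actual fenced/fence agent code.)

```python
# Sketch of the power-cycle fencing flow: cut power to guarantee the hung
# node is silent, recover, then power it back on so it can rejoin.
# All names here are hypothetical, for illustration only.

class PowerSwitch:
    """Hypothetical remote power switch with one outlet per node."""
    def __init__(self):
        self.on = {}

    def power_off(self, node):
        # With power cut, the node cannot flush dirty buffers to the
        # shared disk -- this is the whole point of fencing.
        self.on[node] = False

    def power_on(self, node):
        self.on[node] = True


def fence_and_recover(switch, node):
    """Power-cycle a hung node, then let it rejoin the cluster."""
    switch.power_off(node)   # node is now guaranteed silent
    # ...cluster can now safely replay the node's journal and
    # resume GFS access...
    switch.power_on(node)    # if it was a software hang, it just reboots
    return switch.on[node]
```

If the failure was software-only, the node boots, rejoins the cluster,
and no administrator ever has to touch it.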
> I also don't have access to the SAN switch, other than my machines plug
> into it. It's essentially a black box. These machines also don't have
> any way to remotely turn power on and off.
:(
> Is GFS what I really want? I've tried just standard ext3, but I was
> getting a caching problem with my read only machines. Do I just want to
> try and fix my caching problem?
You'll probably need to do synchronous I/O on all nodes. This will
likely be slow, but I think your limiting factor will be network
bandwidth, not disk I/O times.
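On Linux, synchronous I/O can be requested per file descriptor with the
O_SYNC open flag, so each write returns only after the data has reached
the device. A minimal sketch (the path is made up for the example):

```python
import os

# Sketch: open a file for synchronous I/O (Linux O_SYNC semantics).
# Every write() blocks until the data is on stable storage, which is
# what makes it slow -- but safe -- on shared disks.
path = "/tmp/sync_demo.dat"
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
os.write(fd, b"cluster metadata\n")  # returns only after hitting disk
os.close(fd)
```

The same effect can be had filesystem-wide with the `sync` mount option,
at a corresponding cost to every write on that mount.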
Note that GFS was designed to prevent "hot spots": places on the disk
which are accessed over and over, like the inode table on ext3, for
example. Overuse of hot spots can cause premature failure of drives.
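The idea of spreading metadata out instead of hammering one fixed table
can be illustrated with a toy sketch. (This is loosely inspired by GFS's
resource-group design, not its actual on-disk layout; the names and the
group count are invented for the example.)

```python
# Toy sketch: spread inode allocations across several "resource groups"
# rather than one fixed inode table, so no single disk region becomes a
# hot spot. Not the real GFS layout -- illustration only.

NUM_RESOURCE_GROUPS = 8

def resource_group_for(inode_number):
    # A simple modulo hash spreads metadata writes across the disk.
    return inode_number % NUM_RESOURCE_GROUPS

counts = [0] * NUM_RESOURCE_GROUPS
for ino in range(1000):
    counts[resource_group_for(ino)] += 1

# The 1000 allocations land evenly, 125 per group, instead of all
# hitting one region of the disk.
```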
Just things to consider. You can probably do it without GFS, but I
wouldn't recommend it. Remote power control does not have to be
expensive. E.g.:
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&category=86723&item=5783773234&rd=1&ssPageName=WDVW
That's cheaper than replacing ONE enterprise-grade SCSI or FC disk.
(Disclaimer: I have no affiliation whatsoever with the seller.)
-- Lon