[Linux-cluster] GFS network and fencing questions

Thomas Suiter redhat at insanegeeks.com
Wed Apr 8 21:32:03 UTC 2009


 

I'm going to be building a 6-node cluster with blade servers that only have
2x network connections, attached to EMC DMX storage.  The application we are
running has its own cluster layer, so we won't be using the failover
services (they just want the filesystem to be visible to all nodes).  Each
node should be reading/writing only in its own directory, on a single
filesystem of ~15TB.

 

The questions I have are these:

 

      1) The documentation is unclear on this: I'm assuming that I should
bond the 2x interfaces rather than have one interface for public and one for
private.  I'm thinking this will make the system much more available in
general, but I don't know if the public/private split is a hard requirement
(or if what I'm planning is actually better).  The best case would be to get
2x more interfaces, but unfortunately I don't have that luxury.  If bonding
is preferred, would I need to use 2x IP addresses in this configuration, or
can I use just 1x per node?
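
For reference, here is roughly the bonding setup I'm picturing (untested, and
the addresses are just placeholders).  My understanding is that active-backup
(mode=1) is the mode generally recommended for the cluster interconnect, and
that depending on the minor release the bonding options may need to go in
/etc/modprobe.conf instead of BONDING_OPTS:

    /etc/sysconfig/network-scripts/ifcfg-bond0:
        DEVICE=bond0
        IPADDR=192.168.10.11
        NETMASK=255.255.255.0
        ONBOOT=yes
        BOOTPROTO=none
        BONDING_OPTS="mode=1 miimon=100"

    /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for eth1):
        DEVICE=eth0
        MASTER=bond0
        SLAVE=yes
        ONBOOT=yes
        BOOTPROTO=none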

 

      2) I have the capability to support SCSI-3 reservations inside the
DMX; should I be using SCSI-3 instead of power-based fencing (or both)?  It
seems like a relatively new option; is it ready for use, or should it bake a
bit longer?  I've used Veritas VCS with SCSI-3 previously and it was
sometimes semi-annoying, but the reality is that availability and data
protection are more important than not being annoyed.
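
In case it helps frame the question, this is roughly the cluster.conf fencing
I'd expect to end up with for SCSI-3 (a sketch only; I haven't verified the
fence_scsi parameters against the docs, and I understand the scsi_reserve
service also needs to be enabled so the keys get registered at boot):

    <clusternode name="node1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="scsi-fence" node="node1"/>
        </method>
      </fence>
    </clusternode>
    <!-- ...repeated for node2 through node6... -->

    <fencedevices>
      <fencedevice agent="fence_scsi" name="scsi-fence"/>
    </fencedevices>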

 

      3) Since I have more than 2x nodes, should I use qdiskd or not (is it
even needed in this type of configuration with no failover services)?
Looking around, it appears to have caused some problems in the past.
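
If I do end up using it, this is the sort of quorumd stanza I've seen in
examples (the values are guesses on my part, not tuned, and the qdisk device
would first need a label via something like "mkqdisk -c /dev/mapper/qdisk -l
gfs_qdisk"):

    <quorumd interval="1" tko="10" votes="1" label="gfs_qdisk">
      <heuristic program="ping -c1 -w1 192.168.10.1" score="1" interval="2"/>
    </quorumd>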

 

      4) Any other tips for a first-time GFS user?

 

Thanks

      Thomas Suiter

 

 
