[Linux-cluster] 4 node cluster even split

Patrick Caulfield pcaulfie at redhat.com
Wed Apr 11 08:17:41 UTC 2007


rhurst at bidmc.harvard.edu wrote:
> I don't know if this could help you, but let me share our configuration
> as an example.  I have overridden the default votes on 2 nodes that are
> vital to the application in an 11-node cluster.  If I cold start the
> cluster, those 2 nodes running alone are enough to quorate, because they
> carry 5 votes each, for a total of 10 votes.  The remaining 9 nodes carry
> the default 1 vote each, so the cluster's expected votes are 19.
> 
> 10 = (19 / 2) + 1   (integer division)
> 
> If I lose 1 of those 2 network director nodes, I lose 5 votes but remain
> quorate, unless I lose 5 more regular nodes along with it.  If I lose
> BOTH network director nodes (10 votes), I don't care about quorum,
> because my application is dead anyway (no network directors managing
> client connections!).  But we have a contingency plan to "promote" one
> of the failover nodes to a network director by running the correct
> services and adjusting its vote count to 5 for extra redundancy.
> 
> It would be nice to see other implementations that vary from the typical
> 1 vote per node cluster.
> 
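[For anyone who wants to sanity-check this kind of weighted-vote arithmetic
before committing to it, here is a minimal sketch in plain Python of the
quorum rule used above (quorum = floor(expected_votes / 2) + 1). The node
names and vote counts are hypothetical, chosen to match the 11-node example:
2 "network director" nodes at 5 votes each and 9 regular nodes at 1 vote each.]

    # Weighted-vote quorum arithmetic, following the example above.
    # Node names and vote counts are hypothetical illustrations.
    votes = {f"director{i}": 5 for i in range(1, 3)}      # 2 directors, 5 votes each
    votes.update({f"node{i}": 1 for i in range(1, 10)})   # 9 regular nodes, 1 vote each

    expected_votes = sum(votes.values())   # 10 + 9 = 19
    quorum = expected_votes // 2 + 1       # floor(19 / 2) + 1 = 10

    def quorate(live_nodes):
        """True if the votes held by the live nodes meet the quorum threshold."""
        return sum(votes[n] for n in live_nodes) >= quorum

    # The two directors alone can form a quorum on a cold start:
    print(quorate(["director1", "director2"]))                    # True  (10 >= 10)

    # Lose one director: 14 votes remain, still quorate.
    survivors = [n for n in votes if n != "director1"]
    print(quorate(survivors))                                      # True  (14 >= 10)

    # Lose one director plus 5 regular nodes: only 9 votes remain.
    survivors = ["director2"] + [f"node{i}" for i in range(1, 5)]
    print(quorate(survivors))                                      # False (9 < 10)

[In a real cluster the same numbers would come from the per-node votes
setting in cluster.conf and cman's expected votes, if I recall the
configuration correctly, but the arithmetic is the same.]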

Oh, it's not uncommon. I used to do this with a 96-node VAX cluster way back when!

The main reason it's not at the front of the documentation is that you really
need to know what you are doing with that sort of system. If people start
arbitrarily increasing the votes of nodes just to keep quorum, they could get
themselves into some horrible data-corrupting scenarios.

-- 
Patrick

Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street,
Windsor, Berkshire, SL4 1TE, UK.
Registered in England and Wales under Company Registration No. 3798903



