[Linux-cluster] GFS+DRBD+Quorum: Help wrap my brain around this

Colin Simpson Colin.Simpson at iongeo.com
Sun Nov 21 21:46:03 UTC 2010


I suppose what I'm saying is that there is no real way to get a quorum
disk with DRBD. And a quorum disk doesn't really gain you anything
without actual shared storage anyway.

Now as I say, I'm pretty new to all this, but I've not seen anyone try
to set up a 3rd node for quorum with DRBD.

The scenario is well mitigated by DRBD on two nodes already, without
this. If you configure it properly, the system will not start DRBD (and
all the cluster storage stuff after it, presuming your startup files are
in the right order) until it sees the second node. The node with the
newest data will then serve all requests to either node and will sync
the older node. There's no need to manually outdate the data on a node;
it should do that itself, given the right options.
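
By "the right options" I mean roughly the resource fencing and
after-split-brain policies. A minimal sketch (the handler script path
and the exact policy choices are my assumptions based on the Linbit
RHCS material, so check them against your DRBD version):

  disk {
    fencing resource-and-stonith;        # outdate/fence the peer before resuming I/O
  }
  handlers {
    # RHCS-oriented fence helper; path assumed, verify on your install
    fence-peer "/usr/lib/drbd/obliterate-peer.sh";
  }
  net {
    after-sb-0pri discard-zero-changes;  # keep the copy that actually wrote data
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }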

The situation of two nodes coming up with the out-of-date one arriving
first should never arise if you give it sufficient time to see the other
node (it will always pick the newer, good data). You can make it wait
forever and then require manual intervention if you prefer (should a
node be down for an extended period). For me, a couple of minutes
waiting for the other node is sufficient if the resource was already
degraded, maybe a bit longer if DRBD was in sync before they went down.
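
That waiting behaviour comes from the startup timeouts. Something like
the following (the numbers are only examples) gives you an indefinite
wait on a clean start and a bounded one if the resource was already
degraded:

  startup {
    wfc-timeout 0;          # 0 = wait for the peer indefinitely on a normal start
    degr-wfc-timeout 120;   # wait 2 minutes if the cluster was degraded before reboot
  }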

I can send you configs I believe are correct, taken from the Linbit
docs on using DRBD Primary/Primary with GFS, if you like.
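
Roughly, the resource ends up looking like this. It's a from-memory
sketch rather than a verbatim copy of the Linbit example, so hostnames,
devices and addresses are placeholders, and the startup/net/handlers
bits shown above would go in as well:

  resource r0 {
    protocol C;                  # synchronous replication, required for GFS

    startup {
      become-primary-on both;    # both nodes primary for GFS
    }

    net {
      allow-two-primaries;       # Primary/Primary
    }

    on node1 {
      device    /dev/drbd0;
      disk      /dev/sda5;       # placeholder backing device
      address   10.0.0.1:7788;
      meta-disk internal;
    }
    on node2 {
      device    /dev/drbd0;
      disk      /dev/sda5;
      address   10.0.0.2:7788;
      meta-disk internal;
    }
  }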

But I'm told (from a thread I posted on the DRBD list) that this should
always work. You have to work a bit to break it, particularly if you
wait forever for the other DRBD node: you would have to bring up an
outdated node and explicitly tell it to proceed while it was still
waiting for its peer.
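
If you ever do find yourself at that prompt, it's worth checking the
DRBD state first so you know whether the local copy is outdated (the
resource name r0 is just an example):

  drbdadm dstate r0    # e.g. "Outdated/DUnknown" - don't proceed alone
  drbdadm cstate r0    # e.g. "WFConnection" while still waiting for the peer
  cat /proc/drbd       # overall picture of all resources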

Does that make any sense?

Colin

On Fri, 2010-11-19 at 20:22 +0000, Andrew Gideon wrote:
> 
> I've given this a little more thought.  I'm not sure if I'm thinking in
> the proper direction, though.
> 
> If cluster quorum is preserved despite A and B being partitioned, then
> one of A or B will be fenced (either cluster fencing or DRBD fencing).
> This would be true whether quorum is maintained with a third node or a
> quorum disk.
> 
> More, to avoid the problem described a couple of messages back (A fails,
> B fails, A returns w/o knowing that B has later data), the fact that B
> continued w/o A needs to be stored somewhere.  This can be done either
> on a quorum disk or via a third node.  Either way, the fencing logic
> would make a note of this.  For example, if A were fenced then that bit
> of extra storage (quorum disk or third node) would reflect that B had
> continued w/o A and that B therefore had the latest copy of the data.
> 
> When A or B returns to service, it would need to check that additional
> storage.  If a node determines that its peer has the later data, it can
> invoke "drbdadm outdate" on itself.
> 
> Doesn't this seem reasonable?  Or am I misthinking it somehow?
> 
>         - Andrew
