Gordan Bobic wrote:
> GFS will take care of that for you. DRBD directs reads to nodes that
> are up to date until everything is in sync.
> Make sure that in drbd.conf you put in a stonith parameter pointing at
> your fencing agent with suitable parameters, and set the timeout to
> slightly less than what it is set to in cluster.conf. That ensures you
> are protected from the race condition where DRBD drops the peer but the
> node resumes heartbeating before the fencing timeout expires.
> Oh, and if you are going to use DRBD there is no reason to use LVM.
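For reference, the fencing setup described above might look roughly like the
fragment below in DRBD 8.x syntax. The resource name, handler path, and timeout
value are all hypothetical and must be adapted to your cluster; DRBD's net
timeout is specified in tenths of a second, so check your version's manual:

```
resource r0 {
  net {
    # Declare the peer dead slightly sooner than the fencing timeout
    # configured in cluster.conf (value is in tenths of a second)
    timeout 50;
  }
  disk {
    # Freeze I/O and invoke the fence-peer handler when the peer is lost
    fencing resource-and-stonith;
  }
  handlers {
    # Hypothetical wrapper script around your cluster's fencing agent;
    # it should return the exit code DRBD expects for a fenced peer
    # so that DRBD lifts the I/O freeze afterwards
    fence-peer "/usr/local/sbin/drbd-fence-peer.sh";
  }
}
```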
This is an interesting approach. I understand that DRBD with GFS2
doesn't require LVM in between, but skipping it does bring some
inflexibility. What is the main reason you don't use LVM on top of
DRBD? Is it just that you didn't need the benefits it brings, or do
you think it causes more problems than it solves?
- For each logical volume, one has to set up a separate DRBD resource.
- Cluster-wide logical volume resizing is not easy.
- No snapshots - this is very important to me for MySQL backups.
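To illustrate the last point, a snapshot-based MySQL backup with LVM on top of
DRBD might be sketched as below. This requires a real LVM and MySQL setup and
root privileges; the volume group, logical volume, sizes, and paths are all
hypothetical:

```shell
# Hold the read lock for the duration of the snapshot only; the mysql
# client's "system" command runs lvcreate while the connection (and
# therefore the lock) is still open
mysql <<'EOF'
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql
UNLOCK TABLES;
EOF

# Mount the snapshot read-only and archive it, then clean up
mount -o ro /dev/vg0/mysql-snap /mnt/snap
tar czf /backup/mysql-$(date +%F).tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/mysql-snap
```

With one big DRBD device and no LVM layer, there is no equivalent way to take
an atomic copy-on-write snapshot, which is presumably why this matters for
backups.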