Hi All,
Just want to mention one point: I am using the ext3 filesystem in the setup below.
On 03/15/2011 05:43 PM, jayesh.shinde wrote:
Hi All,
I don't have a SAN, so I want to build a two-node DRBD
active-active setup for the mysql and http resources (i.e. /dev/drbd2
and /dev/drbd3 in my case) with RHCS.
I configured the required setup from http://sourceware.org/cluster/wiki/DRBD_Cookbook
and from the DRBD links.
For the last week I have been
testing the same scenario in two Xen VMs with kernel
2.6.18-128.el5xen. Everything is working fine: the mysql and http
services move from one server to the other, etc. But it does not work correctly
when a node gets fenced (i.e. when the network fails on one of the nodes).
I am facing the DRBD split-brain problem. I have searched a lot on
Google and the mailing lists but have not found a proper
solution or suggestion.
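For what it's worth, DRBD 8.x can attempt automatic split-brain recovery via policies in the net section of the resource definition. Below is a minimal sketch, not my actual config; the resource name r2 is an assumption matching /dev/drbd2, so adjust it to your drbd.conf:

```
resource r2 {
  net {
    allow-two-primaries;                 # required for active-active
    # Automatic split-brain recovery policies:
    after-sb-0pri discard-zero-changes;  # no Primaries: drop the side with no changes
    after-sb-1pri discard-secondary;     # one Primary: discard the Secondary's data
    after-sb-2pri disconnect;            # two Primaries: stay disconnected, resolve by hand
  }
  handlers {
    # Ships with DRBD 8.3; mails a notification when split brain is detected.
    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
  }
}
```

Note that in a dual-primary setup both nodes may be Primary when the link breaks, in which case only the after-sb-2pri policy applies, and "disconnect" still leaves the resolution to the admin.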
For fence testing I am doing the following:
1) On node1 the http service is running with /dev/drbd2.
2) On node2 the mysql service is running with /dev/drbd3.
At this moment the DRBD primary-primary status is working.
3) Now, when I stop the network service on node2 manually with "service
network stop", node2 gets fenced properly within 3-5 seconds
and the mysql service switches to node1 properly.
4) After fencing, when node2 comes back up, I face the DRBD
split-brain issue between node1 and node2.
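When the split brain does occur, it can be resolved by hand in DRBD 8.3 by picking a split-brain victim whose changes are thrown away; a sketch of the commands (r3 is an assumption matching /dev/drbd3):

```
# On the victim (node2 in this test), discard its changes:
drbdadm secondary r3
drbdadm -- --discard-my-data connect r3

# On the survivor (node1), only needed if it is already StandAlone:
drbdadm connect r3
```

After the resync completes, the victim can be promoted to Primary again.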
My questions:
1) Why does DRBD split brain not occur when I reboot, shut down,
or destroy the machine with the xm command,
i.e. "xm reboot <node1/node2>", "xm
shutdown <node1/node2>", or "xm destroy <node1/node2>"?
2) Why does the DRBD split-brain issue occur only when a node
is fenced?
3) Is the combination of DRBD active-active + RHCS a stable
and workable solution?
In one of the mailing list threads below I found that it is workable.
4) Is any extra setting required for fencing in such a setup?
5) Do I need to use any custom fencing logic?
6) Is anyone using such a "DRBD active-active + RHCS" setup in production
without split-brain issues?
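Regarding question 4, one thing I am looking at is DRBD's own resource-level fencing, which can be wired into RHCS so DRBD freezes I/O and fences the peer before resuming. A sketch, assuming the obliterate-peer.sh helper from the RHCS/DRBD integration is installed (the script path is an assumption; adjust to your package):

```
resource r2 {
  disk {
    # Freeze I/O on replication loss until the fence-peer handler succeeds.
    fencing resource-and-stonith;
  }
  handlers {
    # Calls fence_node against the peer via cman/fenced; path is an assumption.
    fence-peer "/usr/lib/drbd/obliterate-peer.sh";
  }
}
```

I am not sure whether this is the recommended way with RHCS, so any confirmation would help.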
Please guide and advise on the same.