
Re: [Linux-cluster] Distributed Storage in Cluster Suite

Joseph L. Casale wrote:
Or you can go active-active with GFS. :)

Ok, based on Lon Hohberger's and your suggestion, I'll mock this up.
I presume that choosing Pri/Pri with GFS2 avoids the management issues
that the Pri/Sec arrangement would otherwise impose.

Yes. Personally, I'm still using GFS1, but Red Hat deems GFS2 stable enough for production. You may want to trial both and see which yields better performance. GFS1 is more tweakable, so some people report much better performance with it for their use cases.
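To illustrate what "more tweakable" means in practice, GFS1 exposes per-mount tunables via gfs_tool. The mount point and values below are arbitrary examples, not recommendations:

```
# GFS1 tunables are set per-mount with gfs_tool settune; these two
# affect glock caching (example values only):
gfs_tool settune /mnt/gfs glock_purge 50    # purge a share of unused glocks per scan
gfs_tool settune /mnt/gfs demote_secs 200   # demote unused glocks sooner than default
```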

Looking through drbd.sh I see cases for promote/demote. I added this (the wrapper, actually)
as a resource "script" with a child "fs" in a mock cluster; as the underlying device
was primary on that node, it just started. :) How does rhcs know, when a "script"
resource is added, to pass the required cases? How do I configure this?
I use it in active-active mode with GFS. In that case I just use the fencing agent in DRBD's "stonith" configuration, so that when a disconnection occurs, the failed node gets fenced.
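On the "required cases" question: rgmanager invokes a &lt;script&gt; resource like an LSB init script, passing start, stop, or status as the first argument, so the wrapper only needs to map those actions onto drbdadm. A minimal sketch, assuming a DRBD resource named "r0" (the name and the exact drbdadm invocations are assumptions, not taken from drbd.sh):

```shell
#!/bin/sh
# Hypothetical rgmanager "script" resource wrapper for DRBD.
# rgmanager calls this like an LSB init script: $1 is start|stop|status.

RES="r0"    # placeholder DRBD resource name

action_to_cmd() {
    # Map the LSB action rgmanager passes in to a drbdadm subcommand.
    case "$1" in
        start)  echo "primary"   ;;  # promote this node to Primary
        stop)   echo "secondary" ;;  # demote this node back to Secondary
        status) echo "role"      ;;  # report the current role
        *)      return 1 ;;
    esac
}

if [ -n "$1" ]; then
    cmd=$(action_to_cmd "$1") || { echo "Usage: $0 {start|stop|status}" >&2; exit 2; }
    exec drbdadm "$cmd" "$RES"
fi
```

With a child "fs" resource, rgmanager starts the script (promoting DRBD) before mounting, and stops it after unmounting, which matches the behaviour described above.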

I see the ifmib fencing agent in git, am I ok to utilize this as my single fencing
agent in this drbd pri/pri setup with gfs2? I suspect this would be effective
in my environment given the equipment I have.

Sorry, never used that fencing agent. Perhaps someone else can answer that.

Does this allow me to skip the stonith configuration in drbd.conf
and let rhcs handle fencing entirely on its own? Or is this additional layer
of fencing needed for drbd's sake, in which case I guess I
would implement both?

I implement both, just in case there is a weird transient outage and DRBD disconnects but RHCS doesn't. You lose nothing by putting it in stonith in drbd.conf, and it can save you problems later on.
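For reference, the DRBD side of this lives in drbd.conf as a fencing policy plus a fence-peer handler. A sketch, assuming a resource named "r0"; the handler path is an example (obliterate-peer.sh ships with some DRBD/RHCS packages and fences the peer through the cluster's own fence agents):

```
resource r0 {
  disk {
    fencing resource-and-stonith;   # freeze I/O on disconnect and call the handler
  }
  handlers {
    # Example handler path; substitute whatever fence helper your packages provide.
    fence-peer "/usr/lib/drbd/obliterate-peer.sh";
  }
}
```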

