[Linux-cluster] Re: Linux-cluster Digest, Preventing LVM from concurrent access

rmicmirregs rmicmirregs at gmail.com
Mon Aug 24 12:06:38 UTC 2009


Hi Edson

On Sun, 23-08-2009 at 16:01 -0300, Edson Marquezani Filho wrote:
> On Sun, Aug 23, 2009 at 15:20, David
> Hollister <davidhollister at comcast.net> wrote:
> > We use HA-LVM in environments where we do not use GFS. It works well in preventing shared storage from being mounted on both nodes simultaneously. However, I am not sure how it would react in a split-brain condition, which appears to be what you are describing. You should probably consider adding redundancy to your heartbeat via IP bonding, i.e. having two NICs on each host bonded to one IP.
> 
> Indeed, I won't use GFS. Since I don't need any volume to be mounted at
> different points simultaneously, there is no reason to use it, and I
> want to keep my system as simple as possible for my needs.
> 
> When you say that HA-LVM prevents shared storage from being mounted on
> more than one node, do you mean that I could not do that, even
> manually? Or do you mean that this won't be done by the cluster
> infrastructure? AFAIK, when using HA-LVM, the LVM locking stays
> local, right? So that's the problem for me, because some unaware
> people could break things.
> 
> I have already thought about bonding two interfaces, but the simple
> idea of having cables disconnected still sounds very dangerous to me.
> Also, even if the master server were restarted, when it came back it
> would try to launch the virtual machines, but they would be mounted on
> the slave. Do you understand?
> 
> I don't think that's the correct way to do what I want, and I can't
> figure out how to do it.
> 
> How would you do that? (Forget about Heartbeat, it was just the
> easiest way I found of doing this; what I need is for things to work
> well, it doesn't matter how. But I want to keep things no more
> complex than necessary.)
> 
> Thank you.
> 
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

You raise some interesting and distinct points here:

1.- Shared storage with a non-clustered filesystem: I have no experience
with HA-LVM; I have just checked its knowledge base articles on RHEL to
refresh my ideas.

If I'm not mistaken, what you want is shared storage protected against
concurrent mounts of your non-clustered filesystems, EVEN against admins
doing stray mounts by hand, am I right?

As an alternative to HA-LVM, you can use CLVMD together with a resource
script I have developed called "lvm-cluster". You can find it here:

https://www.redhat.com/archives/cluster-devel/2009-June/msg00065.html

It's not included in the main project yet, but some people on the list
are testing it successfully.
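
For reference, the basic plumbing to put a volume group under CLVMD
control looks roughly like this (the VG name "vg_vm" is just a
placeholder; this assumes the RHEL Cluster Suite packages, including
lvm2-cluster, are installed):

  # switch LVM to cluster-wide locking (sets locking_type = 3 in /etc/lvm/lvm.conf)
  lvmconf --enable-cluster

  # start the cluster LVM daemon on every node
  service clvmd start

  # mark the shared volume group as clustered
  vgchange -c y vg_vm

The lvm-cluster script from the link above is then used from RGMANAGER
to control activation of the logical volumes.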

Admittedly, it would not prevent an admin from doing a stray mount by
hand, but Brem Belguebli has opened a Bugzilla ticket to fix the LVM
issue that is causing that behaviour. Apart from this admin issue, the
script protects your filesystem from concurrent mounts through the LVM
layer.

I prefer this approach because you don't need to generate new initrd
images for your nodes every time you configure new volumes to be
managed by HA-LVM.
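
To illustrate the difference: with classic HA-LVM, the volume groups a
node may activate are listed in the volume_list line of
/etc/lvm/lvm.conf, and since lvm.conf gets copied into the initrd, every
change to that list means rebuilding it. A rough sketch following the
RHEL 5 procedure (the VG and hostname below are placeholders):

  # /etc/lvm/lvm.conf on each node
  volume_list = [ "VolGroup00", "@node1.example.com" ]

  # rebuild the initrd so the new list takes effect at boot
  mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

With CLVMD that step goes away.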

2.- Communication interruptions and split-brain: you should use some
fencing mechanism so that the second node can "kill" the main one and
take over the resources.

If the main node boots back up AND communications are still not
re-established, you will get a split-brain again, followed by another
fencing action. I usually do not set the cluster services (CMAN,
RGMANAGER, etc.) to start automatically on the nodes; I mean they must
be started and activated again by an admin.

On the other hand, you can use a quorum disk, also provided by CMAN, as
the third vote in your cluster to keep quorum. I use both approaches
at the same time (no auto-start of cluster services, plus a qdisk).
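
As a rough sketch (the device, label, timings and vote counts below are
only examples, not taken from your setup), a qdisk configuration would
look like:

  # create the quorum disk on a small shared LUN
  mkqdisk -c /dev/sdc -l myqdisk

  # /etc/cluster/cluster.conf fragment: two nodes + qdisk = 3 expected votes
  <cman expected_votes="3"/>
  <quorumd interval="2" tko="10" votes="1" label="myqdisk"/>

And to keep the cluster services under manual admin control:

  chkconfig cman off
  chkconfig clvmd off
  chkconfig rgmanager off
  chkconfig qdiskd off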

3.- Communication channels: there is no way to configure more than one
communication channel in CMAN, but you can make more than one network
interface part of that channel via bonding. I think this is actually one
of CMAN's shortcomings.
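
For example, on RHEL 5 an active-backup bond for the cluster
interconnect looks roughly like this (interface names and addressing
are placeholders):

  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=1 miimon=100

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  IPADDR=192.168.10.1
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none

CMAN/OpenAIS then simply uses bond0 as its single communication channel.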

I hope this helps. Just ask if you need anything else.

Cheers,

Rafael 

-- 
Rafael Micó Miranda



