[Linux-cluster] Running clvm on part of the cluster

Vladislav Bogdanov bubble at hoster-ok.com
Sat Jan 25 22:34:04 UTC 2014


24.01.2014 17:27, Alexander GQ Gerasiov wrote:
> Second try, because my previous mail was held for moderation, I think.
> Excuse me if there are any duplicates.
> 
> Hello there.
> 
> I need your help to solve some issues I met.
> 
> I use redhat-cluster (as part of Proxmox VE) in our virtualization
> environment.
> 
> I have several servers with SAN storage attached and CLVM managed
> volumes. In general it works.
> 
> Today I had to attach one more box to Proxmox instance and found
> blocking issue:
> 
> this node joined the cluster, the Proxmox FS started, and everything is ok with
> this host. But it does not have a SAN connection, so I didn't start CLVM
> on it. And when I try to do some LVM-related work on another host, I get
> "clvmd not running on node <node-without-clvm>"
> and locking fails.
> 
> Ok, I thought, and started CLVM on that host...
> 
> and locking still fails with 
> Error locking on node <node-without-clvm>: Volume group for uuid not
> found: <id>
> 
> 
> So my question is:
> 
> How do I handle the situation where only part of the cluster nodes have
> access to a particular LUN, but still need to run CLVM and use CLVM
> locking over it?
> 
> 

I think this is possible only with the corosync driver, which has a commit
from Christine Caulfield dated 2013-09-23
(https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=431eda63cc0ebff7c62dacb313cabcffbda6573a).

In all other cases you have to run clvmd on all cluster nodes.
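For completeness, a minimal sketch of the kind of setup I mean; the exact
paths and option values are assumptions and may differ per distribution, so
treat this as illustration only. The idea is that clvmd talks to corosync
directly and is started only on the nodes that actually see the LUN:

    # /etc/lvm/lvm.conf on the nodes with SAN access
    global {
        locking_type = 3     # use clvmd for cluster-wide LVM locking
    }

    # start clvmd against the corosync cluster interface
    clvmd -I corosync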

I may be misreading that commit, but since it I have not had any problems
putting a pacemaker node into standby (clvmd is managed as a cluster
resource), although it was hell to do that before: LVM would get stuck until
the second node in a two-node cluster came back.
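In case it helps, this is roughly what clvmd looks like as a cluster
resource here, in crm shell syntax; the resource agent name and the
monitor values below are assumptions, so check what your distribution
actually ships:

    # run clvmd as a cloned resource on the nodes that should provide it
    primitive p_clvmd ocf:heartbeat:clvm \
        op monitor interval=30s timeout=90s
    clone cl_clvmd p_clvmd meta interleave=true

With clvmd cloned like this, standby on one node stops its clvmd copy
through the cluster, which is what makes the standby case behave for me.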
