
Re: [linux-lvm] Why do lvcreate with clvmd insist on VG being available on all nodes?



On 14.11.2012 16:16, Jacek Konieczny wrote:
Hello,

I am building a system where I use clustered LVM on top of DRBD to provide
shared block devices in a cluster, and there is some behaviour that I don't
quite like and don't quite understand.

Currently I have two nodes in the cluster, running: Corosync, DLM,
clvmd, DRBD, Pacemaker and my service.

Everything works fine when both nodes are up. When I put one node into standby
with 'crm node node1 standby' (which, among other things, stops DRBD), the
other node is not fully functional.

If I leave DLM and CLVMD running on the inactive node, then:

lvchange -aey shared_vg/vol_name
lvchange -aen shared_vg/vol_name

work properly, as I would expect (they make the volume available/unavailable
on that node). But an attempt to create a new volume:

lvcreate -n new_volume -L 1M shared_vg

fails with:

Error locking on node 1: Volume group for uuid not found: Hlk5NeaVF0qhDF20RBq61EZaIj5yyUJgGyMo5AQcLfZpJS0DZUcgj7QMd3QPWICL



I haven't really tried to understand what you are trying to achieve,
but if you want a volume to be activated on only one cluster node,
you can simply use the    lvcreate -aey    option.

If you are using the default clustered operation, the failure is not
surprising: the operation is refused if other nodes are not responding.
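A minimal sketch of the suggestion above, reusing the volume name and size from the earlier example (this assumes the VG is otherwise healthy on the local node):

```shell
# Create the volume and activate it exclusively on this node only,
# instead of asking every cluster node to activate it ('e' = exclusive,
# 'y' = yes).
lvcreate -aey -n new_volume -L 1M shared_vg

# Later, release the exclusive activation on this node:
lvchange -aen shared_vg/new_volume
```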


Indeed, the VG is not available on the standby node at that moment. But
since it is not available there, I see no point in locking it there.

Well, you would need to write your own type of locking with support for
a 'standby' state; currently clvmd does not work with such a state (and it's
not quite clear to me how it should even work).
So far a node is either in the cluster or it is fenced.


Is there some real, important reason to block lvcreate in such a case?

As long as you use exclusive activation for lvcreate, it should work
(or maybe 'local' activation - just test it - but since you are trying to use
an unsupported operational mode, you have to take responsibility for the results).
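Testing the 'local' alternative mentioned above might look like this (a sketch; '-aly' requests local activation, and whether that mode behaves acceptably with clvmd is exactly what needs testing):

```shell
# 'l' = local: activate the new LV only on the node where the command
# runs, without the cluster-wide exclusive lock that '-aey' takes.
lvcreate -aly -n new_volume -L 1M shared_vg
```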



When clvmd is stopped on the inactive node and 'clvmd -S' has been run
on the active node, then both 'lvchange' and 'lvcreate' work as
expected, but that does not look like a graceful switch-over. And another
'clvmd -S' stopped clvmd altogether (this seems like a bug to me).
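The switch-over sequence described above, as a sketch (node names and the service command are placeholders that depend on the distribution; '-S' asks the running clvmd to restart itself):

```shell
# On the standby node (node1): stop clvmd there.
ssh node1 'service clvmd stop'   # exact command depends on the distro

# On the remaining active node: tell the running clvmd to restart
# itself so it picks up the reduced cluster membership.
clvmd -S
```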

And one more thing bothers me… my system should scale to many
nodes, where only two share the active storage (when using DRBD). But this
won't work if LVM refuses some operations whenever a VG is not
available on all nodes.

Obviously, using a clustered VG in a non-clustered environment isn't a smart plan.
What you could do is disable clustering support on the VG:

vgchange -cn <vg> --config 'global {locking_type = 0}'


Note: you can always work around any locking problem with the above config
option - just do not then report problems with broken disk content and badly
activated volumes on cluster nodes.
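Putting the workaround above together (a sketch; the VG name is taken from the earlier example, and after the clustered flag is cleared the VG is managed with plain local locking):

```shell
# Clear the clustered flag on the VG while bypassing locking entirely
# for this one command (locking_type = 0 disables all locking).
vgchange -cn shared_vg --config 'global {locking_type = 0}'

# From now on the VG uses ordinary local locking; creation and
# activation no longer consult other cluster nodes.
lvcreate -n new_volume -L 1M shared_vg
lvchange -ay shared_vg/new_volume
```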

Zdenek

