[Linux-cluster] vm.sh with and without virsh

Edson Marquezani Filho edsonmarquezani at gmail.com
Mon Oct 5 19:43:01 UTC 2009


On Mon, Oct 5, 2009 at 11:43, brem belguebli <brem.belguebli at gmail.com> wrote:
> Hi,
>
> To give an example of setup that did "surprisingly" work like a charm
> out of the box  (RHEL 5.4 KVM)
>
> -  3-node cluster (RHEL 5.4 x86_64)
> -  2 x 50 GB SAN LUNs, partitioned (p1 = 100 MB, p2 = 49.9 GB):
>    /dev/mpath/mpath4 (mpath4p1, mpath4p2)
>    /dev/mpath/mpath5 (mpath5p1, mpath5p2)
> -  3 mirrored LVs (lvolVM1, lvolVM2 and lvolVM3) on mpath4p2/mpath5p2,
>    with mpath4p1 as the mirror log

I'm not familiar with this mirroring feature. How does it work, and why
do you use it?
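
Looking at the lvcreate man page, I imagine the LVs were created with
something like this? (just my guess at the syntax; the size is made up,
and I understand the last PV in the list is the one holding the mirror
log)

  lvcreate -L 15G -m 1 --mirrorlog disk -n lvolVM1 VMVG \
      /dev/mpath/mpath4p2 /dev/mpath/mpath5p2 /dev/mpath/mpath4p1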

> - cmirror to maintain the mirror log across the cluster
> LVs are activated "shared", i.e. active on all nodes; no exclusive
> activation is used.
>
> Each VM uses an LV as its virtual disk device (VM XML conf file):
>
>  <disk type='block' device='disk'>
>      <source dev='/dev/VMVG/lvolVM1'/> <-- for VM1
>
> Each VM being defined in the cluster.conf with no hierarchical
> dependency on anything:
>
> <rm>
>     <vm autostart="0" name="testVM1" recovery="restart" use_virsh="1"/>
>     <vm autostart="0" name="testVM2" recovery="restart" use_virsh="1"/>
>     <vm autostart="0" name="testVM3" recovery="restart" use_virsh="1"/>
> </rm>
>
> Failover and live migration work fine
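
Just to be sure I follow: you drive failover and migration through
clusvcadm, something like this? (node names are just examples)

  clusvcadm -M vm:testVM1 -m node2   # live migration to node2
  clusvcadm -r vm:testVM1 -m node3   # plain relocate (stop/start)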

I thought that live migration without any access control on the LVs
would cause file system corruption. But I guess that, even without
exclusive activation, I should still use CLVM, shouldn't I?
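
(And CLVM here means locking_type = 3 in /etc/lvm/lvm.conf plus clvmd
running on every node, if I understand it right?)

  # /etc/lvm/lvm.conf, on all nodes
  locking_type = 3   # cluster-wide LVM locking through clvmd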

> VMs must be defined on all nodes (after creation on one node, copy
> the VM XML conf file to the other nodes and issue a virsh define
> /Path/to/the/xml file)
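
In other words, something like this, I suppose? (the hostname is just
an example)

  virsh dumpxml testVM1 > /tmp/testVM1.xml   # on the node it was created on
  scp /tmp/testVM1.xml node2:/tmp/
  ssh node2 virsh define /tmp/testVM1.xml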

I'm not using virsh because I only ever learned the old-school way of
controlling VMs with xm. By the time I found out about virsh, I had
already modified the config files manually.
Would it be better to recreate all of them using the libvirt
infrastructure?
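
If so, maybe I could convert my hand-written xm config files instead of
redoing them, with something like this? (assuming domxml-from-native
and its xen-xm format are available in my libvirt version)

  virsh domxml-from-native xen-xm /etc/xen/vm01 > /tmp/vm01.xml
  virsh define /tmp/vm01.xml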

> The only thing that may look unsafe is the fact that the LVs are
> active on all the nodes; a problem could happen if someone manually
> started a VM on one node while it was already active on another.

That's the point that made me ask for help here some time ago, and it's
what concerns me most.
Rafael Miranda told me about his lvm-cluster resource script. So I
developed a simple script that performs start, stop, and status
operations. For stop, it saves the VM to a state file. For start, it
either restores the VM if there is a state file for it, or creates it
if there is not. Status just returns success if the VM appears in xm
list, or failure if not. The state files are kept in a GFS directory
mounted on both nodes.
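
Roughly like this (a simplified sketch; the state directory is just an
example, and I keep one copy of the script per VM):

  #!/bin/bash
  # start/stop/status wrapper around xm save/restore for one VM
  VM=testVM1                          # matches /etc/xen/testVM1
  STATEFILE=/gfs/vmstate/$VM.state    # on the shared GFS mount

  case "$1" in
  start)
      if [ -f "$STATEFILE" ]; then
          xm restore "$STATEFILE" && rm -f "$STATEFILE"
      else
          xm create "$VM"
      fi
      ;;
  stop)
      xm save "$VM" "$STATEFILE"
      ;;
  status)
      # succeeds only if the domain shows up in xm list
      xm list "$VM" >/dev/null 2>&1
      ;;
  *)
      exit 1
      ;;
  esac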

Then, I configure each VM as a service, with its lvm-cluster and
script resources.
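
In cluster.conf it ends up like this (names are made up, I'm treating
both as plain script resources for simplicity, and I nest the VM script
under the lvm-cluster one so the LV is activated first):

  <service autostart="1" name="testVM1-svc" recovery="relocate">
      <script file="/etc/cluster/lvm-cluster.sh" name="testVM1-lv">
          <script file="/etc/cluster/vm-ctl.sh" name="testVM1-ctl"/>
      </script>
  </service>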

So, relocating a "VM service" looks like a semi-live migration, if I
can call it that. =) It saves the VM to the shared directory and
restores it on the other node a moment later, without resetting it.
From the guest's point of view, it just looks as if it had stopped for
a little while and come back.

But now I'm wondering if I have reinvented the wheel. =)

> I'll try the setup with exclusive activation and check if live
> migration still works (I doubt that).
>
> Brem
>

What do you think about this?

Thank you.



