[Linux-cluster] unable to live migrate a vm in rh el 6: Migration unexpectedly failed

Gianluca Cecchi gianluca.cecchi at gmail.com
Thu Mar 10 15:18:37 UTC 2011


On Wed, Mar 9, 2011 at 9:47 AM, Gianluca Cecchi
<gianluca.cecchi at gmail.com> wrote:
[snip]
> Or perhaps something related to the firewall.
> Can I stop the firewall entirely and still have libvirtd working,
> just to test...?
> I know libvirtd adds some iptables rules itself...
>
> Gianluca
>

OK, it was indeed a problem related to iptables rules.
After adding the following rule on both nodes for the intracluster
network TCP ports (with .31 as the source address on the other node),
live migration works fine using the clusvcadm command:

iptables -t filter -I INPUT 17 -s 192.168.16.32/32 -p tcp -m multiport
--dports 49152:49215 -j ACCEPT
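
For the record, the corresponding rule on the other node is the same
but with .31 as the source, something like this (a sketch; the insert
position 17 is just what fits my ruleset and will likely differ):

iptables -t filter -I INPUT 17 -s 192.168.16.31/32 -p tcp -m multiport
--dports 49152:49215 -j ACCEPT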

I'm going to put it in /etc/sysconfig/iptables between these two lines:
-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
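
So the relevant piece of the saved file should end up roughly like
this (a sketch of what I have in mind; in the saved-ruleset syntax the
new line only needs to appear before the final REJECT, so I'm using -A
instead of an insert position):

-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
-A INPUT -s 192.168.16.32/32 -p tcp -m multiport --dports 49152:49215 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited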

I can also simulate the clusvcadm command with virsh (after freezing
the resource):
virsh migrate --live exorapr1 qemu+ssh://intrarhev2/system tcp:intrarhev2

Otherwise the ssh connection is tunneled through the hostname in the
connection string, but the data exchange happens over the public lan
anyway (or over whatever that hostname resolves to, I suppose);
specifying tcp:intrarhev2 as the migration URI is what keeps the data
on the intracluster network.
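
So the whole manual sequence I'm using is something like this (a
sketch; vm:exorapr1 and intrarhev2 are my resource name and the other
node's intracluster hostname):

# freeze the resource so rgmanager leaves the vm alone
clusvcadm -Z vm:exorapr1
# migrate over the intracluster network (the tcp: URI keeps the data
# off the public lan)
virsh migrate --live exorapr1 qemu+ssh://intrarhev2/system tcp:intrarhev2
# unfreeze once the migration has completed
clusvcadm -U vm:exorapr1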

BTW: I noticed that when you freeze a vm resource you don't get the
[Z] flag at the right side of the corresponding clustat line, as
happens with standard services...
Is this intentional, or should I file a bugzilla for it?
For a service, when frozen:
 service:MYSRV                  intrarhev2                     started         [Z]
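
That example comes from freezing a plain service the same way, e.g.
something like:

clusvcadm -Z service:MYSRV
clustat | grep MYSRV

while for the vm resource: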

[root at rhev2 ]# clusvcadm -Z vm:exorapr1
Local machine freezing vm:exorapr1...Success

[root at rhev2 ]# clustat | grep orapr1
 vm:exorapr1                    intrarhev1                     started

Cheers,
Gianluca
