
[fedora-virt] fence_xvm and qemu/kvm problem



Hello,
I'm testing this:
host = Fedora 11 x86_64 using Qemu/Kvm as virtualization
specs on sw versions:
libvirt-0.6.2-12.fc11.x86_64
qemu-common-0.10.5-3.fc11.x86_64
qemu-kvm-0.10.5-3.fc11.x86_64
qemu-system-x86-0.10.5-3.fc11.x86_64
qemu-img-0.10.5-3.fc11.x86_64
cman-3.0.0-15.rc1.fc11.x86_64

two guests = CentOS 5.3 x86_64 with Cluster Suite installed and
configured with nearly the latest packages
specs on sw versions:
cman-2.0.98-1.el5_3.1
openais-0.80.3-22.el5_3.4
rgmanager-2.0.46-1.el5.centos.3
kernel-2.6.18-128.1.10.el5

the cluster itself works well in preliminary tests (I have a quorum
disk with one heuristic pinging the qemu/kvm host as gateway):
poweroff and automatic relocation, manual relocation with clusvcadm, HA-LVM, etc.

I noticed one problem, though, that in my opinion could lead to data
corruption or other unintended effects.
I configured fence_xvm agent on the guests.
At the moment the host is a standalone one.
I read in bugzilla (https://bugzilla.redhat.com/show_bug.cgi?id=362351)
that I can manage this configuration by running, on the host side, the
standalone command
fence_xvmd -LX

(installing cman and dependencies, but actually not running it as a service)

My guests use virbr0 as the production LAN and virbr1 as the
intracluster one, and I noticed that I actually had to issue this
command to keep the daemon alive on the host side:
fence_xvmd -dLX -i ipv4 -I virbr0 -U qemu:///system
(the key part is probably the -I virbr0 option)
Otherwise I get:
Jul 15 11:54:04 virtfed xvmd[606]: Could not set up multicast listen socket
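
To illustrate what -I changes: below is a small Python sketch of a
multicast listen socket like (I assume) the one the error above refers
to. The address 225.0.0.12 and port 1229 are what I understand the
fence_xvm defaults to be, so treat them as assumptions, and this is of
course not the actual fence_xvmd code:

```python
import socket
import struct

# Assumed fence_xvm defaults (not copied from the man page):
MCAST_ADDR = "225.0.0.12"
MCAST_PORT = 1229

def membership_request(mcast_addr, iface_addr):
    # Pack an ip_mreq structure: multicast group address plus the
    # local interface address to join it on. Choosing a specific
    # interface address here is effectively what "-I virbr0" selects;
    # with INADDR_ANY ("0.0.0.0") the kernel picks an interface, which
    # can be the wrong bridge on a multi-bridge host.
    return struct.pack("4s4s",
                       socket.inet_aton(mcast_addr),
                       socket.inet_aton(iface_addr))

def open_mcast_listener(iface_addr="0.0.0.0"):
    # A failed bind or group join here would produce the same kind of
    # "Could not set up multicast listen socket" condition.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", MCAST_PORT))
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                 membership_request(MCAST_ADDR, iface_addr))
    return s
```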

When configuring the fence daemon, at first I got the "domain="
attribute wrong:

                <clusternode name="node1" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device domain="XXXXXX" name="xvm"/>
                                </method>
                        </fence>
                </clusternode>

In fact, initially I put in the cluster node name (node1) instead of
the domain name of the guest inside the qemu/kvm hypervisor.
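
For reference, the corrected fragment (assuming, as in my case, that
centos53 is the libvirt domain name of node1's guest) would be:

                <clusternode name="node1" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device domain="centos53" name="xvm"/>
                                </method>
                        </fence>
                </clusternode>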

This error gave me the opportunity to uncover incorrect behaviour in
this combination.

If I put node name and issue this command from node2:
fence_xvm -H node1

I get:
Remote: operation was successful

but actually the guest is not fenced at all.

The qemu/kvm host /var/log/messages contains:
Jul 15 12:17:21 virtfed libvirtd: 12:17:21.566: error : Domain not found
Jul 15 12:17:21 virtfed xvmd[658]: Rebooting domain node1...
Jul 15 12:17:21 virtfed libvirtd: 12:17:21.574: error : Domain not found
Jul 15 12:17:21 virtfed xvmd[658]: Failed to connect to caller: Bad
file descriptor
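
The dangerous part, to me, is that "operation was successful" comes
back even though libvirtd reports "Domain not found". To make the
point explicit, here is a minimal sketch (hypothetical names, not the
actual fence_xvmd code) of the behaviour I would expect instead:

```python
class FenceError(Exception):
    """Raised when a fence request cannot be confirmed."""

def fence_domain(domains, name):
    # Hypothetical sketch: a lookup failure must be reported as a
    # fencing failure. Returning success for an unknown domain lets
    # the cluster assume the node is down while it may still be
    # running and writing to shared storage.
    if name not in domains:
        raise FenceError("domain %s not found" % name)
    # Otherwise perform the default action (reboot the guest).
    domains[name] = "rebooting"
    return True
```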

Instead, when using the correct domain name:
fence_xvm -H centos53
I again get:
Remote: operation was successful
and this time the guest is really fenced (rebooted by default).

In the qemu/kvm host logs I now get:
Jul 15 12:18:31 virtfed xvmd[658]: Rebooting domain centos53...
Jul 15 12:18:31 virtfed kernel: virbr0: port 2(vnet2) entering disabled state
Jul 15 12:18:31 virtfed kernel: device vnet2 left promiscuous mode
Jul 15 12:18:31 virtfed kernel: virbr0: port 2(vnet2) entering disabled state
Jul 15 12:18:31 virtfed kernel: virbr1: port 2(vnet3) entering disabled state
Jul 15 12:18:31 virtfed kernel: device vnet3 left promiscuous mode
Jul 15 12:18:31 virtfed kernel: virbr1: port 2(vnet3) entering disabled state
Jul 15 12:18:32 virtfed libvirtd: 12:18:32.245: error : operation
failed: domain 'centos53' is already defined
Jul 15 12:18:32 virtfed kernel: device vnet2 entered promiscuous mode
Jul 15 12:18:32 virtfed kernel: virbr0: topology change detected, propagating
Jul 15 12:18:32 virtfed kernel: virbr0: port 2(vnet2) entering forwarding state
Jul 15 12:18:32 virtfed kernel: device vnet3 entered promiscuous mode
Jul 15 12:18:32 virtfed kernel: virbr1: topology change detected, propagating
Jul 15 12:18:32 virtfed kernel: virbr1: port 2(vnet3) entering forwarding state
Jul 15 12:18:33 virtfed xvmd[658]: Failed to connect to caller: Bad
file descriptor

Is my configuration (F11 host with the cman 3.0 branch and CentOS 5.3
guests with cman 2.0.98) supposed to work?
If not, against which component should I file a bugzilla?

Thanks in advance,
Gianluca

