
Re: [libvirt] Qemu/KVM is 3x slower under libvirt

On 09/28/11 09:51, Daniel P. Berrange wrote:
On Tue, Sep 27, 2011 at 08:10:21PM +0200, Reeted wrote:
I repost this, this time by also including the libvirt mailing list.

Info on my libvirt: it's the version shipped in Ubuntu 11.04 Natty, which is
0.8.8-1ubuntu6.5. I didn't recompile this one, while the kernel and
qemu-kvm are vanilla and compiled by hand as described below.

My original message follows:

This is really strange.

I just installed a new host with kernel 3.0.3 and Qemu-KVM 0.14.1
compiled by me.

I have created the first VM.
It is on LVM, with virtio etc... If I run it directly from the bash
console, it boots in 8 seconds (it's a bare Ubuntu install with no
graphics), while if I boot it under virsh (libvirt) it boots in
20-22 seconds. This is the time from after GRUB to the login prompt,
or from after GRUB to the SSH server being up.

I was almost able to replicate the whole libvirt command line on the
bash console, and it still goes almost 3x faster when launched from
bash than with virsh start vmname. The part I wasn't able to
replicate is the -netdev part because I still haven't understood the
semantics of it.
-netdev is just an alternative way of setting up networking that
avoids QEMU's nasty VLAN concept. Using -netdev allows QEMU to
use more efficient codepaths for networking, which should improve
the network performance.
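For illustration, a minimal sketch of the two styles (the interface name and MAC
address below are placeholders, not taken from this VM's config):

  # legacy syntax: NIC and tap backend both join QEMU "VLAN" 0, an emulated hub
  -net nic,vlan=0,model=virtio,macaddr=52:54:00:12:34:56 \
  -net tap,vlan=0,ifname=tap0,script=no,downscript=no

  # -netdev syntax: the host backend is wired directly to one NIC device
  -netdev tap,id=hostnet0,ifname=tap0,script=no,downscript=no \
  -device virtio-net-pci,netdev=hostnet0,mac=52:54:00:12:34:56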

This is my bash commandline:

/opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
-m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
-boot order=dc,menu=on -drive file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
-drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
-usb -vnc -vga cirrus -device

This shows KVM is being requested, but we should validate that KVM is
definitely being activated when under libvirt. You can test this by

     virsh qemu-monitor-command vmname1 'info kvm'

kvm support: enabled

I think I would see a bigger impact if KVM were not enabled.
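Another quick cross-check on the host (generic, not specific to this setup) is
to confirm that the KVM module is loaded and /dev/kvm is accessible to the qemu
process:

  ls -l /dev/kvm
  lsmod | grep kvm    # expect kvm plus kvm_intel or kvm_amd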

That was taken from libvirt's command line. The only modifications
I made to the original libvirt command line (seen with ps aux) were:

- Removed -S
Fine, has no effect on performance.

- Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
Has been simplified to: -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
and manual bridging of the tap0 interface.
You could have equivalently used

  -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3

It's this! It's this!! (thanks for the line)

It raises boot time by 10-13 seconds
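Note that one obvious difference between the fast and slow lines is vhost=on. A
way to isolate that later (untried here, and the bridge name is a placeholder)
would be to keep libvirt's -netdev setup but force the userspace backend instead
of vhost in the domain XML:

  <interface type='bridge'>
    <source bridge='br0'/>
    <model type='virtio'/>
    <driver name='qemu'/>  <!-- 'vhost' selects the in-kernel backend -->
  </interface>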

But now I don't know where to look. During boot there is usually a pause between /scripts/init-bottom (Ubuntu 11.04 guest) and the appearance of the login prompt, but that is not very meaningful by itself: there is probably a lot of background activity going on there, with init etc., which doesn't display messages.

init-bottom does just this

#!/bin/sh -e
# initramfs init-bottom script for udev

PREREQ=""

# Output pre-requisites
prereqs()
{
        echo "$PREREQ"
}

case "$1" in
    prereqs)
        prereqs
        exit 0
        ;;
esac

# Stop udevd, we'll miss a few events while we run init, but we catch up
pkill udevd

# Move /dev to the real filesystem
mount -n -o move /dev ${rootmnt}/dev
It doesn't look like it should take long to execute.
So there is probably some other background activity going on that is slower, but I don't know what it is.
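One crude way to see where that time goes (just an idea, not something tried
here) would be to have the boot scripts log markers with a timestamp into the
kernel log, e.g. from /etc/rc.local:

  #!/bin/sh
  # record seconds since boot next to a marker we can later grep in dmesg
  echo "boot-marker: rc.local reached at $(cut -d' ' -f1 /proc/uptime)s" > /dev/kmsg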

Another thing worth noting is that the dmesg message:

[   13.290173] eth0: no IPv6 routers present

(which is also the last message)

happens on average one second earlier in the fast case (-net) than in the slow case (-netdev).
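To compare the two cases more systematically, one could capture dmesg after a
boot of each kind and diff the timestamps (file names are arbitrary):

  dmesg > /tmp/dmesg-net.txt      # after a boot launched with -net
  dmesg > /tmp/dmesg-netdev.txt   # after a boot launched with -netdev
  diff -u /tmp/dmesg-net.txt /tmp/dmesg-netdev.txt | less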

That said, I don't expect this has anything to do with the performance
since booting a guest rarely involves much network I/O unless you're
doing something odd like NFS-root / iSCSI-root.

No, there is nothing like that. No network disks or NFS.

I had ntpdate, but I removed it and that changed nothing.

At first I thought this could be the fault of VNC: I had
compiled qemu-kvm with no separate VNC thread. I thought that
libvirt might be connected to the VNC server at all times and that this
could have slowed down the whole VM.
But then I also tried connecting with vncviewer to the KVM machine
launched directly from bash, and its speed didn't change. So
no, it doesn't seem to be that.
Yeah, I have never seen VNC be responsible for the kind of slowdown
you describe.

No, it's not that; I am now using SDL and the command line in both cases (fast and slow).

BTW: is the slowdown of the VM on "no separate vnc thread" only in
effect when somebody is actually connected to VNC, or always?
Probably, but again I don't think it is likely to be relevant here.

"Probably" always, or "probably" only when somebody is connected?

Also, note that the time difference is not visible in dmesg once the
machine has booted. So it's not a slowdown in detecting devices.
Devices are always detected within the first 3 seconds according to
dmesg, and at 3.6 seconds the first ext4 mount begins. It really seems
to be the OS boot that is slow... it looks like a hard-disk performance issue.
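If it really were disk throughput, a quick raw sequential read inside the guest
under both launch methods should show a difference (a rough check; the device
name and size here are placeholders):

  dd if=/dev/vda of=/dev/null bs=1M count=1024 iflag=direct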

There are a couple of things that would be different between running the
VM directly, vs via libvirt.

  - Security drivers - SELinux/AppArmor

No SELinux on the host or guests.

  - CGroups

If it was AppArmor causing this slowdown I don't think you would have
been the first person to complain, so let's ignore that. Which leaves
cgroups as a likely culprit. Do a

   grep cgroup /proc/mounts

No cgroups mounted on the host

If any of them are mounted, then for each cgroup mount in turn (a rough sketch of the loop follows the list):

  - Umount the cgroup
  - Restart libvirtd
  - Test your guest boot performance
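A rough sketch of that loop (the service name is assumed to be libvirt-bin on
Ubuntu; adjust as needed):

  for m in $(grep cgroup /proc/mounts | awk '{print $2}'); do
      umount "$m"                      # drop this cgroup hierarchy
      service libvirt-bin restart      # restart libvirtd without it
      # ...start the guest here and time its boot...
  done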

Thanks for your help!

Do you have an idea of what to test now?
