[vfio-users] KVM options and the effect on performance

Okky Hendriansyah okky at nostratech.com
Sat Nov 14 04:18:09 UTC 2015


Hi Ryan,



On October 29, 2015 at 23:51:52, Ryan Flagler (ryan.flagler at gmail.com) wrote:

Hey everyone, sorry if I'm doing this wrong, this is my first time using a mailing list. (Side note, if anyone has a better way to view historical emails than the web page, please let me know)

There are archives of this list that you can view at [1].
Please use bottom-posting to improve the readability of the mailing list.

Which chipset emulation performs better and in what areas?
q35 vs i440fx

As far as I know, Q35 emulates a more recent PC architecture, while i440FX is the more mature option.
My general rule of thumb is to use i440FX by default and switch to Q35 whenever i440FX doesn't support what the VM needs.
Virtualizing a Mac OS X guest, for example, requires the Q35 chipset.
I’ve tried Windows 10 on both i440FX and Q35 but see no significant performance difference, if any exists.
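For reference, the chipset is selected with QEMU's -machine (or -M) option; these are the standard machine type aliases:

-M pc    (i440FX, the default)
-M q35   (Q35/ICH9)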

What is the best way to pass a disk through to a VM to get the most performance?
.img file, /dev/sd[x] disk, virtio-scsi, etc.

It depends on how you want to manage the disk drives/images. A raw image (.img) tends to perform better than a QCOW/QCOW2 image, but QCOW2 can do snapshots. Passing through a real disk (/dev/sd[x]) should perform the same as native, but it loses the flexibility to do migration. The benefit of using a real disk is that you can also boot the system natively from it, though you may need to reinstall drivers when switching.
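Just as an illustration (the file name and device path below are placeholders), the corresponding -drive lines look roughly like this:

-drive file=/path/to/guest.img,format=raw,if=virtio,cache=none    (raw image file)
-drive file=/dev/sdX,format=raw,if=virtio,cache=none              (whole physical disk)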

Another option is to use a raw image on top of ZFS (or Btrfs). That combines raw image performance with the underlying storage pool's snapshots, compression, clones, thin provisioning, etc. This is currently my approach on the storage side: I have 8 x 1 TB disks configured as a ZFS striped mirror pool (RAID10), and on top of it I place a dedicated ZFS dataset for each VM.
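As a rough sketch of that layout (the pool, dataset, and disk names are made up), the pool and a per-VM dataset could be created like this:

zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh   # striped mirrors (RAID10)
zfs create -o compression=lz4 tank/vm-win10                                     # one dataset per VM
qemu-img create -f raw /tank/vm-win10/disk.img 100G                             # raw image on the dataset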

What is the best way to handle networking?
virtio-nic, hardware passthrough, bridge, nat, etc.

I think bridge vs. NAT is really about the network topology you want to expose, not about performance. I prefer my VMs to be first-class citizens on my home network, so I always choose bridged networking. I’ll answer the virtio-net part below your last post.
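To illustrate the difference (br0 is assumed to be an existing host bridge, and the bridged variant relies on qemu-bridge-helper being set up):

-netdev user,id=net0 -device virtio-net-pci,netdev=net0           (NAT / user mode)
-netdev bridge,br=br0,id=net0 -device virtio-net-pci,netdev=net0  (bridged)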

What is the best way to assign CPUs?
cpu pinning, assigning host cpu parameters, etc.

I can’t comment much on CPU pinning, since I’m the only user of my host and I don’t see a major performance breakdown on either the host or the VM when the VM uses all my cores. If you’re using an NVIDIA GPU, the best CPU parameter should be:

-cpu host,kvm=off,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=12CHARID

That basically uses the exact CPU spec of the host, hides the KVM CPUID from the NVIDIA driver, and applies several Hyper-V enlightenments for performance. Multiplayer games (like TERA) seem to benefit more from these Hyper-V flags. The last Hyper-V flag (hv_vendor_id) needs a very recent QEMU built from the Git repository, or Alex’s patch from [2]. I think you can skip the hv_vendor_id flag if you’re using an AMD GPU.

Does the BIOS have an effect on performance?
seabios vs OVMF?

As far as I know, the major difference between SeaBIOS and OVMF is BIOS (legacy) vs. UEFI (legacy-free). The major downside of SeaBIOS, if you use Intel graphics for the KVM host, is VGA arbitration. See Alex’s explanation at [3].
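If you go with OVMF, the firmware is typically attached as two pflash drives; the paths below depend on your distribution and are just an example:

-drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd   (firmware; distro-dependent path)
-drive if=pflash,format=raw,file=/path/to/my-vm_VARS.fd                     (per-VM NVRAM variables; placeholder path)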

CPU/Chipset IOMMU support - Not necessarily performance related, but stability?
e5 vs e3 vs i7 vs cpu architecture etc. What things are good to look for, what are bad? Etc.

Alex summarized this on [4].

On November 14, 2015 at 08:06:00, Ryan Flagler (ryan.flagler at gmail.com) wrote:

The 2nd command works, however passing the mac address does not. It always defaults to the generic mac. Am I doing something wrong? If I try to use macaddr as the parameter I get an error that it isn't valid. Does anyone know if there is even a functional difference between these two?
I used the Python script qemu-mac-hasher from [5] to generate a consistent MAC address based on the name of the VM. Better performance than plain virtio-net can be had by enabling vhost [6]. This is my current line for enabling the virtual NIC of my VM:

-netdev tap,vhost=on,id=brlan -device virtio-net-pci,mac=$(/usr/local/bin/qemu-mac-hasher $VM_NAME),netdev=brlan
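The script from [5] basically just hashes the VM name into a locally administered MAC address. A rough shell equivalent of the idea (not the exact script from the wiki) would be:

#!/bin/bash
# Hash the VM name and map it into a stable, locally administered MAC.
vm_name="$1"
hash=$(printf '%s' "$vm_name" | md5sum | cut -c1-8)
printf '52:54:%s:%s:%s:%s\n' "${hash:0:2}" "${hash:2:2}" "${hash:4:2}" "${hash:6:2}"

This way the same VM name always maps to the same MAC, so the guest keeps its DHCP lease across restarts.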

Lastly, my system has 4 NIC cards. Would it be faster or more efficient to pass through the NIC itself? From what I read, the 2nd option above should be just as fast as a physical NIC.
A passed-through physical NIC should have better performance and less CPU spent on emulation. Since you have 4 NICs, I think you’d be better off passing one through to the VM.
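Assuming the NIC is already bound to vfio-pci (the PCI address here is just an example), the passthrough itself is a single device option:

-device vfio-pci,host=03:00.0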



Please correct me if I’m wrong.

[1] https://www.redhat.com/archives/vfio-users
[2] http://www.spinics.net/lists/kvm/msg121742.html
[3] http://vfio.blogspot.co.id/2014/08/whats-deal-with-vga-arbitration.html
[4] http://vfio.blogspot.com/2015/10/intel-processors-with-acs-support.html
[5] https://wiki.archlinux.org/index.php/QEMU
[6] http://www.linux-kvm.org/page/UsingVhost


Best regards,

-- 
Okky Hendriansyah