[vfio-users] KVM options and the effect on performance

Dan Ziemba zman0900 at gmail.com
Fri Oct 30 04:14:49 UTC 2015


I'll also throw in the scripting I used to use before switching to
libvirt a month or so back.  The latest version uses the default
i440FX machine, but if you look back through the history you can see
how I was using q35 before.

https://github.com/zman0900/qemu-vifo
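
For anyone wiring this up by hand rather than through libvirt, the
machine type in a raw QEMU invocation is selected with the -machine
(or -M) option; a minimal sketch, where everything beyond the machine
type is a placeholder:

```shell
# i440FX-based machine ("pc" is the default alias)
qemu-system-x86_64 -machine pc,accel=kvm -m 8G ...

# Q35/ICH9-based machine
qemu-system-x86_64 -machine q35,accel=kvm -m 8G ...
```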

-----Original Message-----
From: Mark Weiman <mark.weiman at markzz.com>
To: Ryan Flagler <ryan.flagler at gmail.com>, vfio-users at redhat.com
Subject: Re: [vfio-users] KVM options and the effect on performance
Date: Fri, 30 Oct 2015 00:00:21 -0400

To be honest, I have found little to no real noticeable difference
between many of them.  My VMs usually use a qcow2 image that is
attached via virtio.
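
As a rough sketch of that kind of setup (the image path and size here
are placeholders, not Mark's actual configuration):

```shell
# Create a qcow2 image to back the VM (path and size are placeholders)
qemu-img create -f qcow2 /var/lib/vms/win10.qcow2 60G

# Attach it as a virtio disk when starting the VM
qemu-system-x86_64 \
    -enable-kvm \
    -m 8G \
    -drive file=/var/lib/vms/win10.qcow2,format=qcow2,if=virtio
```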

As for a CPU, I use an i7-4790K and it works beautifully.  It really
comes down to whether you did your research before choosing your
hardware, so you can have a good time rather than fight it.  If I were
building my main rig again, I would have looked more closely at the
motherboard so I wouldn't have to patch my kernel, although that
really is not a problem with this CPU (I also provide a slightly
modified version of Dan Ziemba's PKGBUILD [1] that includes the i915
and ACS patches, available from my Arch Linux repository [2]).
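
For reference, the ACS override patch mentioned here is typically
enabled through a kernel command-line parameter; a sketch of a GRUB
config fragment (the option only exists on kernels carrying the patch,
and the right value depends on your PCIe topology):

```shell
# /etc/default/grub fragment (assumes a patched kernel such as
# linux-vfio; pcie_acs_override is not in mainline kernels)
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on pcie_acs_override=downstream"
```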

As for your Wiki idea: since this is all open-source software, there
is nothing preventing you from contributing documentation yourself.
The Arch Linux Wiki provides a lot of information on how to do all of
this [3] and can guide you even if you aren't using Arch Linux; just
change the Arch-specific bits to match whatever distribution you use.

If it helps, this is the script I use when I run my Windows 10 VM
[4].  It's really sloppy, but it seems to work for me.

As for using btrfs to store images: I keep my images on it and have
had no issues.  It should just be pointed out that btrfs is still
under heavy development, so take that into consideration.
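
A widely used precaution when keeping VM images on btrfs is to disable
copy-on-write for the image directory, since random writes into large
files fragment badly under CoW; a sketch, with a placeholder path:

```shell
# The +C (nodatacow) attribute only takes effect for files created
# after it is set on the directory
mkdir -p /var/lib/vms
chattr +C /var/lib/vms
qemu-img create -f raw /var/lib/vms/win10.img 60G
```

Note that nodatacow also disables btrfs checksumming and compression
for those files, which is a trade-off worth knowing about.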

Didn't want to leave ya hangin',
Mark Weiman

[1] https://aur.archlinux.org/packages/linux-vfio-lts
[2] https://wiki.archlinux.org/index.php/Unofficial_user_repositories#markzz
[3] https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
[4] http://info.markzz.com/kvm-start.sh

On Thu, 2015-10-29 at 16:50 +0000, Ryan Flagler wrote:
> Hey everyone, sorry if I'm doing this wrong; this is my first time
> using a mailing list. (Side note: if anyone has a better way to view
> historical emails than the web archive page, please let me know.)
> 
> I've been tinkering with KVM for a bit on my system and had some
> general performance questions. I see a lot of people doing VGA
> passthrough using the q35 chipset instead of the i440FX chipset. I've
> personally had no luck getting q35 to be stable, and I've seen some
> people say it's not worth the headache. But the big question for me:
> is there a performance difference with CPU, VGA, memory, etc. under
> q35? I'm not looking for specifics, but I'm curious about the
> following qemu parameters.
> 
> Which chipset emulation performs better and in what areas?
> q35 vs i440fx
> 
> What is the best way to pass a disk through to a VM to get the most
> performance?
> .img file, /dev/sd[x] disk, virtio-scsi, etc.
> 
> What is the best way to handle networking?
> virtio-nic, hardware passthrough, bridge, nat, etc.
> 
> What is the best way to assign CPUs?
> cpu pinning, assigning host cpu parameters, etc.
> 
> Does the BIOS have an effect on performance?
> seabios vs OVMF?
> 
> CPU/Chipset IOMMU support - Not necessarily performance related, but
> stability?
> e5 vs e3 vs i7 vs cpu architecture etc. What things are good to look
> for, what are bad? Etc.
> 
> What would be interesting, especially as a new KVM/Qemu user, would
> be to see an entire wiki/performance page with examples and
> specifics. It's hard to filter through all the various pages of VM
> options where people don't really explain why they're doing something
> the way they are.
> 
> Examples:
> 
> Disk Options
> Best
> -device virtio-scsi-pci,id=scsi
> -drive file=/dev/sd[x],id=disk,format=raw,if=none -device scsi-hd,drive=disk
> 
> Better
> -device virtio-scsi-pci,id=scsi
> -drive file=/opt/[vm_name].img,id=disk,format=raw -device scsi-hd,drive=disk
> 
> Good
> -drive file=/opt/[vm_name].img,id=disk,format=raw -device ide-hd,bus=ide.0,drive=disk
> 
> And maybe an overall explanation of why one is better than the
> other. I know this may not exist, and I'm not asking a single person
> to do the legwork, but being new to this, it's hard to focus on the
> pieces that matter versus just using the first thing I find that
> works. If there is a "right" place to start something like this, I'd
> be happy to set up a generic page where more experienced people could
> easily contribute.
> 
> Thanks - Ryan
>      
> 
_______________________________________________
vfio-users mailing list
vfio-users at redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users

