
Re: [vfio-users] [OVMF+VFIO+QEMU] Large VM RAM allocation = Long Boot Times

On 15 Nov 2018, at 06:59, A Z <adam zegelin com> wrote:

This is an issue that involves a combination of different software packages, so my apologies in advance if this is the wrong list to post on.

I'm experiencing terrible boot times when I assign a large amount of RAM to a VM that also uses VFIO PCI passthrough.

On a VM with an Nvidia GTX 970 + USB controller passed through and 24GiB of RAM assigned, the time to reach the TianoCore splash screen is ~5 minutes. It's then ~30 seconds more before Windows 10 begins to boot (spinning dots). During this time, the QEMU vCPU threads are 100% busy.
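For context, a setup like the one described above is typically launched along these lines. This is a minimal sketch only: the PCI addresses, SMP count, and file paths are hypothetical placeholders, not the poster's actual configuration.

```shell
# Hypothetical QEMU invocation approximating the setup above.
# PCI addresses (01:00.0 etc.) and paths are placeholders; adjust to taste.
qemu-system-x86_64 \
  -machine q35,accel=kvm -cpu host -smp 8 \
  -m 24G \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=/path/to/win10_VARS.fd \
  -device vfio-pci,host=01:00.0 \
  -device vfio-pci,host=01:00.1 \
  -device vfio-pci,host=02:00.0 \
  -drive file=/path/to/win10.qcow2,if=virtio
```

The two pflash drives are the usual split-firmware OVMF arrangement (read-only code image plus a writable per-VM NVRAM variable store); the three vfio-pci devices would correspond to the GPU, its audio function, and the USB controller.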

This sounds a lot like a problem I had in the past with certain combinations of OVMF and the Linux kernel. If I remember correctly, some memory ranges would get marked as non-cacheable, which caused the kind of terrible slowdown you describe. The workaround back then was to stick to older versions of both the firmware and the kernel.

I still keep around an older version of OVMF that I used until recently - edk2.git-ovmf-x64-0-20160714.b1992.gfc3005.noarch.rpm. You could download the RPM here and see if it works for you:


Recent QEMU versions started complaining about firmware incompatibilities with that old build, so I tried the latest RPM from Kraxel (https://www.kraxel.org/repos/jenkins/edk2/) and it works just fine. The host system is Arch Linux with the latest Arch kernel and QEMU.


According to `perf`, the QEMU vCPU threads are spending most of their time contending on a spinlock (kvm->mmu_lock) taken by kvm_zap_gfn_range.
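For anyone wanting to reproduce that measurement, the contention can be observed roughly like this. A sketch only: the pgrep pattern assumes QEMU was started as qemu-system-x86_64, and perf needs root (or relaxed perf_event_paranoid) to see kernel symbols.

```shell
# Find the QEMU process (binary name is an assumption; adjust as needed).
QEMU_PID="$(pgrep -f qemu-system-x86_64 | head -n1)"

# Sample call stacks of all its threads for ~10 seconds during boot.
perf record -g -p "$QEMU_PID" -- sleep 10

# In the report, look for kvm_zap_gfn_range and the spinlock slowpath
# (e.g. queued_spin_lock_slowpath) dominating the kernel-side samples.
perf report --stdio | head -n 40
```

`perf top -p "$QEMU_PID"` gives the same picture live, which is handy while the firmware is still churning before the splash screen.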

I'm fairly certain that ~1 year ago (if not longer) the same configuration didn't take this long to boot.

vfio-users mailing list
vfio-users redhat com
