[vfio-users] VMs slow to a crawl with physical hardware attached

Curlen M curl2k1 at gmail.com
Tue Jan 19 15:59:04 UTC 2016


For the past month, I've been trying to get passthrough to work smoothly.
Needless to say, I haven't been successful.  I've tried various distros,
and even rebuilt/upgraded my rig from Z97 to X99 (tbh, I've been wanting to
do that anyway  :-D) thinking things would go smoother.  No dice.  Here's
what I'm currently using:

ASRock X99 Professional (the Gigabyte and Asus boards I tried before this
one were worse)
i7 5820k
32GB RAM
Lots-o-drives
660Ti
680
Fury X

The setup has both VT-x and VT-d enabled and is booting with the CSM
disabled.  All 3 GPUs are starting in UEFI GOP mode.
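
For what it's worth, this is roughly how I've been confirming the IOMMU is
really active on the host (Intel flags, so intel_iommu=on rather than the
AMD equivalent):

    cat /proc/cmdline                        # should include intel_iommu=on
    dmesg | grep -i -e dmar -e iommu         # look for the DMAR/IOMMU init messages
    find /sys/kernel/iommu_groups/ -type l   # confirms the groups actually got created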


The initial plan was to use the 660Ti for the host, bind the 680 to
vfio-pci for a SteamOS instance, and bind the Fury for a W10 instance.
Haven't had any issues with the binding itself, so I'm good there.
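
The binding is just the stock vfio-pci ids= option; the IDs below are
placeholders, I pull the real ones from lspci on my box:

    lspci -nn | grep -i -e vga -e audio      # vendor:device IDs for each GPU and its HDMI audio function

    # /etc/modprobe.d/vfio.conf (placeholder IDs -- substitute your own)
    options vfio-pci ids=10de:xxxx,10de:xxxx,1002:xxxx,1002:xxxx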

I'm able to build the VMs using Virt-Manager and get the OS installed.  But
as soon as I shut a VM down, assign a video card to it, and start it back
up, CPU utilization on the host shoots through the roof and the guest slows
to a crawl (I only noticed this by starting top by accident).  I'm talking
an hour or more just to reach the Windows 10 desktop.
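
One thing I keep double-checking is that the guest hasn't quietly fallen
back to software emulation instead of KVM once the card is attached, since
that would explain the pegged host CPU; roughly:

    lsmod | grep kvm                       # kvm and kvm_intel should both be loaded
    ls -l /dev/kvm                         # qemu/libvirt needs access to this
    virsh dumpxml win10 | grep '<domain '  # "win10" is just my domain name; should say type='kvm', not type='qemu'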

I've tried many different configurations (both SeaBIOS and Gerd's OVMF),
including removing the Fury from the system and attempting to use the 680
in W10, and vice versa.  I've also tried ditching Virt-Manager and libvirtd
entirely and using plain QEMU start scripts.
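
In case my bare-QEMU attempts are doing something dumb, this is roughly the
shape of the start script I've been trying (the OVMF paths and PCI address
are specific to my box, and I've left out disks/networking here):

    qemu-system-x86_64 \
      -enable-kvm -machine q35,accel=kvm -cpu host,kvm=off \
      -smp 4 -m 8192 \
      -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/path/to/win10_VARS.fd \
      -device vfio-pci,host=03:00.0,multifunction=on \
      -device vfio-pci,host=03:00.1 \
      -vga none -nographic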


I've since dropped the 660Ti out of the mix and have been attempting to use
the 680 for the host with the Fury passed to W10.  Same results.
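
About the only other thing I've thought to check is the host logs and the
qemu threads while the guest is grinding along, in case vfio or the IOMMU
is throwing faults:

    dmesg -w | grep -i -e vfio -e dmar         # watch for DMAR faults / vfio errors while the guest boots
    top -H -p "$(pgrep -d, -f qemu-system)"    # see which qemu threads are burning the CPU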


Anyone have any tips?  Ideas?  Anything?