Hello,

I am currently trying to set up a Windows 10 VM on a Fedora 23 host with QEMU-KVM, passing through a NIC, a USB controller, and an NVIDIA GPU (GTX 670). With my current setup, the NIC and USB controller are both passed through and function without issue. The GPU driver, however, gives the message "Windows has stopped this device because it has reported problems. (Code 43)".

I've been following Alex Williamson's guide ( http://vfio.blogspot.ca/2015/05/vfio-gpu-how-to-series-part-3-host.html ) and I believe I have successfully configured things on the host.

On the host, if I use "lshw" to look at my hardware devices, I can find the NIC, the USB controller, and both the GPU's video and audio controllers. They all correctly list their driver as "vfio-pci". All the device IDs are listed in the modprobe.d file, and I believe the vfio-pci binding is proof that this is working and that the host is not claiming these devices on boot.

I have also verified that the motherboard (MSI WORKSTATION C236A) groups the PCI devices correctly. The NIC and USB controller are each in their own IOMMU group, and the NVIDIA GPU's group contains three items: the root PCIe controller (which I believe should NOT be passed through) and the GPU's video and audio controllers, both of which WILL be passed through.

I configure an i440FX machine using virt-manager and set the firmware to UEFI x86_64. Initially I do not make any of the PCI devices available, and I install Windows onto the VM.

Next, I reboot the Guest and make only the pass-through NIC available. Its drivers install correctly and I have access to the LAN it connects to. I use that connection to copy over the virtio drivers for the Balloon driver installation, as well as the most up-to-date NVIDIA driver installer (but I don't run it yet). I also install TightVNC server.

Next, I shut down and remove all unused devices, as described in Alex's guide. I remove the Display and Video devices (I use the TightVNC server from here on to connect to the Guest), along with the USB redirect devices, the virtual NIC, etc. I add the pass-through USB controller and the NVIDIA audio and video devices.

Before booting again, I also edit the XML and add the required "<kvm><hidden state='on'/></kvm>" line in the features tag. Without this, the machine blue-screens every time once the NVIDIA driver has been installed.

Now I boot the Guest again, connect using the TightVNC server, and install the NVIDIA driver from the installer (I've tried different versions: standalone, through Windows Update, etc.). The driver installs successfully and requests a reboot. After rebooting, Device Manager shows the GTX 670 with a yellow mark and the message "Windows has stopped this device because it has reported problems. (Code 43)". No output comes from the card to the screen plugged into it (obviously).

I have also checked the following:

- No other devices appear with an issue in the Guest's Device Manager.
- No other display adapters are present or installed. I believe a pass-through GPU cannot be a secondary display device, so I've made sure of this.
- As mentioned above, the kvm hidden XML line is added. The log shows the "-cpu host,kvm=off" option is used to boot the VM, and removing the line from the XML causes a blue screen on boot, so I believe it's doing its job.
- As mentioned above, all devices that should not be bound by the host list vfio-pci as their driver.
- I have checked in the Host's motherboard BIOS settings that the default video device is the IGD. The host boots and uses the IGD without issue.
- The GPU should have sufficient power. My PSU is more than powerful enough, and I hear the GPU fan briefly spin up to full when the Host powers on.

I feel like nothing I'm doing is especially tricky, and in my mind this setup SHOULD work, based on everything I've read. But honestly I've run out of ideas on how to proceed with troubleshooting this.

Any help and ideas would be appreciated. Thanks!
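For reference, this is the shape of my host-side binding setup and the check I ran; the PCI IDs below are placeholders, not my real ones — take the actual vendor:device pairs from your own `lspci -nn` output:

```
# /etc/modprobe.d/vfio.conf -- ask vfio-pci to claim these IDs at boot
# (placeholders for the GPU video/audio functions, the NIC and the USB
#  controller; substitute the real IDs from `lspci -nn`)
options vfio-pci ids=10de:xxxx,10de:xxxx,8086:xxxx,8086:xxxx

# After a reboot, verify vfio-pci actually claimed each device, e.g.:
#   $ lspci -nnk -s 01:00.0
#   ...
#   Kernel driver in use: vfio-pci
```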
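And this is roughly how the features section of my domain XML looks with the hidden-state line added (a sketch — the acpi/apic entries are just the usual virt-manager defaults, shown for context):

```xml
<features>
  <acpi/>
  <apic/>
  <kvm>
    <!-- hide the KVM hypervisor signature from the guest; without this the
         NVIDIA driver detects the VM and the guest blue-screens -->
    <hidden state='on'/>
  </kvm>
</features>
```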
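In case it helps anyone reproduce the IOMMU-group check: this is the kind of sysfs walk I used to list the groups (a standard loop; it simply prints nothing on a kernel booted without an IOMMU):

```shell
#!/bin/bash
# Walk sysfs and print each IOMMU group together with the devices in it.
# With no IOMMU groups present the outer loop matches nothing and exits 0.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        addr=${dev##*/}
        # Show full lspci detail (with [vendor:device] IDs) when available,
        # otherwise fall back to the bare PCI address.
        if command -v lspci >/dev/null 2>&1; then
            echo -e "\t$(lspci -nns "$addr")"
        else
            echo -e "\t$addr"
        fi
    done
done
```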
vfio-users mailing list
vfio-users redhat com