[vfio-users] RE: "No signal" on dual Nvidia setup

Will Marler will at wmarler.com
Mon Jan 18 21:23:31 UTC 2016


Well, it looks to me like your GeForce GTX 970 is correctly being claimed
by vfio-pci, so I would expect that if you passed it to a VM, the VM should
be able to see it. I'd suggest removing <timer name='hypervclock'
present='yes'/> from your XML file and accessing the guest via VNC. You
should then be able to go into the Windows device manager and see the video
card there (where I actually think you'll currently see an Error 43,
because of the hypervclock line).
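
For reference, here's a sketch of the relevant sections of a passthrough
domain XML. The element names follow libvirt's domain XML format; the
<kvm><hidden> trick is the other workaround commonly paired with this for
Nvidia's Error 43, but whether it helps (and whether your libvirt version
supports it) will depend on your setup, so treat this as illustrative only:

```xml
<!-- Sketch only: features/clock sections of a typical passthrough domain.
     Dropping the hypervclock timer and hiding the KVM CPUID signature are
     the usual Error 43 workarounds; exact behavior depends on your
     libvirt/QEMU versions and Nvidia driver. -->
<features>
  <acpi/>
  <kvm>
    <hidden state='on'/>  <!-- hide the KVM signature from the guest -->
  </kvm>
</features>
<clock offset='localtime'>
  <!-- note: no <timer name='hypervclock' present='yes'/> here -->
  <timer name='rtc' tickpolicy='catchup'/>
</clock>
```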

On Sun, Jan 17, 2016 at 1:36 AM, Nicolas Roy-Renaud <
nicolas.roy-renaud.1 at ens.etsmtl.ca> wrote:

> Here's the output of some of the more common diagnostic commands. I'm
> also attaching my libvirt XML and the ROM I'm using on my guest GPU.
>
> [root at OCCAM user]# cat /proc/cmdline
> initrd=\intel-ucode.img initrd=\initramfs-linux.img
> root=PARTUUID=facab1af-8406-4245-881d-3bfca920f0cd rw intel_iommu=on
> iommu=pt rd.driver.pre=vfio-pci video=efifb:off vfio-pci.disable_vga=1
>
> [root at OCCAM user]# cat /etc/modprobe.d/*
> #options kvm ignore_msrs=1
> options vfio-pci ids=10de:13c2,10de:0fbb disable_vga=1
>
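
(As a side note, the binding those modprobe options request can be
double-checked straight from sysfs by resolving each device's "driver"
symlink. A minimal sketch, using the two GPU function addresses from the
lspci output below as examples:)

```shell
#!/bin/sh
# For each passed-through PCI function, print which kernel driver claimed
# it by resolving the sysfs "driver" symlink. Addresses are examples.
for dev in 0000:01:00.0 0000:01:00.1; do
    link="/sys/bus/pci/devices/$dev/driver"
    if [ -e "$link" ]; then
        echo "$dev -> $(basename "$(readlink "$link")")"
    else
        echo "$dev -> (no driver bound)"
    fi
done
```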
> [root at OCCAM user]# find /sys/kernel/iommu_groups/ -type l
> /sys/kernel/iommu_groups/0/devices/0000:00:00.0
> /sys/kernel/iommu_groups/1/devices/0000:00:01.0
> /sys/kernel/iommu_groups/1/devices/0000:01:00.0
> /sys/kernel/iommu_groups/1/devices/0000:01:00.1
> /sys/kernel/iommu_groups/2/devices/0000:00:14.0
> /sys/kernel/iommu_groups/3/devices/0000:00:16.0
> /sys/kernel/iommu_groups/4/devices/0000:00:1a.0
> /sys/kernel/iommu_groups/5/devices/0000:00:1b.0
> /sys/kernel/iommu_groups/6/devices/0000:00:1c.0
> /sys/kernel/iommu_groups/7/devices/0000:00:1c.1
> /sys/kernel/iommu_groups/8/devices/0000:00:1c.3
> /sys/kernel/iommu_groups/8/devices/0000:04:00.0
> /sys/kernel/iommu_groups/9/devices/0000:00:1c.4
> /sys/kernel/iommu_groups/10/devices/0000:00:1d.0
> /sys/kernel/iommu_groups/11/devices/0000:00:1f.0
> /sys/kernel/iommu_groups/11/devices/0000:00:1f.2
> /sys/kernel/iommu_groups/11/devices/0000:00:1f.3
> /sys/kernel/iommu_groups/12/devices/0000:03:00.0
> /sys/kernel/iommu_groups/13/devices/0000:06:00.0
> /sys/kernel/iommu_groups/13/devices/0000:06:00.1
>
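
(The raw find output above is easier to read when each group number is
paired with its lspci description. A short sketch of the loop commonly
used for this in VFIO guides; it assumes pciutils is installed:)

```shell
#!/bin/sh
# Walk the iommu_groups tree and print "IOMMU group N: <lspci one-liner>"
# for each device. The group number is extracted from the sysfs path with
# plain parameter expansion; lspci is the only external tool used.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=${dev#*/iommu_groups/}; group=${group%%/*}
    printf 'IOMMU group %s\t' "$group"
    lspci -nns "${dev##*/}"
done
```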
>
> [root at OCCAM user]# lspci -nnk
> 00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v2/Ivy Bridge
> DRAM Controller [8086:0158] (rev 09)
>         Subsystem: Micro-Star International Co., Ltd. [MSI] Device
> [1462:7758]
>         Kernel modules: ie31200_edac
> 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core
> processor PCI Express Root Port [8086:0151] (rev 09)
>         Kernel driver in use: pcieport
>         Kernel modules: shpchp
> ==================snip========================
> 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204
> [GeForce GTX 970] [10de:13c2] (rev a1)
>         Subsystem: ASUSTeK Computer Inc. Device [1043:8508]
>         Kernel driver in use: vfio-pci
>         Kernel modules: nouveau
> 01:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition
> Audio Controller [10de:0fbb] (rev a1)
>         Subsystem: ASUSTeK Computer Inc. Device [1043:8508]
>         Kernel driver in use: vfio-pci
>         Kernel modules: snd_hda_intel
> 03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd.
> RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev
> 06)
>         Subsystem: Micro-Star International Co., Ltd. [MSI] Device
> [1462:7758]
>         Kernel driver in use: r8169
>         Kernel modules: r8169
> 04:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to
> PCI Bridge [1b21:1080] (rev 01)
>         Kernel modules: shpchp
> 06:00.0 VGA compatible controller [0300]: NVIDIA Corporation GT218
> [GeForce G210] [10de:0a60] (rev a2)
>         Subsystem: PC Partner Limited / Sapphire Technology Device
> [174b:2180]
>         Kernel driver in use: nouveau
>         Kernel modules: nouveau
> 06:00.1 Audio device [0403]: NVIDIA Corporation High Definition Audio
> Controller [10de:0be3] (rev a1)
>         Subsystem: PC Partner Limited / Sapphire Technology Device
> [174b:2180]
>         Kernel driver in use: snd_hda_intel
>         Kernel modules: snd_hda_intel
>
>
> [root at OCCAM user]# dmesg -w #When starting a VM
> [ 4378.349041] device vnet0 entered promiscuous mode
> [ 4378.362333] virbr0: port 2(vnet0) entered listening state
> [ 4378.362343] virbr0: port 2(vnet0) entered listening state
> [ 4379.134931] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x1e at 0x258
> [ 4379.134938] vfio_ecap_init: 0000:01:00.0 hiding ecap 0x19 at 0x900
> [ 4380.367958] virbr0: port 2(vnet0) entered learning state
> [ 4382.371677] virbr0: topology change detected, propagating
> [ 4382.371685] virbr0: port 2(vnet0) entered forwarding state
> [ 4384.215276] kvm: zapping shadow pages for mmio generation wraparound
> [ 4384.219678] kvm: zapping shadow pages for mmio generation wraparound
> [ 4396.767174] kvm [1661]: vcpu2 unhandled rdmsr: 0x641
>
>
> ________________________________________
> From: vfio-users-bounces at redhat.com [vfio-users-bounces at redhat.com] on
> behalf of Nicolas Roy-Renaud [nicolas.roy-renaud.1 at ens.etsmtl.ca]
> Sent: 17 January 2016 03:03
> To: vfio-users at redhat.com
> Subject: [vfio-users] "No signal" on dual Nvidia setup
>
>
> For the last few days now, I've been trying to get GPU passthrough
> working on my computer, but I haven't been able to get the VM to output
> anything at all on my passthrough monitor since I started (I've had to
> either rely on a QXL adapter or just boot the drive bare metal).
> Here's my situation:
>
> I'm using 2 dedicated NVIDIA GPUs. One is an Asus GTX 970, which I
> want to pass through (PCI 01:00.0; IOMMU group 1), and the other is an
> old OEM GeForce 210 which will be running the host (PCI 06:00.0;
> IOMMU group 13). Since the 970 is set as my primary GPU, it is
> responsible for displaying my BIOS and bootloader until Linux boots,
> at which point its framebuffer is disabled and vfio-pci latches onto it.
> The 210, however, is still managed by the nouveau driver. Note that from
> the moment Linux starts up until I run a VM that tries to access the
> passthrough card, the guest card's framebuffer remains untouched
> and keeps showing my bootloader (systemd-boot). It gets flushed as soon
> as I start my Windows VM, and from there on the monitor receives no signal.
>
> My CPU (Xeon E3-1230 v2) and motherboard (MSI Z77-G43) both seem to
> support IOMMU. I've looked into the GPU ROM, which does appear
> to support EFI according to rom-parser, even though TechPowerUp says it
> shouldn't, and whether I try injecting a ROM or using the embedded one
> doesn't change the final result (although I do get "Invalid ROM content"
> warnings if I do the latter). Booting the guest with an extra QXL
> video adapter forces Windows to disable the guest card, and re-enabling
> it causes an immediate blue screen (and nothing on the monitor plugged
> into the guest card). No Error 43 there (yet), although I am using qemu
> 2.5 with the CPU's hv_vendor_id blanked out. x-vga flat out refuses to
> work, as my guest GPU doesn't support it according to QEMU.
>
> I'm not really sure where to go from there, so I thought I'd at least
> try my luck here before giving up. Actual logs will follow.
>
> _______________________________________________
> vfio-users mailing list
> vfio-users at redhat.com
> https://www.redhat.com/mailman/listinfo/vfio-users
>
>
>
>

