
Re: [libvirt] [PATCH] RFC: qemu: add spice/virgl rendernode



Hi

----- Original Message -----
> On Mon, Feb 13, 2017 at 08:08:20AM -0500, Marc-André Lureau wrote:
> > Hi
> > 
> > ----- Original Message -----
> > > On Mon, Feb 13, 2017 at 07:19:04AM -0500, Marc-André Lureau wrote:
> > > > Hi
> > > > 
> > > > ----- Original Message -----
> > > > > On Mon, Feb 13, 2017 at 03:51:48PM +0400, marcandre.lureau@redhat.com wrote:
> > > > > > From: Marc-André Lureau <marcandre.lureau@redhat.com>
> > > > > > 
> > > > > > I am working on a WIP series to add QEMU Spice/virgl rendernode
> > > > > > option.
> > > > > > Since rendernodes are not stable across reboots, I propose that
> > > > > > QEMU
> > > > > > accepts also a PCI address (other bus types may be added in the
> > > > > > future).
> > > > > 
> > > > > Hmm, can you elaborate on this aspect ?  It feels like a parallel
> > > > > to saying NIC device names are not stable, so we should configure
> > > > > guests using PCI addresses instead of 'eth0', etc., but we stuck with
> > > > > using NIC names in libvirt on the basis that you can create udev
> > > > > rules to ensure stable naming ?
> > > > > 
> > > > > So is there not a case to be made that if you want stable render
> > > > > device names when multiple GPUs are present, then you should use
> > > > > udev to ensure a given device always maps to the same PCI dev?
> > > > 
> > > > I thought it was simpler to use a PCI address (do you expect users
> > > > to create udev rules for the GPUs?)
> > > 
> > > Well most users will only have 1 GPU so surely this won't be a problem
> > > in the common case.  Is it possible to get some stable naming rules into
> > > udev upstream, though, so all distros get stable names by default?
> > 
> > Optimus is getting more and more mainstream; see the recent Fedora desktop
> > effort (FWIW, I have a T460p with nouveau/i915). I don't think a random user
> > of such a laptop should have to create udev rules.
> > 
> > I suppose systemd-udev could learn to create stable paths with help
> > from src/udev/udev-builtin-path_id.c. I will work on it. However,
> > I have virt-manager code to look up GPU info/paths using libdrm,
> > and it is unlikely to work with the udev rules, so I'll have to
> > patch libdrm to support that too.
> 
> The generic goal is that Libvirt should be providing enough information
> for apps to be able to configure the guest, without resorting to side
> channels like libdrm. This is to ensure that apps can manage guests
> with no loss of functionality even when connected to a remote libvirt.
> e.g. libdrm isn't going to be able to enumerate remote GPUs for
> virt-manager, so the necessary info needs to be exposed by libvirt
> via its virNodeDevice APIs. We can already identify the PCI device
> and that it's a GPU device, but I imagine we're not reporting any
> data about DRI render paths associated with the GPUs we report.
> So I think that's a gap we'd need to fill

Ah, that makes sense. I'll probably have to drop my WIP libdrm virt-manager code (although I'd like virt-manager to pick the current display GPU by default, since that is less likely to have issues, so perhaps it will still be useful).
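To make the "rendernodes are not stable, PCI addresses are" point concrete: the kernel already records which PCI device backs each render node in sysfs, so a fixed PCI address can be mapped back to whatever /dev/dri/renderD* name it happened to get on this boot. A minimal sketch (function names are mine, not anything from the patch or libvirt):

```python
import os

def pci_addr_of_render_node(node, drm_root="/sys/class/drm"):
    """Return the PCI address (e.g. '0000:00:02.0') backing a render node.

    /sys/class/drm/<node>/device is a symlink into the owning PCI
    device's sysfs directory; that directory's name is the PCI address.
    """
    return os.path.basename(
        os.path.realpath(os.path.join(drm_root, node, "device")))

def render_node_for_pci_addr(addr, drm_root="/sys/class/drm"):
    """Find the renderD* node whose backing device has the given PCI address."""
    for node in sorted(os.listdir(drm_root)):
        if node.startswith("renderD") and \
                pci_addr_of_render_node(node, drm_root) == addr:
            return "/dev/dri/" + node
    return None
```

So a guest config carrying "0000:01:00.0" keeps pointing at the same GPU across reboots, even if the kernel swaps renderD128 and renderD129.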

So I assume it's fine for libvirt to link with libdrm (it's not really a graphical library, it's more of a system-level library). I'll investigate the virNodeDevice changes.
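As a starting point for that investigation, one possible shape for the node device XML would be a child device of the GPU's PCI device that carries the DRM node type and sysfs path. This is only a sketch of what such a schema could look like, not an existing libvirt format:

```xml
<device>
  <name>drm_renderD128</name>
  <path>/sys/devices/pci0000:00/0000:00:02.0/drm/renderD128</path>
  <parent>pci_0000_00_02_0</parent>
  <capability type='drm'>
    <type>render</type>
  </capability>
</device>
```

With something like this, a remote virt-manager could walk from the PCI device reported as a GPU down to its render node without touching libdrm at all.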

thanks

