[libvirt] [PATCH v2] qemu: Honor the original PCI dev properties when reattaching

Osier Yang jyang at redhat.com
Thu Oct 13 08:30:22 UTC 2011


BZ# https://bugzilla.redhat.com/show_bug.cgi?id=736214

The problem is that the original info of a domain's PCI device is
maintained in the qemu_driver->activePciHostdevs list (e.g.
dev->reprobe, which indicates whether the driver needs to be reprobed
for the device when it is reattached). The fields (dev->reprobe,
dev->unbind_from_stub, and dev->remove_slot) are initialized properly
when preparing the PCI device for managed attachment. However,
reattachment constructs a completely new "pciDevice" without honoring
the original dev info, so the device won't be rebound to its original
driver, or can hit other problems.
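
(Illustrative fragment only, not libvirt code: it shows what is lost
when reattachment rebuilds the device from scratch. pciGetDevice() is
the util/pci.h helper of that era; the address variables are
placeholders.)

    /* Prepare time: the pciDevice stored in driver->activePciHostdevs
     * has dev->reprobe, dev->unbind_from_stub and dev->remove_slot
     * set to describe how the device was bound before attachment.
     *
     * Reattach time (before this patch): a brand-new object is built
     * from the hostdev definition instead, e.g. */
    pciDevice *fresh = pciGetDevice(domain, bus, slot, function);
    /* "fresh" carries none of the recorded state above, so
     * reattaching it cannot restore the original driver binding. */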

This patch fixes the problem by getting the device from the list
driver->activePciHostdevs if it is there. It is unlikely that a device
to be reattached is not in the list, as any PCI device is added to the
list during preparation (qemuPrepareHostdevPCIDevices); still, the
patch does not completely desert the old design, so the device can be
reattached even if it is not in driver->activePciHostdevs. A condensed
sketch of the new logic follows; the actual hunks are below.
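
(Illustrative sketch only, not part of the patch: it folds the new
steal-and-fall-back logic into a single loop, omitting error handling
and the reset phase that the real hunks keep between marking devices
inactive and reattaching them. Only helpers already visible in the
diff are used.)

    for (i = 0; i < pciDeviceListCount(pcidevs); i++) {
        pciDevice *dev = pciDeviceListGet(pcidevs, i);
        /* Prefer the object recorded at prepare time: it carries the
         * original reprobe/unbind_from_stub/remove_slot state. */
        pciDevice *activeDev = pciDeviceListSteal(driver->activePciHostdevs, dev);

        if (activeDev) {
            qemuReattachPciDevice(activeDev, driver);
            pciFreeDevice(activeDev);   /* stolen from the list, so we own it */
        } else {
            /* Unlikely: the device was never in activePciHostdevs */
            qemuReattachPciDevice(dev, driver);
        }
    }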

Tested with the following 3 scenarios (see the example domain XML
after the list):
  * the PCI device was bound to some driver other than pci-stub
    before attachment

    result: the device is rebound to its original driver

  * the PCI device was bound to pci-stub before attachment

    result: no driver reprobing, and still bound to pci-stub

  * the PCI device was not bound to any driver

    result: no driver reprobing, and still not bound to any driver.
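
For reference, this code path is exercised by a managed PCI hostdev in
the domain XML along the lines below (the PCI address is a
placeholder); with managed='yes', libvirt binds the device to pci-stub
when the domain starts and reattaches it to the host, via the function
fixed here, when the domain stops or the device is hot-detached.

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x06' slot='0x12' function='0x5'/>
      </source>
    </hostdev>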

v1 ~ v2:
  * Fix memory leak.
  * Add comments to explain why there are two loops to reattach
    devs.
---
 src/qemu/qemu_hostdev.c |   31 +++++++++++++++++++++++++++++--
 1 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_hostdev.c b/src/qemu/qemu_hostdev.c
index 6f77717..4c6555d 100644
--- a/src/qemu/qemu_hostdev.c
+++ b/src/qemu/qemu_hostdev.c
@@ -262,6 +262,7 @@ void qemuDomainReAttachHostdevDevices(struct qemud_driver *driver,
                                       int nhostdevs)
 {
     pciDeviceList *pcidevs;
+    pciDeviceList *activePciDevs = NULL;
     int i;
 
     if (!(pcidevs = qemuGetPciHostDeviceList(hostdevs, nhostdevs))) {
@@ -272,12 +273,23 @@ void qemuDomainReAttachHostdevDevices(struct qemud_driver *driver,
         return;
     }
 
+    if (!(activePciDevs = pciDeviceListNew()))
+        goto cleanup;
+
     /* Again 4 loops; mark all devices as inactive before reset
      * them and reset all the devices before re-attach */
-
     for (i = 0; i < pciDeviceListCount(pcidevs); i++) {
         pciDevice *dev = pciDeviceListGet(pcidevs, i);
-        pciDeviceListDel(driver->activePciHostdevs, dev);
+        pciDevice *activeDev = NULL;
+        activeDev = pciDeviceListSteal(driver->activePciHostdevs, dev);
+
+        if (activeDev) {
+            pciDeviceListDel(pcidevs, dev);
+            if (pciDeviceListAdd(activePciDevs, activeDev) < 0) {
+                pciFreeDevice(activeDev);
+                goto cleanup;
+            }
+        }
     }
 
     for (i = 0; i < pciDeviceListCount(pcidevs); i++) {
@@ -290,11 +302,26 @@ void qemuDomainReAttachHostdevDevices(struct qemud_driver *driver,
         }
     }
 
+    /* Reattach active devs, honoring the original dev info
+     * from driver->activePciHostdevs.
+     */
+    for (i = 0; i < pciDeviceListCount(activePciDevs); i++) {
+        pciDevice *dev = pciDeviceListGet(activePciDevs, i);
+        qemuReattachPciDevice(dev, driver);
+    }
+

+    /* Reattach inactive devs, though this is unlikely to happen,
+     * as all the domain PCI devices are added to the list
+     * driver->activePciHostdevs and marked as active during
+     * preparation (qemuPrepareHostdevPCIDevices).
+     */
     for (i = 0; i < pciDeviceListCount(pcidevs); i++) {
         pciDevice *dev = pciDeviceListGet(pcidevs, i);
         qemuReattachPciDevice(dev, driver);
     }
 
+cleanup:
+    pciDeviceListFree(activePciDevs);
     pciDeviceListFree(pcidevs);
 }
 
-- 
1.7.6



