
Re: [libvirt] [PATCH v3] qemu: Allow domain reboot after core dump



On 09/23/2011 03:47 AM, Michal Privoznik wrote:
This patch introduces the possibility of rebooting a domain after the core
dump finishes. A new flag, VIR_DUMP_REBOOT, was added to
virDomainCoreDumpFlags. The new functionality is also accessible via virsh:
virsh dump --reboot <domain>
---
diff to v2:
-issue reset command instead of qemuDomainReboot

  include/libvirt/libvirt.h.in |    1 +
  src/qemu/qemu_driver.c       |   54 +++++++++++++++++++++++++++++++++++++++--
  tools/virsh.c                |    3 ++
  3 files changed, 55 insertions(+), 3 deletions(-)

Missing documentation of the new flag in src/libvirt.c and tools/virsh.pod. Also, is the new flag compatible with VIR_DUMP_CRASH or VIR_DUMP_LIVE? For example, we already declare _CRASH and _LIVE as mutually exclusive, and off-hand, it seems like REBOOT is exclusive with both of these as well.

Also, I'd split this patch into two pieces - one for documenting the new API (include/, src/libvirt.c, tools/), and the other for the qemu implementation of the new API (since most of my technical concerns are over the qemu implementation).

Independent of your patch, I also just realized that virDomainSave[Flags], virDomainRestore[Flags], virDomainSaveImageGetXMLDesc, virDomainSaveImageDefineXML, and virDomainCoreDump all have the same design issue: they are not very nice for remote management. Each of these functions convert a relative path name into absolute name on the client side, before passing the string to the remote side, even though the actual file to be used is on the remote side. This is not always guaranteed to work, and also leaves things stuck with the file being on the remote side (no way to dump the core across the network back to the client, like there is with virDomainScreenshot).

At some point, we may want to introduce new API that performs these types of operations on a stream, where the client can then manage the stream. And/or we may want to introduce a way to specify the "file" to manipulate as a virStorageVolPtr, or even a string relative to a virStoragePoolPtr, for more precise control of where the file ends up without having to first "absolutize" the file argument in the client.
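For illustration only, a stream-based variant might have a signature along these lines (the name and shape are invented here; no such API exists in libvirt at the time of writing):

```c
/* Hypothetical: dump a guest core into a client-managed stream,
 * analogous to how virDomainScreenshot returns data via virStreamPtr.
 * Purely a sketch, not a real libvirt API. */
int virDomainCoreDumpToStream(virDomainPtr dom,
                              virStreamPtr st,
                              unsigned int flags);
```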

+++ b/src/qemu/qemu_driver.c
@@ -2860,6 +2860,47 @@ getCompressionType(struct qemud_driver *driver)
      return compress;
  }

+static int
+doSystemReset(struct qemud_driver *driver,
+              virDomainObjPtr *vm)
+{
+    int ret = -1;
+    qemuDomainObjPrivatePtr priv;
+
+    /* Okay, this should never happen, but doesn't hurt to check. */
+    if (!driver || !vm || !*vm) {
+        qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                        _("invalid argument supplied"));
+        goto cleanup;
+    }
+
+    priv = (*vm)->privateData;
+
+    if (qemuDomainObjBeginJob(driver, *vm, QEMU_JOB_MODIFY) < 0)
+        goto cleanup;
+
+    if (!virDomainObjIsActive(*vm)) {
+        qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                        _("guest unexpectedly quit"));
+        goto endjob;
+    }

Just wondering if this is a case where it would actually make sense to start up a new qemu process, instead of just reporting that the guest unexpectedly quit.

+
+    qemuDomainObjEnterMonitorWithDriver(driver, *vm);
+    if (qemuMonitorSystemReset(priv->mon) < 0) {
+        qemuDomainObjExitMonitorWithDriver(driver, *vm);

This is the key point of this new function.  But I can't help but wonder...

+        goto endjob;
+    }
+    qemuDomainObjExitMonitorWithDriver(driver, *vm);
+
+    ret = 0;
+
+endjob:
+    if (qemuDomainObjEndJob(driver, *vm) == 0)
+        *vm = NULL;
+cleanup:
+    return ret;
+}
+
  static int qemudDomainCoreDump(virDomainPtr dom,
                                 const char *path,
                                 unsigned int flags)
@@ -2870,7 +2911,8 @@ static int qemudDomainCoreDump(virDomainPtr dom,
      int ret = -1;
      virDomainEventPtr event = NULL;

-    virCheckFlags(VIR_DUMP_LIVE | VIR_DUMP_CRASH | VIR_DUMP_BYPASS_CACHE, -1);
+    virCheckFlags(VIR_DUMP_LIVE | VIR_DUMP_CRASH |
+                  VIR_DUMP_BYPASS_CACHE | VIR_DUMP_REBOOT, -1);

      qemuDriverLock(driver);
      vm = virDomainFindByUUID(&driver->domains, dom->uuid);
@@ -2949,11 +2991,17 @@ endjob:
      }

  cleanup:
-    if (vm)
-        virDomainObjUnlock(vm);
      if (event)
          qemuDomainEventQueue(driver, event);
+
+    if ((ret == 0) && (flags & VIR_DUMP_REBOOT) && vm) {
+        ret = doSystemReset(driver, &vm);

This ends up starting a second job. Wouldn't it be better to reuse the first job, by checking for VIR_DUMP_REBOOT in the endjob label alongside the check for 'resume && paused', and make the call to qemuMonitorSystemReset instead of qemuProcessStartCPUs at that point, so that the cleanup label remains untouched?
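A rough, untested sketch of that endjob-label approach, using the names from the quoted patch (the exact placement relative to the existing resume handling, and the availability of priv at that point, are assumptions):

```diff
 endjob:
     /* existing 'resume && paused' handling stays as-is ... */
+    if (ret == 0 && (flags & VIR_DUMP_REBOOT) &&
+        virDomainObjIsActive(vm)) {
+        qemuDomainObjEnterMonitorWithDriver(driver, vm);
+        if (qemuMonitorSystemReset(priv->mon) < 0)
+            ret = -1;
+        qemuDomainObjExitMonitorWithDriver(driver, vm);
+    }
```

This keeps everything inside the one job already held by qemudDomainCoreDump, so the cleanup label (and its unlock of vm) is untouched.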

--
Eric Blake   eblake redhat com    +1-801-349-2682
Libvirt virtualization library http://libvirt.org
