[libvirt] [PATCH] Support VCPU hotplug in QEMU guests

Daniel Veillard veillard at redhat.com
Tue Feb 16 16:28:10 UTC 2010


On Tue, Feb 16, 2010 at 04:23:25PM +0000, Daniel P. Berrange wrote:
> On Tue, Feb 16, 2010 at 03:55:26PM +0100, Daniel Veillard wrote:
> > On Tue, Feb 16, 2010 at 12:12:31PM +0000, Daniel P. Berrange wrote:
> > > QEMU has a monitor command 'set_cpu' which allows a specific
> > > CPU to be toggled between online & offline states. libvirt CPU
> > > hotplug does not work in terms of individual CPU indexes.
> > > Thus to support this, we iteratively toggle the online state
> > > when the total number of vCPUs is adjusted via libvirt.
> > > 
> > > NB, currently untested since QEMU segvs when running this!
> > > 
> > > * src/qemu/qemu_driver.c: Toggle online state for CPUs when
> > >   doing hotplug
> > > * src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h,
> > >   src/qemu/qemu_monitor_json.c, src/qemu/qemu_monitor_json.h,
> > >   src/qemu/qemu_monitor_text.c, src/qemu/qemu_monitor_text.h: Add
> > >   monitor API for toggling a CPU's online status via 'set_cpu'
> > 
> > > +static int qemudDomainHotplugVcpus(virDomainObjPtr vm, unsigned int nvcpus)
> > > +{
> > > +    qemuDomainObjPrivatePtr priv = vm->privateData;
> > > +    int i, rc;
> > > +    int ret = -1;
> > > +
> > > +    /* We need different branches here, because we want to offline
> > > +     * in reverse order to onlining, so any partial fail leaves us in a
> > > +     * reasonably sensible state */
> > > +    if (nvcpus > vm->def->vcpus) {
> > > +        for (i = vm->def->vcpus ; i < nvcpus ; i++) {
> > > +            /* Online new CPU */
> > > +            rc = qemuMonitorSetCPU(priv->mon, i, 1);
> > > +            if (rc == 0)
> > > +                goto unsupported;
> > > +            if (rc < 0)
> > > +                goto cleanup;
> > > +
> > > +            vm->def->vcpus++;
> > > +        }
> > > +    } else {
> > > +        for (i = vm->def->vcpus - 1 ; i >= nvcpus ; i--) {
> > > +            /* Offline old CPU */
> > > +            rc = qemuMonitorSetCPU(priv->mon, i, 0);
> > > +            if (rc == 0)
> > > +                goto unsupported;
> > > +            if (rc < 0)
> > > +                goto cleanup;
> > > +
> > > +            vm->def->vcpus--;
> > > +        }
> > > +    }
> > > +
> > > +    ret = 0;
> > > +
> > > +cleanup:
> > > +    return ret;
> > > +
> > > +unsupported:
> > > +    qemudReportError(NULL, NULL, NULL, VIR_ERR_OPERATION_INVALID, "%s",
> > > +                     _("cannot change vcpu count of an active domain"));
> > > +    goto cleanup;
> > > +}
> > 
> >   Hum, seems QEMU would allow CPU 0 and 2 to be online and CPU 1 to be
> >   offline, for example, but our code really assumes all CPUs from 0 to
> >   vm->def->vcpus are always online (and the others offline). I hope this
> >   restriction we impose won't be a problem later.
> 
> Yes, that is correct. With the current libvirt API / XML description,
> this is the best that we can do. In the future we might find we have
> to extend our format to allow exact specification of which individual
> CPUs are online. Currently I don't think this matters, but if we add
> support for NUMA topology inside the guest, we might need finer 
> control, e.g. to unplug a specific CPU from a specific guest
> NUMA cell.

  Okay, we're on the same page.
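
  As an aside, the qemuMonitorSetCPU() entry point used in the driver hunk
  above is the monitor API this patch adds; its body is not quoted here.
  Presumably it follows the same pattern as the other wrappers in
  qemu_monitor.c and simply dispatches to the JSON or text implementation.
  A rough sketch, not the posted patch; the mon->json flag and the
  qemuMonitorTextSetCPU() name are assumed from libvirt's usual conventions:

    int qemuMonitorSetCPU(qemuMonitorPtr mon, int cpu, int online)
    {
        int ret;

        /* Sketch only: pick the protocol the way the other
         * qemu_monitor.c wrappers are assumed to, via mon->json */
        if (mon->json)
            ret = qemuMonitorJSONSetCPU(mon, cpu, online);
        else
            ret = qemuMonitorTextSetCPU(mon, cpu, online);

        return ret;
    }

  Either way the whole path is ultimately driven by the public
  virDomainSetVcpus() API, e.g. 'virsh setvcpus <domain> <count>' against
  a running guest.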

> > 
> > [...]
> > > +int qemuMonitorJSONSetCPU(qemuMonitorPtr mon,
> > > +                          int cpu, int online)
> > > +{
> > > +    int ret;
> > > +    virJSONValuePtr cmd = qemuMonitorJSONMakeCommand("set_cpu",
> > > +                                                     "U:cpu", (unsigned long long)cpu,
> > > +                                                     "s:state", online ? "online" : "offline",
> > > +                                                     NULL);
> > > +    virJSONValuePtr reply = NULL;
> > > +    if (!cmd)
> > > +        return -1;
> > > +
> > > +    ret = qemuMonitorJSONCommand(mon, cmd, &reply);
> > > +
> > > +    if (ret == 0) {
> > > +        /* XXX See if CPU soft-failed due to lack of ACPI */
> > > +#if 0
> > > +        if (qemuMonitorJSONHasError(reply, "DeviceNotActive") ||
> > > +            qemuMonitorJSONHasError(reply, "KVMMissingCap"))
> > > +            goto cleanup;
> > > +#endif
> > 
> >   Hum ... why can't we activate this now ?
> 
> QEMU doesn't actually return those errors in this case. This was a cut+paste
> from the balloon driver that I left in to remind me to investigate that later.

  ah, okay
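
  One thing the quoting above drops (the '[...]') is the text-monitor
  counterpart listed in the commit message. Purely to illustrate the shape
  it would take, here is a sketch rather than the posted code; it assumes
  the 'set_cpu <n> online|offline' human-monitor command named in the
  commit message, plus the virAsprintf(), qemuMonitorCommand() and
  qemudReportError() helpers that the other text-monitor handlers use:

    int qemuMonitorTextSetCPU(qemuMonitorPtr mon, int cpu, int online)
    {
        char *cmd = NULL;
        char *reply = NULL;
        int ret = -1;

        /* Assumed command syntax: "set_cpu 1 online" */
        if (virAsprintf(&cmd, "set_cpu %d %s",
                        cpu, online ? "online" : "offline") < 0)
            return -1;   /* real code would report the OOM */

        if (qemuMonitorCommand(mon, cmd, &reply) < 0) {
            qemudReportError(NULL, NULL, NULL, VIR_ERR_OPERATION_FAILED,
                             "%s", _("could not change CPU online state"));
            goto cleanup;
        }

        /* If the monitor does not know the command at all (older QEMU),
         * assume the reply says so and map that onto the rc == 0
         * "unsupported" convention the driver loop checks for */
        if (strstr(reply, "unknown command:")) {
            ret = 0;
            goto cleanup;
        }

        ret = 1;

    cleanup:
        VIR_FREE(cmd);
        VIR_FREE(reply);
        return ret;
    }

  The return convention (1 for success, 0 for unsupported, -1 for error)
  matters because the driver loop quoted earlier treats rc == 0 as "hotplug
  not supported" and rc < 0 as a hard error.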

  re-ACK

   thanks !

Daniel

-- 
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel at veillard.com  | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/



