[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [libvirt] PATCH: Support initial boot time CPU affinity mask



On Fri, May 16, 2008 at 10:50:20PM +0100, Daniel P. Berrange wrote:
> The XML format allows for an initial CPU mask to be specified for a guest's
> vCPUs. eg with this XML:
> 
>   <vcpu cpuset='1-4,8-20,525'>1</vcpu>

  What about output? In the Xen case we went through the exercise of
dumping a cpuset string back only if it wasn't 'all set'.
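  For reference, the cpuset attribute parsing can be sketched roughly like
this (parse_cpuset is a hypothetical, simplified stand-in for libvirt's
virParseCpuSet — the real function also handles exclusions such as "^2"):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified stand-in for virParseCpuSet(): parse a
 * cpuset string such as "1-4,8-20,525" into a byte-per-CPU mask.
 * Returns 0 on success, -1 on malformed input or out-of-range CPU. */
static int parse_cpuset(const char *str, char *mask, int masklen)
{
    memset(mask, 0, masklen);
    while (*str) {
        char *end;
        long start = strtol(str, &end, 10);
        long last = start;
        if (end == str || start < 0 || start >= masklen)
            return -1;
        str = end;
        if (*str == '-') {              /* a range, e.g. "8-20" */
            last = strtol(str + 1, &end, 10);
            if (end == str + 1 || last < start || last >= masklen)
                return -1;
            str = end;
        }
        for (long i = start; i <= last; i++)
            mask[i] = 1;                /* mark each CPU in the run */
        if (*str == ',')
            str++;
        else if (*str != '\0')
            return -1;
    }
    return 0;
}
```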

> Since we have CPU pinning support from my previous patch, adding in the
> initial pinning is fairly easy. We first pass the '-S' arg to QEMU when
> forking it. This causes it to initialize, but not start the CPUs in the
> guest. We then set the affinity mask for all its CPUs, and then send 
> the 'cont' command to the monitor to start execution.
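
  The pinning step between -S and 'cont' amounts to translating the
byte-per-CPU mask into a cpu_set_t and calling sched_setaffinity() on the
QEMU process (or its vCPU threads). A minimal sketch, with hypothetical
helper names — this is not the patch's actual code:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <sys/types.h>

/* Translate a byte-per-CPU mask (1 = allowed) into a cpu_set_t. */
static void mask_to_cpuset(const char *mask, int masklen, cpu_set_t *set)
{
    CPU_ZERO(set);
    for (int i = 0; i < masklen && i < CPU_SETSIZE; i++)
        if (mask[i])
            CPU_SET(i, set);
}

/* Pin one vCPU thread (or the whole QEMU process, tid = pid) to the
 * mask; done while the guest is held stopped by -S, before 'cont'. */
static int pin_vcpu(pid_t tid, const char *mask, int masklen)
{
    cpu_set_t set;
    mask_to_cpuset(mask, masklen, &set);
    return sched_setaffinity(tid, sizeof(set), &set);
}
```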
[...]
> +    /* Extract domain vcpu info */
> +    obj = xmlXPathEval(BAD_CAST "string(/domain/vcpu[1]/@cpuset)", ctxt);
> +    if ((obj == NULL) || (obj->type != XPATH_STRING) ||
> +        (obj->stringval == NULL) || (obj->stringval[0] == 0)) {
> +        /* Allow use on all CPUS */
> +        memset(def->cpumask, 1, QEMUD_CPUMASK_LEN);
> +    } else {
> +        char *set = (char *)obj->stringval;
> +        memset(def->cpumask, 0, QEMUD_CPUMASK_LEN);
> +        if (virParseCpuSet(conn, (const char **)&set,
> +                           0, def->cpumask,
> +                           QEMUD_CPUMASK_LEN) < 0) {
> +            qemudReportError(conn, NULL, NULL, VIR_ERR_INTERNAL_ERROR,
> +                             "%s", _("malformed vcpu mask information"));
> +            goto error;
> +        }
> +    }
> +    xmlXPathFreeObject(obj);

  virXPathString() would make it way cleaner IMHO

[...]
>      virBufferVSprintf(&buf, "  <memory>%lu</memory>\n", def->maxmem);
>      virBufferVSprintf(&buf, "  <currentMemory>%lu</currentMemory>\n", def->memory);
> -    virBufferVSprintf(&buf, "  <vcpu>%d</vcpu>\n", def->vcpus);
> +
> +    for (n = 0 ; n < QEMUD_CPUMASK_LEN ; n++)
> +        if (def->cpumask[n] != 1)
> +            allones = 0;
> +
> +    if (allones) {
> +        virBufferVSprintf(&buf, "  <vcpu>%d</vcpu>\n", def->vcpus);

  okay that answers my question :-)
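
  The inverse direction — turning the mask back into a "1-4,8-20" string
when it isn't all ones — just coalesces runs of set bytes. A hypothetical
sketch of that formatting helper (not the patch's code):

```c
#include <stdio.h>

/* Render a byte-per-CPU mask back into cpuset form, e.g. "1-4,8",
 * coalescing consecutive CPUs into ranges.  Returns the number of
 * characters written to buf. */
static int format_cpuset(const char *mask, int masklen,
                         char *buf, size_t buflen)
{
    int len = 0, first = 1;
    for (int i = 0; i < masklen; i++) {
        if (!mask[i])
            continue;
        int j = i;
        while (j + 1 < masklen && mask[j + 1])  /* extend the run */
            j++;
        if (i == j)
            len += snprintf(buf + len, buflen - len, "%s%d",
                            first ? "" : ",", i);
        else
            len += snprintf(buf + len, buflen - len, "%s%d-%d",
                            first ? "" : ",", i, j);
        first = 0;
        i = j;                                  /* skip past the run */
    }
    return len;
}
```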


  Patch looks fine. Since we have migration nearly ready, it would be 
interesting to check the combination of both at some point too,

  Looks fine to me, +1,

Daniel

-- 
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard      | virtualization library  http://libvirt.org/
veillard redhat com  | libxml GNOME XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine  http://rpmfind.net/

