[libvirt] [PATCH 3/3] qemu: Return true pining info when using numad

John Ferlan jferlan at redhat.com
Wed Aug 5 01:27:13 UTC 2015


$SUBJ

s/pining/pinning

Or perhaps - "qemu: Use numad information when getting pin information"

On 07/26/2015 12:57 PM, Martin Kletzander wrote:
> Pinning information returned for emulatorpin and vcpupin calls is being
> returned from our data without querying cgroups for some time.  However,
> not all the data were utilized.  When automatic placement is used the
> information is not returned for the calls mentioned above.  Since the
> numad hint in private data is properly saved/restored, we can safely use
> it to return true information.
> 
> Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1162947
> 
> Signed-off-by: Martin Kletzander <mkletzan at redhat.com>
> ---
>  src/qemu/qemu_driver.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 

Should qemuDomainGetIOThreadsConfig be adjusted as well?  In the for
loop that fetches and fills in each iothreadid, the cpumask is filled
in the same way.

The patches otherwise seem reasonable, although patch 2 could use a bit
more detail in the commit log to explain what's being done...
Beyond that, does the autoCpuset value matter if placement_mode !=
VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO, or if
!virDomainDefNeedsPlacementAdvice (from qemuProcessStart)?  I was
checking where it gets set and whether it's set to something reasonable...

John

> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 40c882c4ba88..1e090bb5c36b 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -5224,6 +5224,7 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom,
>      int ret = -1;
>      int hostcpus, vcpu;
>      virBitmapPtr allcpumap = NULL;
> +    qemuDomainObjPrivatePtr priv = NULL;
> 
>      virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
>                    VIR_DOMAIN_AFFECT_CONFIG, -1);
> @@ -5244,6 +5245,7 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom,
>          goto cleanup;
> 
>      virBitmapSetAll(allcpumap);
> +    priv = vm->privateData;
> 
>      /* Clamp to actual number of vcpus */
>      if (ncpumaps > def->vcpus)
> @@ -5262,6 +5264,9 @@ qemuDomainGetVcpuPinInfo(virDomainPtr dom,
> 
>          if (pininfo && pininfo->cpumask)
>              bitmap = pininfo->cpumask;
> +        else if (vm->def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO &&
> +                 priv->autoCpuset)
> +            bitmap = priv->autoCpuset;
>          else
>              bitmap = allcpumap;
> 
> @@ -5412,6 +5417,7 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom,
>      int hostcpus;
>      virBitmapPtr cpumask = NULL;
>      virBitmapPtr bitmap = NULL;
> +    qemuDomainObjPrivatePtr priv = NULL;
> 
>      virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
>                    VIR_DOMAIN_AFFECT_CONFIG, -1);
> @@ -5428,10 +5434,15 @@ qemuDomainGetEmulatorPinInfo(virDomainPtr dom,
>      if ((hostcpus = nodeGetCPUCount(NULL)) < 0)
>          goto cleanup;
> 
> +    priv = vm->privateData;
> +
>      if (def->cputune.emulatorpin) {
>          cpumask = def->cputune.emulatorpin;
>      } else if (def->cpumask) {
>          cpumask = def->cpumask;
> +    } else if (vm->def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO &&
> +               priv->autoCpuset) {
> +        cpumask = priv->autoCpuset;
>      } else {
>          if (!(bitmap = virBitmapNew(hostcpus)))
>              goto cleanup;
> 

