[libvirt] [RFC] qemu: Redesigning guest CPU configuration

Jiri Denemark jdenemar at redhat.com
Mon Jun 22 16:43:56 UTC 2015


On Mon, Jun 22, 2015 at 17:09:22 +0100, Daniel P. Berrange wrote:
> On Mon, Jun 22, 2015 at 05:58:46PM +0200, Jiri Denemark wrote:
> > However, knowing all the details about a guest CPU used by QEMU for a
> > given CPU model on a specific machine type is not enough to enforce ABI
> > stability. Without using -cpu Model,enforce (or an equivalent of
> > checking filtered-features via QMP) QEMU may silently filter features it
> > cannot provide on the current host. Even in the case of TCG some
> > features are not supported; e.g., -cpu SandyBridge will always fail to
> > start in enforcing mode. Even doing something ugly and enabling the
> > enforce mode only for new machine types is not going to work, because
> > after a QEMU upgrade a new libvirt would be incompatible with an older
> > libvirt.
> 
> I'm not sure I follow the scenario you're concerned with.
> 
> Let's say we have guest XML <cpu><model>SandyBridge</model></cpu> and
> so we're using the new "custom" -cpu arg that QEMU supports. Are you
> saying that we won't be able to live migrate from the "-cpu custom"
> with new QEMU, to "-cpu SandyBridge" with old QEMU, even if the CPU
> seen by the guest is identical?

There are two more or less separate issues. The first one is switching
from -cpu SandyBridge to -cpu custom. This should be safe: with the
mapping between machine types and CPU model versions, both command
lines should remain compatible even during migration.
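
For illustration only (the "custom" model is just a proposal at this
point, so both the syntax and the concrete values below are merely a
sketch), the idea is that instead of

    -cpu SandyBridge

libvirt would pass the fully expanded definition it computed for the
given machine type, something like

    -cpu custom,family=6,model=42,stepping=1,+sse4.1,+sse4.2,+x2apic,...

so that the guest-visible CPUID stays the same no matter which of the
two command lines ends up being used on either side of a migration.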

The second one is adding enforce, i.e., -cpu SandyBridge,enforce or -cpu
custom,enforce (libvirt will likely implement it in a different way but
the effect will be the same) to make sure QEMU does not filter any CPU
feature. Without "enforce", QEMU may silently drop any feature requested
by the SandyBridge CPU model. During migration, QEMU on each side can
filter a different set of features if the host CPUs, kernel versions or
settings, QEMU versions, or BIOS settings differ. So we really need to
start making sure QEMU does not filter anything. But we shouldn't do
that for existing domains, which could suddenly fail to start or migrate
because QEMU had been filtering some features and we did not care so far.
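
As a sketch of what the QMP alternative to "enforce" could look like
(the object path of the first vCPU below is an assumption, it differs
between configurations), libvirt could ask QEMU which features were
dropped and refuse to start or migrate the domain if any were:

    { "execute": "qom-get",
      "arguments": { "path": "/machine/unattached/device[0]",
                     "property": "filtered-features" } }

If the reply reports any filtered feature, QEMU silently removed
something from the requested model and the guest ABI no longer matches
what the domain XML asked for.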

> > That said, I don't really see a way to do all this automatically without
> > an explicit switch in a domain XML. Be it a new CPU mode or an attribute
> > which would request enforcing ABI stability.
> 
> I don't like the idea of adding more to the mode=custom|host-model|passthrough
> options, but perhaps we could signify this in a different way.

I'm not a big fan of this either.

> For example, what we're really doing here is switching between use of libvirt
> and use of QEMU for CPU emulation. In similar cases, for other device types
> we use the <driver> element to identify the backend impl. So perhaps we
> could do a
> 
>   <cpu>
>      ...
>     <driver name="libvirt|qemu"/>
>   </cpu>
> 
> To distinguish between use of libvirt and use of QEMU for the CPU model/feature
> handling? Ideally, if not specified, we'd magically choose the "best"
> approach given the QEMU we have available.

This would cover the first issue (-cpu Model vs. -cpu custom), which I
think can be done automatically and we shouldn't need any XML
modifications for it.

I'm more concerned about the second issue (enforcing ABI stability). We
could automatically enable enforcing mode for new CPU models, but we
need an explicit switch for existing models. I was thinking about an
attribute on <cpu>, or maybe even better on <model>, which would turn
enforcing on. Something like

    <cpu mode='custom' match='exact'>
        <model fallback='allow' check='strict|relaxed'>...</model>
        ...
    </cpu>

Any CPU model which is not currently known to libvirt could easily
default to check='strict'.
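
To make the upgrade path concrete (the values below are illustrative
only), an existing domain would keep today's behavior spelled out
explicitly:

    <cpu mode='custom' match='exact'>
        <model fallback='allow' check='relaxed'>SandyBridge</model>
        <feature policy='require' name='x2apic'/>
    </cpu>

while a model libvirt does not know about would get check='strict',
which libvirt would implement roughly as the ",enforce" suffix (or the
equivalent QMP check) when starting the domain.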

Jirka



