[libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

Itamar Heim iheim at redhat.com
Mon Mar 12 18:53:09 UTC 2012


On 03/11/2012 05:33 PM, Anthony Liguori wrote:
> On 03/11/2012 09:56 AM, Gleb Natapov wrote:
>> On Sun, Mar 11, 2012 at 09:12:58AM -0500, Anthony Liguori wrote:
>>> -cpu best wouldn't solve this. You need a read/write configuration
>>> file where QEMU probes the available CPU and records it to be used
>>> for the lifetime of the VM.
>> That's what I thought too, but this shouldn't be the case (Avi's idea).
>> We need two things: 1) CPU model config should be per machine type.
>> 2) QEMU should refuse to start if it cannot create a CPU exactly as
>> specified by the model config.
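
For reference, QEMU already has a flag pointing in this direction: appending
",check" or ",enforce" to -cpu makes it warn, and with enforce is meant to
refuse to start, when the host/KVM cannot supply every feature the named
model defines. A rough sketch, with the machine type and model picked
arbitrarily for illustration:

    # refuse to start unless every feature of the Westmere model
    # can actually be provided to the guest
    qemu-system-x86_64 -M pc-1.0 -cpu Westmere,enforce ...

libvirt expresses a similar requirement at its level with
<cpu match='exact'> in the domain XML.
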
>
> This would either mean:
>
> A. pc-1.1 uses -cpu best with a fixed mask for 1.1
>
> B. pc-1.1 hardcodes Westmere or some other family
>
> (A) would imply a different CPU if you moved the machine from one system
> to another. I would think this would be very problematic from a user's
> perspective.
>
> (B) would imply that we had to choose the least common denominator which
> is essentially what we do today with qemu64. If you want to just switch
> qemu64 to Conroe, I don't think that's a huge difference from what we
> have today.
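
Spelled out as purely illustrative command lines ("-cpu best" being only the
proposal under discussion, while Westmere is an existing model):

    # (A) the machine type's default CPU tracks the host, masked to the
    #     1.1-era feature set, roughly as if the user had typed:
    qemu-system-x86_64 -M pc-1.1 -cpu best

    # (B) the machine type's default CPU is a fixed named model, roughly
    #     as if the user had typed:
    qemu-system-x86_64 -M pc-1.1 -cpu Westmere
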
>
>>> It's a discussion about how we handle this up and down the stack.
>>>
>>> The question is who should define and manage CPU compatibility.
> Right now QEMU does to a certain degree, libvirt discards this and
> does its own thing, and VDSM/ovirt-engine assume that we're
> providing something and have built a UI around it.
>> If we want QEMU to be usable without a management layer, then QEMU
>> should provide stable CPU models. Stable in the sense that a qemu,
>> kernel, or CPU upgrade does not change what the guest sees.
>
> We do this today by exposing -cpu qemu64 by default. If all you're
> advocating is doing -cpu Conroe by default, that's fine.
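
The named models involved can be listed directly; a quick way to see what
the current default (qemu64) and the alternatives look like, on whatever
QEMU build is installed:

    # list the named CPU models this QEMU build knows about
    # (qemu64, Conroe, Penryn, Nehalem, Westmere, ...)
    qemu-system-x86_64 -cpu ?

    # what "-cpu Conroe by default" would amount to, requested explicitly
    qemu-system-x86_64 -cpu Conroe ...
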
>
> But I fail to see where this fits into the larger discussion here. The
> problem to solve is: I want to use the largest possible subset of CPU
> features available uniformly throughout my datacenter.
>
> QEMU and libvirt have single-node views so they cannot solve this
> problem on their own. Whether that subset is a generic Westmere-like
> processor that never existed IRL or a specific Westmere processor seems
> like a decision that should be made by the datacenter-level manager with
> the node-level views.
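
As an aside, libvirt already gives such a manager a primitive for this
computation: virsh cpu-baseline takes host CPU descriptions gathered from
each node and returns the largest model+feature set they all support. A
sketch, with the file name made up for illustration:

    # all-hosts-cpus.xml: the <cpu> elements collected from
    # "virsh capabilities" on every node in the datacenter
    virsh cpu-baseline all-hosts-cpus.xml
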
>
> If I have a homogeneous environment of Xeon 7540s, I would probably like
> to see a Xeon 7540 in my guest. Doesn't it make sense to enable the
> management tool to make this decision?

Literally, or in capabilities?
Literally would mean you want to allow the CPU name to be passed through
and exposed to the guest?
If in capabilities, how would it differ from choosing the correct "cpu
family"? It wouldn't really be identical either way (say, the number of
cores/sockets, and no VT for the time being).

oVirt allows setting a "cpu family" per cluster; assume that tomorrow it
could do this in an even more granular way.
It could also do it automatically, based on the subset of flags present on
all hosts - but would it really make sense to expose a set of capabilities
that doesn't exist in the real world (the real world being, IIUC, pretty
much aligned with the cpu families), rather than one that users understand?
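
For what it's worth, the per-host data such a calculation would start from
is already exposed by libvirt: the <cpu> block under <host> in the
capabilities XML carries the detected model, topology, and feature flags.
A trimmed example (values here are just illustrative):

    virsh capabilities

    <host>
      <cpu>
        <arch>x86_64</arch>
        <model>Westmere</model>
        <vendor>Intel</vendor>
        <topology sockets='1' cores='4' threads='2'/>
        <feature name='aes'/>
        <feature name='rdtscp'/>
        ...
      </cpu>
    </host>

virsh cpu-compare and virsh cpu-baseline consume CPU descriptions in this
same format.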
