Re: [libvirt] [Qemu-devel] Re: Libvirt debug API
- From: Avi Kivity <avi redhat com>
- To: Anthony Liguori <anthony codemonkey ws>
- Cc: Libvirt <libvir-list redhat com>, Jiri Denemark <jdenemar redhat com>, qemu-devel nongnu org
- Subject: Re: [libvirt] [Qemu-devel] Re: Libvirt debug API
- Date: Mon, 26 Apr 2010 08:56:35 +0300
On 04/26/2010 04:53 AM, Anthony Liguori wrote:
On 04/25/2010 06:51 AM, Avi Kivity wrote:
It depends on what things you think are important. A lot of
libvirt's complexity is based on the fact that it uses a daemon and
needs to deal with the security implications of that. You don't
need explicit labelling if you don't use a daemon.
I don't follow. If you have multiple guests that you want off each
other's turf, you have to label their resources, either statically or
dynamically. How is that related to a daemon being present?
Because libvirt has to perform this labelling because it loses the
original user's security context.
If you invoke qemu with the original user's credentials that launched
the guest, then you don't need to do anything special with respect to
IOW, libvirt does not run guests as separate users which is why it
needs to deal with security in the first place.
What if one user has multiple guests? Isolation is still needed.
One user per guest does not satisfy some security requirements. The 'M'
in SELinux's MAC stands for mandatory, which means that the entities
being secured can't leak information even if they want to (scenario: G1
breaks into qemu and chmods its files; G2 breaks into qemu and reads them).
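The isolation Avi describes here is what sVirt's dynamic mode provides: each guest's qemu process gets a unique pair of MCS categories, so even a compromised qemu that chmods its own files cannot make them readable to another guest's qemu. A minimal sketch of the labelling idea (the label format follows sVirt's `svirt_t:s0:cA,cB` convention; the function name and defaults are illustrative, not libvirt's actual code):

```python
import random

def svirt_label(base="system_u:system_r:svirt_t", sens="s0", ncat=1024):
    """Build an sVirt-style process label with two random MCS categories.

    Illustrative only: the real sVirt driver does more, e.g. it also
    labels the guest's image files with a matching svirt_image_t context.
    """
    lo, hi = sorted(random.sample(range(ncat), 2))  # two distinct categories
    return f"{base}:{sens}:c{lo},c{hi}"
```

Because the kernel enforces the category pair, files labelled for one guest (say c392,c662) stay unreadable to a second guest's process labelled with a different pair, regardless of what the file permission bits say.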
This is really the qemu model (as opposed to the xend model).
(and the qemud model).
And I've said in the past that I don't like the idea of a qemud :-)
I must have missed it. Why not? Every other hypervisor has a central
In theory, it does support this with the session urls but they are
currently second-class citizens in libvirt. The remote dispatch
also adds a fair bit of complexity and at least for the use-cases
I'm interested in, it's not an important feature.
If libvirt needs a local wrapper for interesting use cases, then it
has failed. You can't have a local wrapper with the esx driver, for
This is off-topic, but can you detail why you don't want remote
dispatch (I assume we're talking about a multiple node deployment).
Because there are dozens of remote management APIs, and they all have a
concept of agents that run on the end nodes. When fitting
virtualization management into an existing management infrastructure,
you are always going to use a local API.
When you manage esx, do you deploy an agent? I thought it was all done
via their remote APIs.
Every typical virtualization use will eventually grow some
non-typical requirements. If libvirt explicitly refuses to support
qemu features, I don't see how we can recommend it: even if it
satisfies a user's requirements today, what about tomorrow? What
about future qemu features, will they be exposed or not?
If that is the case then we should develop qemud (which libvirt and
other apps can use).
(even if it isn't the case I think qemud is a good idea)
Yeah, that's where I'm at. I'd eventually like libvirt to use our
provided API and I can see where it would add value to the stack (by
doing things like storage and network management).
We do provide an API, QMP, and libvirt uses it?
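For reference, QMP is line-delimited JSON over a socket: the client reads a greeting, sends qmp_capabilities, then issues commands, and qemu replies with a "return" or "error" object. A small sketch of the command framing (assumes qemu was started with something like `-qmp unix:/tmp/qmp.sock,server,nowait`; the helper name is mine, not part of QMP):

```python
import json

def qmp_frame(command, **arguments):
    """Serialize one QMP command as a single JSON line.

    Wire format: {"execute": <name>, "arguments": {...}}.
    """
    msg = {"execute": command}
    if arguments:
        msg["arguments"] = arguments
    return json.dumps(msg) + "\n"

# Typical session order (after reading the greeting):
#   qmp_frame("qmp_capabilities")
#   qmp_frame("query-status")
```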
That's not what the libvirt community wants to do. We're very
biased. We've made decisions about how features should be exposed and
what features should be included. We want all of those features
exposed exactly how we've implemented them because we think it's the
I'm not sure there's an obvious way forward unless we decide that
there are going to be two ways to interact with qemu. One way is
through the libvirt world-view and the other is through a more qemu
centric view. The problem then becomes allowing those two models to
co-exist happily together.
I don't think there's a point in managing qemu through libvirt and
directly in parallel. It means a user has to learn both APIs, and
for every operation they need to check both to see what's the best
way of exploiting the feature. There will invariably be some friction.
Layers need to stack on top of each other, not live side by side or
bypass each other.
I agree with you theoretically but practically, I think it's immensely
useful as a stop-gap.
Sure. But please let's not start being clever with transactions and
atomic operations and stuff; it has to come with a label that says: if
you're using this, then something is wrong.
The alternative is to get libvirt to just act as a thin layer to
expose qemu features directly. But honestly, what's the point of
libvirt if they did that?
For most hypervisors, that's exactly what libvirt does. For Xen, it
also bypasses Xend and the hypervisor's API, but it shouldn't really.
Historically, xend was so incredibly slow (especially for frequent
statistics collection) that it was a necessity.
Ah, reimplement rather than fix.
Qemu is special due to the nonexistence of qemud.
Why is sVirt implemented in libvirt? It's not the logical place for
it; rather, the logical place doesn't exist.
sVirt is not just implemented in libvirt. libvirt implements a
mechanism to set the context of a given domain and dynamically label
its resources to isolate it.
The reason it has to assign a context to a given domain is that all
domains are launched from the same security context (the libvirtd
context) as the original user's context (the consumer of the libvirt
API) has been lost via the domain socket interface.
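On "the original user's context has been lost via the domain socket interface": a daemon can in fact recover the connecting peer's uid on Linux with SO_PEERCRED (and the SELinux context with SO_PEERSEC), but any qemu it then forks still runs in the daemon's own context unless it is explicitly relabelled, which is exactly the job sVirt does. A Linux-only sketch of peer-credential lookup:

```python
import os
import socket
import struct

def peer_uid(conn):
    """Return the uid of the process at the other end of a UNIX socket.

    Linux-specific: SO_PEERCRED yields struct ucred {pid, uid, gid},
    three native ints.
    """
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    pid, uid, gid = struct.unpack("3i", creds)
    return uid

# Example: both ends of a socketpair belong to this same process.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
```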
If you used the /session URL, then the domain would have the security
context of whoever created the guest, which means that dynamic
labelling of the resources wouldn't be necessary (you would just do
This is certainly a more secure model and it's a feature of qemu that
I really wish didn't get lost in libvirt. Again, /session can do this
too but right now, /session really isn't usable in libvirt for qemu.
That's wrong for three reasons. First, selinux is not a uid replacement
(if it were, libvirt could just suid $random_user before launching qemu).
Second, a single user's guests should be protected from each other.
Third, in many deployments, the guest's owner isn't logged in to supply
the credentials, it's system management that launches the guests.
There's also the case of resources that can't be permanently chowned or
assigned a security label, like disk volumes or assignable devices.
Do not meddle in the internals of kernels, for they are subtle and quick to panic.