Re: [libvirt] libguestfs integration: rich disk access for libvirt applications
- From: "Daniel P. Berrange" <berrange redhat com>
- To: Stefan Hajnoczi <stefanha gmail com>
- Cc: Zhi Hui Li <zhihuili linux vnet ibm com>, Robert Wang <wdongxu linux vnet ibm com>, libvir-list redhat com
- Subject: Re: [libvirt] libguestfs integration: rich disk access for libvirt applications
- Date: Tue, 27 Sep 2011 12:20:31 +0100
On Tue, Sep 27, 2011 at 10:10:00AM +0100, Stefan Hajnoczi wrote:
> Libguestfs provides very nice functionality for applications that need
> to work with disk images. This includes provisioning applications that
> set up or customize disk images. It also includes backup applications
> that want to look inside disk image snapshots - both at the block and
> file level.
> What's missing for libguestfs to fill this role is integration that
> allows libvirt and libguestfs to work together with the same
> network-transparent remote API approach.
> In the past we have discussed remoting libguestfs and Richard
> presented possible approaches:
> Could libvirt provide a secure channel over which the libguestfs
> client that an application is linked against talks to the remote
> daemon? Would libguestfs need some new parameters to connect to a
> remote libvirtd and create its appliance VM?
> In terms of application use cases, I'm thinking along the lines of
> using libvirt to enumerate storage volumes and then switching to
> libguestfs to actually access the storage volume remotely. Is there
> enough information exposed by libvirt today to switch over to
> libguestfs and point it at the storage volume/image file?
IMHO, the overall goal for integration is that anytime you have a
libvirt connection, you should be able to use libguestfs to access
storage volumes or guests without further remote configuration.
The primary way to achieve this is if all communication takes place
over the existing libvirt data transport, and not any out of band
transport (NBD, NFS, SSH, whatever).
The next obvious question is which side of the libvirt connection
should the guestfs daemon/appliance be running.
There are 3 main scenarios wrt guests:
1. Guest is running and has a guestfs channel already present
2. Guest is running and does not have a guestfs channel present
3. Guest is not running.
In case 1, obviously the daemon is running server side in the main guest.
In case 2, you could either boot the guestfs appliance readonly,
or hotplug a virtio channel for guestfs into the running guest
to get readwrite access (assumes the guest OS has the necessary
magic to auto-launch the daemon). The latter is obviously server
side again, the former could be client or server side.
In case 3, you need to boot the appliance. This could be client
or server side.
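To make "guestfs channel" concrete: in the guest XML it would be a
virtio serial <channel> device along these lines (the socket path here
is illustrative; org.libguestfs.channel.0 is the channel name
libguestfs conventionally uses):

```xml
<channel type='unix'>
  <source mode='bind' path='/var/run/libvirt/qemu/guest1.guestfsd.sock'/>
  <target type='virtio' name='org.libguestfs.channel.0'/>
</channel>
```

In case 2 above, a device element of this shape is what would be
hotplugged into the running guest.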
To run the appliance server side will require that libvirtd gains
the ability to spawn a guest with the appliance, and tunnel the
host side of the virtio channel back over the libvirt connection
to the client. On the client side, either libguestfs needs to have
a set of pluggable I/O functions, which we can redirect to use the
virStreamPtr APIs, or libvirt would have to turn its stream back
into a UNIX socket. The latter is doable without any more libguestfs
changes, but introduces extra I/O copying, while the former is most
efficient, but requires libguestfs changes.
To run the appliance client side will require that libvirtd gains
the ability to tunnel disk access over the libvirt connection to
the client, which then runs the appliance somehow. On the client
side, either QEMU needs a block driver with a set of pluggable I/O
functions, which we can redirect to use libvirt APIs, or libvirt
would have to turn its stream back into a UNIX socket running NBD
protocol, or create a userspace block device (FUSE but for block
devs). Again the latter is doable without any QEMU changes, but
has extra I/O copying, while the former is most efficient.
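To make the "block driver with a set of pluggable I/O functions" idea
concrete, here is a toy sketch in Python (all names are mine, not
QEMU's or libvirt's): the driver routes every read/write through
caller-supplied callbacks, which in the real design would wrap
virStream-based RPCs back to the server holding the disk.

```python
class PluggableBlockDriver:
    """Toy block device whose I/O is delegated to caller-supplied
    callbacks - the shape a QEMU block driver would need so that
    reads and writes could be redirected over a libvirt connection."""

    def __init__(self, read_cb, write_cb, size_cb):
        self.read_cb = read_cb      # (offset, length) -> bytes
        self.write_cb = write_cb    # (offset, data) -> None
        self.size_cb = size_cb      # () -> int

    def pread(self, offset, length):
        assert 0 <= offset and offset + length <= self.size_cb()
        return self.read_cb(offset, length)

    def pwrite(self, offset, data):
        assert 0 <= offset and offset + len(data) <= self.size_cb()
        self.write_cb(offset, data)

# In-memory backend standing in for the remote disk on the far
# side of the libvirt connection.
image = bytearray(1024)
drv = PluggableBlockDriver(
    read_cb=lambda off, ln: bytes(image[off:off + ln]),
    write_cb=lambda off, data: image.__setitem__(
        slice(off, off + len(data)), data),
    size_cb=lambda: len(image),
)
```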
Finally, there are some other things worth considering where libguestfs
currently has functionality gaps wrt libvirt.
- Integration with the libvirt lock manager infrastructure to
prevent concurrent R/W guest disk access
- Integration with the libvirt secret manager infrastructure
to allow access to encrypted QCow2 disks
- Integration with sVirt to ensure the appliance runs in a
strictly confined context
- Hotplug to allow extra disks to be inserted/removed to/from
a running libguestfs appliance
Running the appliance server side, spawned by libvirt, would allow
all of those points to be satisfied.
There is a final problem in that not all hypervisors feature libvirtd,
eg VMware ESX, Hyper-V and VirtualBox. Some of them might expose guest
disks via HTTP or some other protocols that QEMU might be able to
access directly. Others though would require that you setup some kind
of shared filesystem (eg mount NFS) to access them. Others might let
you SSH in (with suitable credentials) which lets you use FUSE SSHFS.
With such hypervisors it is more or less impossible to satisfy my
initial requirement that 'any time you have a libvirt connection
you can run libguestfs without further admin configuration'. Probably
the best you can do here is to ensure that there is one API you use
for accessing guest disks.
One other point worth mentioning is that libguestfs.so does not want
to directly link to libvirt.so, and vice-versa, to ensure we both
avoid pulling in major new dependency chains for all users.
Similarly, if at all possible, any existing libguestfs application
would like to be able to 'just work' with any libvirt integration
without further code changes.
Now, what do I think we should do? Personally I would really like to
have libguestfs be integrated with the lock manager, secret manager
and sVirt infrastructure in libvirt. The only real practical way for
this to happen is if the appliance runs server side, spawned via the
normal QEMU driver guest startup code in libvirt. So I discount any
idea of running the appliance client side & tunnelling block devices
over libvirt. Also note that different solutions may be required for
hypervisors without libvirt. I am ignoring such hypervisors for now.
If I were ignoring the requirement that libguestfs does not link to
libvirt, then you could quite likely make all this happen with only
a simple additional API in libvirt. We need an API to let a client
open a connection to a <channel> device, using the virStreamPtr APIs.
If the guests were not running, libguestfs would use virDomainCreate
to spawn a transient, auto-destroy guest, with a custom kernel/initrd
that runs the appliance, and an additional <channel> device, but with
all other parts of the guest XML unchanged. This would ensure all the
lock manager, sVirt and secret stuff 'just works'. If the guest is
already running, libguestfs would just query the XML to find the
<channel> device configuration. Then it could just use a new API
like virDomainOpenChannel(virStreamPtr, const char *channelid) to
get a stream to talk to the guestfs daemon with.
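The flow above, sketched with stubs in Python; note that
virDomainOpenChannel is the API *proposed* in this mail, not an
existing libvirt call, and the channel name and helper objects here
are stand-ins:

```python
# All names below are illustrative stubs for the proposed design.

GUESTFS_CHANNEL = "org.libguestfs.channel.0"  # assumed channel name

class FakeStream:
    """Stand-in for a virStreamPtr carrying guestfsd traffic."""
    def __init__(self, channel_name):
        self.channel_name = channel_name

def virDomainOpenChannel(dom, channelid):
    # The real implementation would look up the <channel> in the
    # domain XML and attach a virStreamPtr to its host-side socket.
    assert channelid in dom["channels"], "no such channel"
    return FakeStream(channelid)

def connect_guestfs(dom):
    # Case 1: the guest already runs with a guestfs channel, so just
    # open a stream to it.  In case 3 the caller would first boot a
    # transient auto-destroy appliance guest, then do the same.
    return virDomainOpenChannel(dom, GUESTFS_CHANNEL)
```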
If libguestfs were just wanting to use a random disk device, not yet
associated with any guest, then it again would use virDomainCreate
to spawn a transient, auto-destroy guest, with an XML config it fully
controls.
This would all make it possible for any guestfs based application
to work with libvirt without any app code changes.
I assume it is still the case, however, that libguestfs does *not*
want to link directly to libvirt and use it for guest creation.
Thus my idea is to find an alternative solution that gets as close
to that "ideal" setup as possible.
To do this I would create what I call a bridging library, to be
called libvirt-guestfs.so.
This would have a handful of API calls for handling the initial
creation/startup of the appliance & access to the vmchannel device
only, delegating everything else to normal libguestfs APIs.
- int virDomainStartGuestFS(virDomainPtr dom, int flags)
If the guest 'dom' is not already running, boot the guest
pointing it at the libguestfs kernel/initrd + vmchannel
- guestfs_h *virDomainOpenGuestFS(virDomainPtr dom)
Server side, find the vmchannel device for the guest, open a
stream for it. On the client side the stream would somehow be
proxied to a UNIX domain socket. We then call the libguestfs
APIs necessary to attach the external UNIX domain socket,
create the guestfs_h handle and return it.
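A sketch of how an application would drive the proposed bridge; both
entry points are proposals from this mail, stubbed out here, and the
handle object and socket path are stand-ins:

```python
# Stub sketch of the proposed libvirt-guestfs.so bridge.

class GuestFSHandle:
    """Stand-in for a guestfs_h attached to an external socket."""
    def __init__(self, socket_path):
        self.socket_path = socket_path
        self.ready = True

def virDomainStartGuestFS(dom, flags=0):
    # Boot the guest with the appliance kernel/initrd + vmchannel
    # if it is not already running (proposed API, stubbed).
    dom.setdefault("running", True)
    return 0

def virDomainOpenGuestFS(dom):
    # Find the vmchannel, proxy it to a local UNIX socket, then
    # attach a guestfs handle to that socket (proposed API, stubbed).
    assert dom.get("running"), "guest must be started first"
    return GuestFSHandle("/tmp/guestfsd.sock")  # illustrative path

# Application flow: obtain the handle via the bridge, then use all
# other libguestfs APIs on it as normal.
dom = {}
if virDomainStartGuestFS(dom) == 0:
    g = virDomainOpenGuestFS(dom)
```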
With those two APIs (possibly 1 or 2 more), an application wanting
to use an integrated libguestfs+libvirt, would use libvirt-guestfs.so
to obtain their guestfs_h * handle, and then use all the other
libguestfs.so APIs as normal. This avoids having to wrap every
single libguestfs API in libvirt. For apps like virt-manager this
would easily be workable; other existing libguestfs apps would however
need to have some small changes in their initial connection setup
code to optionally use libvirt-guestfs.so
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|