
Re: [libvirt] API Proposal: Block device streaming (V3)



On Fri, May 20, 2011 at 07:42:26AM -0500, Adam Litke wrote:
> Hi Dan, thank you for another round of review.  Stefan and Anthony,
> could you respond to Dan's suggestion of adding qemu events for stream
> completion and errors at the bottom of this message?
> 
> On 05/20/2011 06:07 AM, Daniel P. Berrange wrote:
> > On Wed, May 18, 2011 at 02:08:12PM -0500, Adam Litke wrote:
> >> Version 2 of the block device streaming API diverged
> >> significantly from the original API provided by Qemu.
> >> In an effort to provide an API designed to work well
> >> for the specific task of streaming data to a local
> >> device from its backing store, I am proposing version 3
> >> of the block device streaming API.
> > 
> > Can you clarify what happens upon completion of streaming?
> > 
> > Is the backing store going to automagically go away, or
> > will there need to be a separate API to explicitly remove
> > the (now unnecessary) backing store?
> 
> As soon as an image gains independence from its backing store, qemu
> removes the link to the backing file.

In that case, we'll definitely require an event from the QMP
monitor, to allow us to run the security drivers to revoke
QEMU access to the backing store.

> > I'm still not really a fan of the name 'Stream' in
> > the APIs because it doesn't give any indication of
> > what the effect of the API is.  'Streaming' refers to
> > the manner in which the API operates, rather than what
> > its purpose/goal is.
> > 
> > If the backing store is automagically removed, is this
> > better described as a 'Rebase' command/operation?
> 
> qemu-img already has a 'rebase' command so I would hate to introduce
> confusion by overloading that term here.  How about 'Populate'?

Is this not a similar operation to that command?  IIUC qemu-img
is doing an offline rebase, while this API is doing an online
rebase.


> >> Some noteworthy changes from earlier versions:
> >>
> >> The offset and length parameters to virDomainStream
> >> have been replaced by an iterator type.  Manual
> >> specification of ranges is outside of the scope of
> >> the API and does not add utility.  Instead, an iterator
> >> is used to perform a series of individual operations in
> >> an efficient manner without exposing device format
> >> particulars.
> >>
> >> I have added a new call virDomainBlockStreamAbort()
> >> that can be used to cancel an active stream operation.
> >> This is an improvement over the use of flags to override
> >> the meaning of virDomainBlockStream().
> >>
> >> Block devices are specified by the fully-qualified path
> >> to the source file.  This information is easier for API
> >> users to provide than the block device alias.
> >>
> >> All API functions return an int.  Rather than returning
> >> an unsigned long long, virDomainBlockStream() updates
> >> the iterator (which is passed in by reference) in
> >> place.
> >>
> >> ----- API V3 -----
> >>
> >> /*
> >>  * Disk Streaming
> >>  */
> >> typedef enum {
> >>     /*
> >>      * If set, virDomainBlockStream() will begin streaming the entire device
> >>      * continuously.  The status can be checked and the operation aborted by using 
> >>      * the functions virDomainGetBlockStreamInfo() and virDomainBlockStreamAbort().
> >>      */
> >>     VIR_DOMAIN_BLOCK_STREAM_CONTINUOUS = 1,
> >> } virDomainStreamDiskFlags;
> >>
> >> #define VIR_DOMAIN_BLOCK_STREAM_PATH_MAX 1024
> >>
> >> /* An iterator for initiating and monitoring block streaming operations */
> >> typedef unsigned long long virDomainBlockStreamCursor;
> >>
> >> typedef struct _virDomainBlockStreamInfo virDomainBlockStreamInfo;
> >> struct _virDomainBlockStreamInfo {
> >>     char path[VIR_DOMAIN_BLOCK_STREAM_PATH_MAX];
> > 
> > We really shouldn't be having hardcoded path lengths
> > in the public API. IMHO this field should be dropped
> > and the StreamInfo API should have the path passed in
> 
> I'm okay with changing this as you suggest, but the effect will be that
> we lose a way to query for active streams.  I guess an API user could
> separately determine the paths of all connected disks and perform
> virDomainBlockStreamInfo() on each path.
> 
> >>     /*
> >>      * The following fields provide an indication of streaming progress.  @cur
> >>      * indicates the current position and will be between 0 and @end.  @end is
> >>      * the final cursor position for this operation and represents completion.  To
> >>      * approximate progress, divide @cur by @end.
> >>      */
> >>     virDomainBlockStreamCursor cur;
> >>     virDomainBlockStreamCursor end;
> >> };
> >> typedef virDomainBlockStreamInfo *virDomainBlockStreamInfoPtr;
> >>
> >> /**
> >>  * virDomainBlockStream:
> >>  * @dom: pointer to domain object
> >>  * @path: Fully-qualified filename of disk 
> >>  * @cursor: pointer to a virDomainBlockStreamCursor, or NULL
> >>  * @flags: One of virDomainStreamDiskFlags, or zero
> >>  *
> >>  * Initiate a streaming operation on the named block device associated with the
> >>  * given domain.  If VIR_DOMAIN_BLOCK_STREAM_CONTINUOUS is specified, the entire
> >>  * device will be streamed in the background.  Otherwise, the value stored in
> >>  * cursor will be used to stream an increment.
> >>  *
> >>  * Returns -1 in case of failure, 0 when successful.  On success and when @flags
> >>  * does not contain VIR_DOMAIN_BLOCK_STREAM_CONTINUOUS, the iterator @cursor will
> >>  * be updated to the proper value for use in a subsequent call.
> >>  */
> >> int                 virDomainBlockStream(virDomainPtr dom,
> >>                                          const char *path,
> >>                                          virDomainBlockStreamCursor *cursor,
> >>                                          unsigned int flags);
> >>
> >> /**
> >>  * virDomainBlockStreamAbort:
> >>  * @dom: pointer to domain object
> >>  * @path: fully-qualified filename of disk
> >>  * @flags: currently unused, for future extension
> >>  *
> >>  * Cancel a disk streaming operation that has been previously started by a call to
> >>  * virDomainBlockStream() with the VIR_DOMAIN_BLOCK_STREAM_CONTINUOUS flag.
> >>  *
> >>  * Returns -1 in case of failure, 0 when successful.
> >>  */
> >> int                 virDomainBlockStreamAbort(virDomainPtr dom,
> >>                                               const char *path,
> >>                                               unsigned int flags);
> >>
> >> /**
> >>  * virDomainGetBlockStreamInfo:
> >>  * @dom: pointer to domain object
> >>  * @info: pointer to an array of virDomainBlockStreamInfo objects
> >>  * @nr_results: Number of results requested
> >>  * @flags: currently unused, for future extension
> >>  *
> >>  * Request information on active block streaming operations for the given domain.
> >>  * The caller must supply an allocated array of virDomainBlockStreamInfo objects.
> >>  * Information about up to @nr_results active streams will be stored in @info.
> >>  *
> >>  * Returns -1 in case of failure, or the number of results that were stored.
> >>  */
> >> int                 virDomainGetBlockStreamInfo(virDomainPtr dom,
> >>                                                 virDomainBlockStreamInfoPtr info,
> >>                                                 unsigned int nr_results,
> >>                                                 unsigned int flags);
> > 
> > This really needs to be:
> > 
> >    int virDomainGetBlockStreamInfo(virDomainPtr dom,
> >                                    const char *path,
> >                                    virDomainBlockStreamInfoPtr info,
> >                                    unsigned int flags);
> > 
> > If the apps want info on > 1 path, then call it multiple times with each
> > path.
> 
> Sure.  I don't have a problem changing it to this.  Is there an easy way
> for an API user to enumerate the disk devices that are connected to the
> domain?

Well, 'easy' depends on what language/APIs you're using. If your
language provides easy XPath APIs, then it is easy ;-P

eg,

   char *xml = virDomainGetXMLDesc(dom, 0);
   char **disks;
   /* doXPathQuery is a made-up helper that returns all matching nodes */
   size_t i, ndisks = doXPathQuery(xml,
       "/domain/devices/disk/source/@file | /domain/devices/disk/source/@dev",
       &disks);
   for (i = 0 ; i < ndisks ; i++) {
      virDomainBlockStreamInfo info;
      virDomainGetBlockStreamInfo(dom, disks[i], &info, 0);
   }
   free(xml);

Most of the time of course, the application querying the status
will already know the disks it cares about, because it was the
one who initiated the streaming operation in the first place.

Separately from this though, one of the things we're looking at
for the API roadmap is a way to provide a programmatic object
based API for querying guest configuration, as an alternative
to XML. So I don't think the issue of how easy it is to get
the list of disks, should be a limiting factor for picking the
best design for this API. We should just assume that we have
an 'easy' API for that.

> 
> > In the VIR_DOMAIN_BLOCK_STREAM_CONTINUOUS case, I think it is desirable
> > to have an async event emitted when the streaming is completed, so we
> > don't force apps to poll on the GetBlockStreamInfo if they only care
> > about completion. The event would also allow apps to determine if the
> > background stream job failed, which isn't something that's exposed in
> > any of these APis.
> > 
> > eg, in virDomainEventID enum, add
> > 
> >    VIR_DOMAIN_EVENT_ID_BLOCK_STREAM = 7,
> > 
> >    typedef enum {
> >       VIR_DOMAIN_BLOCK_STREAM_COMPLETED,
> >       VIR_DOMAIN_BLOCK_STREAM_FAILED,
> >    } virConnectDomainEventBlockStreamStatus;
> > 
> >   typedef void (*virConnectDomainEventBlockStreamCallback)(virConnectPtr conn,
> >                                                            virDomainPtr dom,
> >                                                            const char *path,
> >                                                            int status);
> > 
> > Internally this would have to be wired upto an event QEMU emits. If
> > completion automatically removes the backing store from the block device,
> > then we need the async event from QEMU regardless, since we'll need to
> > perform SELinux relabelling to deny the VM future access to the now
> > unused backing store.
> 
> This seems like a good idea to me.  Stefan and Anthony:  What do you
> guys think about adding these events to Qemu?  Is this something we
> could easily incorporate into the Qemu API?

We'd want it at the QMP level, basically.
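
For illustration, such a QMP async event might look something like this
on the wire. The event name, fields, and values here are entirely made
up for the sake of the sketch, not an existing QEMU event:

```json
{"event": "BLOCK_STREAM_COMPLETED",
 "timestamp": {"seconds": 1305901626, "microseconds": 0},
 "data": {"device": "drive-virtio-disk0",
          "path": "/var/lib/libvirt/images/guest.img",
          "status": "completed"}}
```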

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

