
Re: [libvirt] Different approach to locking



On Thu, Nov 11, 2010 at 10:49:27AM +0000, Richard W.M. Jones wrote:
> 
> I get the feeling that the locking manager is meant to be a
> libvirt-internal API.  I'll throw out this idea instead: How about
> making the concept of "reserving" a VM into a public libvirt API?
> 
>   /* Reserve 'dom'.
>    *
>    * flags:
>    *   VIR_DOMAIN_RESERVE_READONLY: reserve for read-only access
>    */
>   int virDomainReserve (virDomainPtr dom, unsigned flags);
> 
>   /* Drop reservation of 'dom'.
>    */
>   int virDomainUnReserve (virDomainPtr dom);
> 
> The reservation would also be dropped when the current libvirt
> connection is closed.
> 
> libvirt/libvirtd would acquire the locks on our behalf.

Unfortunately this can't work, because certain APIs in the lock plugin
are required to be run between the fork and the exec, so that the lock
manager plugin records the PID of the hypervisor process at the time of
lock acquisition.

There are also flexibility problems with an API that only offers a single
global access mode for the whole VM. Consider this guest configuration:

<domain type='kvm'>
  <name>death</name>
  <uuid>c7a2edbd-edaf-9455-926a-d65c16db1809</uuid>
  ..snip..
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/berrange/death.qcow'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/berrange/demo.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/home/berrange/bigdata.img'/>
      <target dev='sda' bus='scsi'/>
      <shareable/>
    </disk>
  </devices>
</domain>

In guestfish, if you did this you'd be protected:

  add-domain death

But if you did this you would not be protected:

  add-disk /home/berrange/death.qcow
  add-disk /home/berrange/demo.iso
  add-disk /home/berrange/bigdata.img

It gets even more fun if you do a mix of both...

  add-domain death
  add-disk /home/berrange/another.img

This also doesn't give libguestfs the opportunity to apply a per-disk
locking mode different from the one implied by the <readonly/> and
<shareable/> flags. eg, 'add-domain death' would fail if there were another
VM running with demo.iso or bigdata.img, but the person using libguestfs
likely only cares about accessing & changing the main death.qcow image
file.

Also note that libvirt locking will not always be solely for disk
images. It is likely that we'll apply the locking to all other devices
in the XML (console, parallel, serial, channel, filesystem, net)
which have a backend that uses a file or host device. 

If you wanted to avoid directly using the libvirt lock manager plugins,
and use a public libvirt API, then the other option is for libguestfs to
create a custom XML with all the requested disks, and boot a transient
libvirt guest with the libguestfs kernel/initrd/appliance and attach to
the guestfsd inside that VM. Then the locking would all 'just work' since
libguestfs would be a normal libvirt client application.
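As a rough sketch of what that custom XML might look like (the appliance
paths and domain name below are made up for illustration; the real
libguestfs appliance location and kernel command line would differ):

```xml
<domain type='kvm'>
  <name>guestfs-appliance</name>
  <memory>524288</memory>
  <os>
    <type>hvm</type>
    <kernel>/usr/lib64/guestfs/kernel</kernel>
    <initrd>/usr/lib64/guestfs/initrd</initrd>
    <cmdline>guestfs_channel=/dev/ttyS1</cmdline>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/berrange/death.qcow'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- each further add-disk request becomes another <disk> element -->
  </devices>
</domain>
```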

Regards,
Daniel
-- 
|: Red Hat, Engineering, London    -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
|: http://autobuild.org        -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

