
Re: [et-mgmt-tools] [RFC] virt-install: remote guest creation

Michael DeHaan wrote:
> Cole Robinson wrote:
>> I've taken a stab at getting remote guest creation up and running
>> for virt-install. Most of the existing code translates well to the
>> remote case, but the main issue is storage: how does the user tell
>> us where to create and find existing storage/media, and how can we
>> usefully validate this info. The libvirt storage API is the lower
>> level mechanism that allows this fun stuff to happen; it's really
>> just a matter of choosing a sane interface for it all.
>> The two interface problems we have are:
>> - Changes to VirtualDisk to handle storage apis
>> - Changes to virt-install cli to allow specifying storage info
>> For VirtualDisk, I added two options
>>    - volobj     : a libvirt virStorageVol instance
>>    - volinstall : a virtinst StorageVolume instance
> Do you have examples of what this might look like for VirtualDisk?   I'm 
> interested in teaching koan how to install on remote hosts.

I've attached a pretty ugly script I was using to test this stuff
initially. It has hardcoded values specific to my machine, so it
won't work if you run it as-is, but it covers both of the above
cases.

Please read my comments below regarding libvirt storage, though.
>> If the user wants the VirtualDisk to use existing storage, they
>> will need to query libvirt for the virStorageVol and pass this
>> to the VirtualDisk, which will take care of the rest.
> Basically the use cases I care about are:
> Install to a specific path and/or filename
> Install to an existing partition
> Install to a new partition in an existing LVM volume group.
> As koan needed to do this before the storage stuff (IIRC) I have code in 
> koan to manage LVM.    I'll need to keep it around for support of RHEL 
> 5.older and F8-previous, so if the new stuff works relatively the same 
> that would be great.
> Basically if I can pass in a path or LVM volume group name, I'm happy.   
> Needing to grok any XML would make me unhappy :)

There won't be any need to mess with XML here.


>> The next piece is how the interface changes for virt-install.
>> Here are the storage use cases we now have:
>> 1) use existing non-managed (local) disk
>>    - signified by --file /some/real/path
>> 2) create non-managed (local) disk
>>    - signified by --file /some/real/dir/idontexist
> What is "managed vs unmanaged" here?

Managed = libvirt storage APIs. The libvirt storage APIs are how
we learn what storage exists on a remote system, and how we tell
that system to create this file with this format, or that
partition with that size, etc.

The 'pool' and 'volume' terminology is all part of this.


The gist of it is:

A 'pool' is a storage resource that can be carved up into units to
  be used directly by VMs. Pool types include a directory, an NFS
  mount, a filesystem mount (all carved into flat files), an LVM
  volume group, raw disk devices (carved into smaller block
  devices), and iSCSI (for which volume creation isn't supported).

A 'volume' is one of those carved-up units, directly usable as
  storage for a VM.
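As an aside, a pool's type is just an attribute on its XML description, and virt-install digs it out internally (the attached script does the same with virtinst's get_xml_path). Here's a rough stdlib-only sketch of that lookup; the pool XML below is a hand-written approximation of what virStoragePool.XMLDesc(0) returns for a directory pool, not output captured from a live host:

```python
import xml.etree.ElementTree as ET

# Hand-written approximation of a directory pool's XML description
# (similar in shape to virStoragePool.XMLDesc(0) output).
POOL_XML = """
<pool type='dir'>
  <name>default</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
"""

root = ET.fromstring(POOL_XML)
pooltype = root.get("type")             # equivalent of the "/pool/@type" XPath
path = root.findtext("target/path")     # where volumes (flat files) live

print(pooltype)
print(path)
```

Again, users shouldn't ever need to do this by hand; it's only to show what the tools are reading under the hood.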

All this remote guest creation stuff won't 'just work' simply
because the user passes the correct parameters: the remote host
has to be configured in advance to teach libvirt what storage is
available. This can be done on the command line using virsh
pool-create-as, or through virt-manager's wizards (not posted
yet; 95% complete and working, it just hasn't been polished up,
and it depends on some uncommitted virtinst work).

We should probably have libvirt set up a default storage pool
for /var/lib/libvirt/images so that users get a sane
out-of-the-box option.
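For reference, setting up such a pool by hand with virsh might look like the sketch below. The connection URI and hostname are illustrative; pool-define-as plus pool-start creates a persistent pool, whereas pool-create-as would create a transient one:

```shell
# Define a persistent directory pool named 'default' on the remote host
virsh --connect qemu+ssh://remotehost/system \
    pool-define-as default dir --target /var/lib/libvirt/images

# Start it now, and have it start automatically on host boot
virsh --connect qemu+ssh://remotehost/system pool-start default
virsh --connect qemu+ssh://remotehost/system pool-autostart default
```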

- Cole

import virtinst
from virtinst import VirtualDisk as vd
from virtinst.Storage import StoragePool as sp
from virtinst.Storage import StorageVolume as sv

import logging
import sys
import libvirt

# Set debug logging to print to stderr
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(logging.StreamHandler(sys.stderr))

LOCAL_CONN  = "qemu:///system"
REMOTE_CONN = "qemu+ssh://localhost/system"

POOL = "default"
VOL  = "test.img"

# Capacity for the new volume, in bytes
GOODSIZE = 1024 * 1024 * 100


print "open conn"
localconn = libvirt.open(LOCAL_CONN)
print "get pool"
pool = localconn.storagePoolLookupByName(POOL)
print "get vol"
vol = pool.storageVolLookupByName(VOL)

print "get pooltype"
pooltype = virtinst.util.get_xml_path(pool.XMLDesc(0), "/pool/@type")
print "get volclass"
volclass = sp.get_volume_for_pool(pooltype)

print "create volclass instance"
volinst = volclass(name="testguest", pool=pool, capacity=GOODSIZE)

def check_disk(disk):

    print "\nis_conflict_disk:"
    print disk.is_conflict_disk(localconn)

    print "\nis_size_conflict:"
    print disk.is_size_conflict()

    print "\nget_xml_config():"
    print disk.get_xml_config("hda")
    print "\n"

print "\n\nCreating volobj disk:"
d = vd(volobj=vol)
check_disk(d)

print "\n\nCreating volinst disk:"
d = vd(volinstall=volinst)
check_disk(d)
