[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [libvirt] Proposal to add iSCSI support to esx storage driver



2012/8/5 Ata Bohra <ata husain hotmail com>:
>> Date: Sun, 5 Aug 2012 23:04:07 +0200
>> Subject: Re: [libvirt] Proposal to add iSCSI support to esx storage driver
>> From: matthias bolte googlemail com
>> To: ata husain hotmail com
>> CC: libvir-list redhat com
>
>>
>> 2012/8/2 Ata Bohra <ata husain hotmail com>:
>> > Hi All,
>> >
>> > I just want to go over the design I am working on to incorporate
>> > iSCSI support into the libvirt ESX storage driver. The highlights
>> > are:
>> >
>> > Current Implementation
>> > At present esx_storage_driver supports only the VMFS datastore type
>> > and does not provide much leeway to enhance it or to add other
>> > supported storage pools such as iSCSI.
>> >
>> > Proposal
>> > My proposal is:
>> > 1. Split the current code so that esx_storage_driver becomes more
>> > like a facade; this driver will use "backend" drivers to perform the
>> > requested task (such as: esx_storage_backend_iscsi and
>> > esx_storage_backend_vmfs).
>> > 2. Based on the pool type (a lookup can determine the storage pool
>> > type), the base driver then invokes the appropriate backend driver
>> > routine to get the job done.
>> > 3. Each backend driver shall implement the same routines exposed by
>> > esx_storage_driver as needed, but the implementation will be
>> > pertinent to its specific type.
>>
>> I took a quick look at the vSphere API regarding iSCSI, but I'm not
>> sure how it's supposed to work. Do you have a better understanding
>> of this? I'd like to discuss the conceptual part first. How does
>> storage pool and volume listing/creation/destruction work with iSCSI?
>> Does it differ from the current code at all? If it differs, is it so
>> different that we really need this radical split?
>>
>> --
>> Matthias Bolte
>> http://photron.blogspot.com
>
> Hi Matthias,
>
> Below is my understanding as per the iSCSI operations mapping of vSphere
> APIs and libvirt.
>
> Storage Pool  <---> iSCSI target (ESX provides both static and dynamic
> targets; I am targeting only the list of static targets, as they
> guarantee the LUNs exposed on that IQN and cover the corresponding
> dynamic targets too)
>
> Volumes <----> Logical Unit Numbers (LUNs) exposed to the host on that IQN.
>
> It is really important for me to get the mappings listed above right,
> so please let me know if you think they do not map well. (I have based
> my understanding on the brief discussion at
> http://libvirt.org/storage.html.)
>
> iSCSI and VMFS (encapsulating all ESX-supported datastores) operations
> differ significantly, for example:
> 1. iSCSI volumes can be listed but cannot be created/destroyed.
> 2. The iSCSI ESX data objects have no similarity to the datastore-type
> storage data objects (for iSCSI they are HostScsiTopology and ScsiLun;
> I can share the complete mapping if you are interested, please let me
> know).
>
> The current esx_storage_driver.c is written solely for pools/volumes
> that support VMFS datastore operations, BUT a subset of operations can
> be provided for iSCSI storage pools/volumes. It is possible to extend
> the current code to support iSCSI operations, but I think it would
> clutter the code. With that in mind, I proposed splitting the
> pool-specific implementations into backend drivers, so that the esx
> libvirt storage interface driver simply delegates tasks to the backend
> driver.

This sounds good so far. Some remaining questions:

A storage pool has a name and a UUID. Do you already know where to get
this information for an iSCSI target? For example, for the existing
datastore handling I had to use the MD5 sum of its mount path as the
UUID.

Does ESX use the same naming scheme for iSCSI as for the other
datastores: '[datastore-name] path/to/volume/in/datastore.vmdk', which
maps to /path/to/datastore/path/to/volume/in/datastore.vmdk in the VMX
file entry? Or does it use a fully qualified URL that is also used in
the VMX entry? If they use different formats, some storage driver
functions could directly distinguish between the storage pool types and
call into the correct backend without prior probing.

Okay, I think the next step is that you start to implement your scheme
and see how it works out.

-- 
Matthias Bolte
http://photron.blogspot.com
