Re: [libvirt] [PATCH v3] Add support for Veritas HyperScale (VxHS) block device protocol
- From: John Ferlan <jferlan redhat com>
- To: Peter Krempa <pkrempa redhat com>
- Cc: Venkatesha Mg veritas com, ashish mittal veritas com, Pranav Sarwate veritas com, Nitin Jerath veritas com, libvir-list redhat com, Abhishek Kane veritas com, Suraj Singh veritas com, Ketan Nilangekar veritas com, Abhijit Dey veritas com, pbonzini redhat com, Buddhi Madhav veritas com, Ashish Mittal <ashmit602 gmail com>
- Subject: Re: [libvirt] [PATCH v3] Add support for Veritas HyperScale (VxHS) block device protocol
- Date: Wed, 1 Feb 2017 11:13:16 -0500
On 02/01/2017 04:39 AM, Peter Krempa wrote:
> On Wed, Jan 25, 2017 at 10:59:59 -0500, John Ferlan wrote:
>> On 01/19/2017 09:21 PM, Ashish Mittal wrote:
>>> Sample XML for a vxhs vdisk is as follows:
>>> <disk type='network' device='disk'>
>>>   <driver name='qemu' type='raw' cache='none'/>
>>>   <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
>>>     <host name='192.168.0.1' port='9999'/>
>>>   </source>
>>>   <target dev='vda' bus='virtio'/>
>>>   <alias name='virtio-disk0'/>
>>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x04'/>
>>> </disk>
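For context, the modern "file.*" syntax under discussion would map the sample
XML above to a qemu -drive string shaped roughly like this (an illustrative
sketch assembled from the sample values; exact option ordering and drive id
naming may differ in the actual .args test file):

```
-drive file.driver=vxhs,file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\
file.server.host=192.168.0.1,file.server.port=9999,format=raw,if=none,\
id=drive-virtio-disk0,cache=none
```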
>> It's still not really clear how someone knows what to use for the name
>> string. IOW, how would someone know what to supply? Perhaps the libvirt
>> wiki (http://wiki.libvirt.org/page/Main_Page) would be a better place
>> for that than the libvirt docs.
>> I assume that someone using VxHS knows how to get that, but still I find
>> it a "good thing" to be able to have that description somewhere as I can
>> only assume some day it'll come up.
>> But since we don't "require" this for iSCSI, Ceph/RBD, Gluster, etc.
>> it's not something 'required' for this patch. Still something for
>> someone's todo list to get a description on the libvirt wiki. Once you
>> have wiki write access, then you can always keep that data up to date
>> without requiring libvirt patches.
>>> Signed-off-by: Ashish Mittal <Ashish Mittal veritas com>
>>> v2 changelog:
>>> (1) Added code for JSON parsing of a VxHS vdisk.
>>> (2) Added test case to verify JSON parsing.
>>> (3) Added missing switch-case checks for VIR_STORAGE_NET_PROTOCOL_VXHS.
>>> (4) Fixed line wrap in qemuxml2argv-disk-drive-network-vxhs.args.
>>> v3 changelog:
>>> (1) Implemented the modern syntax for VxHS disk specification.
>>> (2) Changed qemuxml2argvdata VxHS test case to verify the new syntax.
>>> (3) Added a negative test case to check failure when multiple hosts
>>> are specified for a VxHS disk.
>>> docs/formatdomain.html.in | 15 ++++-
>>> docs/schemas/domaincommon.rng | 1 +
>>> src/libxl/libxl_conf.c | 1 +
>>> src/qemu/qemu_command.c | 68 ++++++++++++++++++++++
>>> src/qemu/qemu_driver.c | 3 +
>>> src/qemu/qemu_parse_command.c | 26 +++++++++
>>> src/util/virstoragefile.c | 63 +++++++++++++++++++-
>>> src/util/virstoragefile.h | 1 +
>>> src/xenconfig/xen_xl.c | 1 +
>>> ...xml2argv-disk-drive-network-vxhs-multi-host.xml | 35 +++++++++++
>>> .../qemuxml2argv-disk-drive-network-vxhs.args | 25 ++++++++
>>> .../qemuxml2argv-disk-drive-network-vxhs.xml | 34 +++++++++++
>>> tests/qemuxml2argvtest.c | 2 +
>>> tests/virstoragetest.c | 16 +++++
>>> 14 files changed, 287 insertions(+), 4 deletions(-)
>>> create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs-multi-host.xml
>>> create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
>>> create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml
>> In general things are looking much better - a couple of nits below and 2
>> things to consider/rework...
>> #1. Based on Peter's v2 comments, we don't want to support the
>> older/legacy syntax for VxHS, so it's something that should be removed -
>> although we should check for it being present and fail if found.
>> #2. Is the desire to ever support more than 1 host? If not, then is the
>> "server" syntax you've borrowed from the Gluster code necessary? Could
>> you just go with a single "host" like NBD and SSH? As it relates to the
>> qemu command line I'm not quite as clear. From the example I see in
>> commit id '7b7da9e28', the gluster syntax would have:
>> whereas the VxHS syntax is:
> I think that all protocols while using the JSON or
> JSON-converted-to-commandline syntax should use exactly the same syntax
> so that we don't have to do custom generators for every single storage
> transport somebody invents.
Primarily so it's clear to all...
There are virStorageSourceParseBackingJSONGluster[Host] helpers which
generate the file.server.# output specifically for Gluster.
So are you advocating that the *Host helper be made more generic so it
can be shared with the virStorageSourceParseBackingJSONVXHS code? That
might be painful depending on what each driver supports for network
protocols.
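For reference, those Gluster-specific helpers consume a "json:"
pseudo-protocol backing definition shaped roughly like the following (a
sketch; the dotted file.server.# keys are the flattened command-line form of
this server array):

```
json:{ "file": { "driver": "gluster",
                 "volume": "testvol",
                 "path": "/image.qcow2",
                 "server": [ { "type": "tcp",
                               "host": "example.com",
                               "port": "24007" } ] } }
```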
Or is it more that you'd prefer the qemu_command code to use the
"a:servers" syntax for all protocols, leaving it up to the util code to
create the command line?
Having the 'file.server.host' syntax isn't a problem per se for the VxHS
backend - I was just noting that it didn't seem like it had to be done
that way.
It does seem that there are two models - a multi-host server model and a
single-host model - that could share generic code. Beyond that is the
transport differentiation... I would think that the two existing checks
in *JSONGlusterHost for "!server" and "!socket" would either need to be
made generic (removing "gluster" from the error message) or be moved
into the caller instead. That would allow each backend parser to decide
what it supports (or not).
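To make the caller-owned-checks idea concrete, here's a minimal C sketch
(the types and names are hypothetical stand-ins, not libvirt's actual
virStorageSource API) of a generic one-host parser whose transport checks
carry no protocol name, leaving protocol-specific rejection and error
reporting to each backend's caller:

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Hypothetical, simplified stand-ins for libvirt's storage-source types;
 * illustrative only, not the real libvirt API. */
typedef enum { TRANSPORT_TCP, TRANSPORT_UNIX } transport_type;

typedef struct {
    transport_type transport;
    const char *host;   /* set for tcp transport */
    const char *socket; /* set for unix transport */
} server_spec;

/* Generic one-host parser with no protocol name baked in: it only
 * validates transport-level consistency.  The protocol-specific callers
 * (gluster, vxhs, ...) decide which transports they accept and emit
 * their own error messages, so the "!server" / "!socket" style checks
 * move out of the shared code, as suggested above. */
static int
parse_one_host(const server_spec *spec, char *out, size_t outlen)
{
    switch (spec->transport) {
    case TRANSPORT_TCP:
        if (!spec->host)
            return -1; /* caller reports a protocol-specific error */
        snprintf(out, outlen, "tcp:%s", spec->host);
        return 0;
    case TRANSPORT_UNIX:
        if (!spec->socket)
            return -1;
        snprintf(out, outlen, "unix:%s", spec->socket);
        return 0;
    }
    return -1;
}
```

A VxHS-style caller, which supports only a single TCP host, would then
reject a unix transport (or a second host) itself before ever calling the
shared helper.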