[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [libvirt] [PATCH 4/4] storage: fix uid_t|gid_t handling on 32 bit Linux


On Monday, 25 February 2013, 15:58:50, Michal Privoznik wrote:
> On 22.02.2013 17:55, Philipp Hahn wrote:
> > uid_t and gid_t are opaque types, ranging from s32 to u32 to u64.
> This one breaks storagevolxml2xmltest:
> TEST: storagevolxml2xmltest
>     ... OK 7) Storage Vol XML-2-XML vol-sheepdog                          
>      ... Offset 283
> Expect [4294967295</owner>
>       <group>4294967295]
> Actual [-1</owner>
>       <group>-1]
>                                                                       ...

The attached 1st patch would change the test case, but since I don't know 
anything about sheepdog, I can't say whether that is really the correct change.

@Sebastian: Is (uid_t)-1 = (u32)-1 special for Sheepdog or was the file just 
created by a previous virsh-dump, which outputs UINT_MAX instead of -1?

There is one more interesting case in storagepoolxml2xmlout/pool-dir.xml, 
which should currently fail on 32-bit builds but does not, because pool-
dumpxml only returns the original pool XML with user/group=-1 (and updates 

> However, the first 3 patches are correct. Even though it's freeze, they are
> pure bugfixes so I've pushed them.


On Monday 25 February 2013 19:46:50 Eric Blake wrote:
>I like the standardized name INT_MAX better than the invented name ALLONE.

Better, but not correct: UINT_MAX is what is required here; INT_MAX is the 
signed maximum, whose MSB is 0.

>> +    double d;
>Eww - why do we need to use a double?

Ask libxml2/libxml/xpath.h:
>struct _xmlXPathObject {
>    xmlXPathObjectType type;
>    xmlNodeSetPtr nodesetval;
>    int boolval;
>    double floatval;
>    xmlChar *stringval;
>    void *user;
>    int index;
>    void *user2;
>    int index2;
>};
The XPath expression "Number(./owner)" must work for both integer and floating-
point values, so "Number()" returns a double.
If you take a look at virXPathLongBase(), you'll see no such "double", because 
the value of "floatval" is immediately cast to a "long". The now used 
virXPathNumber() returns a "double".

>I like the idea of outputting -1 rather than the unsigned all-ones
>value; which means you need to respin this patch to avoid testsuite
>failures where we change the test output.

See attached patch.

>Also, anywhere that the
>parser doesn't accept -1, we need to fix that; accepting -1 as the only
>valid negative value and requiring all other values to be non-negative
>makes sense.

I had a look at the RELAX NG schemas: schemas/storagepool.rng accepts the -1, 
schemas/storagevol.rng does not.

On the other hand, the "-1" does not seem to be documented at 
<http://libvirt.org/formatstorage.html> (which is, by the way, out of date; see the 2nd patch).

The more I think about the -1 problem, the more confused I get: in 
virStorageBackendFileSystemBuild() there is some code which reads back the 
actual owner and group, but my /etc/libvirt/storage/default.xml still contains 
the -1 / 4294967295, which also seems to be returned by "virsh pool-dumpxml 

Personally I would find the following semantics the least confusing:
1. on define, -1 stands for "the current uid/gid" at the future time when 
the pool is started.
2. on create/start, -1 would be my uid/gid for qemu:///session and 
root:libvirt (or whatever) for qemu:///system.
3. on read-back of an active pool, the current uid/gid are returned, so never 
-1. (The target exists, so the current values can be returned; a -1 is not 
very useful for the user here, IMHO.)
4. on read-back of a passive pool, the original uid/gid (including -1) should 
be returned. (The target might not exist, so libvirt can only show the 
intended values.)

I would like to get that clarified before spending more time on a proper v2, 
since I now have to switch to a different task requiring my immediate 
attention.

>>  - Some regression test?
>Did you run 'make check' on your patch?  We already have XML output that
>is affected by the change in output format, and can write the test so
>that we prove that multiple ways of formatting input all get
>canonicalized to the same output.

No, I forgot to do that.

Thank you for your review and comments. Much appreciated.

Philipp Hahn           Open Source Software Engineer      hahn univention de
Univention GmbH        be open.                       fon: +49 421 22 232- 0
Mary-Somerville-Str.1  D-28359 Bremen                 fax: +49 421 22 232-99
diff --git a/docs/storage.html.in b/docs/storage.html.in
index 26a949a..983568c 100644
--- a/docs/storage.html.in
+++ b/docs/storage.html.in
@@ -115,6 +115,7 @@
     <h2><a name="StorageBackendDir">Directory pool</a></h2>
       A pool with a type of <code>dir</code> provides the means to manage
@@ -153,6 +154,7 @@
       <li><code>iso</code>: CDROM disk image format</li>
       <li><code>qcow</code>: QEMU v1 disk image format</li>
       <li><code>qcow2</code>: QEMU v2 disk image format</li>
+      <li><code>qed</code>: QEMU Enhanced Disk disk image format</li>
       <li><code>vmdk</code>: VMWare disk image format</li>
       <li><code>vpc</code>: VirtualPC disk image format</li>
@@ -164,7 +166,6 @@
       either <code>qemu-img</code> or <code>qcow-create</code> tools
       are present. The others are dependent on support of the
       <code>qemu-img</code> tool.
     <h2><a name="StorageBackendFS">Filesystem pool</a></h2>
@@ -184,6 +185,7 @@
           &lt;device path="/dev/VolGroup00/VirtImages"/&gt;
+          &lt;format type="auto"/&gt;
@@ -254,6 +256,7 @@
           &lt;host name="nfs.example.com"/&gt;
           &lt;dir path="/var/lib/virt/images"/&gt;
+          &lt;format type="auto"/&gt;
@@ -328,6 +331,7 @@
           &lt;device path='/dev/sda'/&gt;
+          &lt;format type='gpt'/&gt;
@@ -361,6 +365,9 @@
+      <li>
+        <code>lvm2</code>
+      </li>
       The <code>dos</code> or <code>gpt</code> formats are recommended for
@@ -433,6 +440,7 @@
       The iSCSI volume pool does not use the volume format type element.
     <h2><a name="StorageBackendSCSI">SCSI volume pools</a></h2>
       This provides a pool based on a SCSI HBA. Volumes are preexisting SCSI
@@ -465,6 +473,7 @@
       The SCSI volume pool does not use the volume format type element.
     <h2><a name="StorageBackendMultipath">Multipath pools</a></h2>
       This provides a pool that contains all the multipath devices on the
@@ -497,18 +506,19 @@
       The Multipath volume pool does not use the volume format type element.
     <h2><a name="StorageBackendRBD">RBD pools</a></h2>
       This storage driver provides a pool which contains all RBD
       images in a RADOS pool.  RBD (RADOS Block Device) is part
       of the Ceph distributed storage project.<br/>
-      This backend <i>only</i> supports Qemu with RBD support. Kernel RBD
+      This backend <i>only</i> supports QEMU with RBD support. Kernel RBD
       which exposes RBD devices as block devices in /dev is <i>not</i>
       supported. RBD images created with this storage backend
       can be accessed through kernel RBD if configured manually, but
       this backend does not provide mapping for these images.<br/>
-      Images created with this backend can be attached to Qemu guests
-      when Qemu is build with RBD support (Since Qemu 0.14.0). The
+      Images created with this backend can be attached to QEMU guests
+      when QEMU is built with RBD support (since QEMU 0.14.0). The
       backend supports cephx authentication for communication with the
       Ceph cluster. Storing the cephx authentication key is done with
       the libvirt secret mechanism. The UUID in the example pool input
@@ -552,7 +562,7 @@
     <h3>Example disk attachement</h3>
-    <p>RBD images can be attached to Qemu guests when Qemu is built
+    <p>RBD images can be attached to QEMU guests when QEMU is built
     with RBD support. Information about attaching a RBD image to a
     guest can be found
     at <a href="formatdomain.html#elementsDisks">format domain</a>
@@ -568,6 +578,7 @@
       The RBD pool does not use the volume format type element.
     <h2><a name="StorageBackendSheepdog">Sheepdog pools</a></h2>
       This provides a pool based on a Sheepdog Cluster.
@@ -611,7 +622,7 @@
     <h3>Example disk attachment</h3>
-    <p>Sheepdog images can be attached to Qemu guests.
+    <p>Sheepdog images can be attached to QEMU guests.
     Information about attaching a Sheepdog image to a
     guest can be found
     at the <a href="formatdomain.html#elementsDisks">format domain</a>
